The question of which engine — LCZero (Leela Chess Zero) or Stockfish — is better for analysis is no longer a simple “one is better than the other.”
These two giants represent different philosophies of computer chess: Stockfish is the product of decades of hand-tuned evaluation and ultra-efficient tree search, while LCZero brings deep neural networks and Monte Carlo Tree Search (MCTS) inspired by AlphaZero’s breakthrough. Each has clear strengths and tradeoffs, and the best choice depends on what kind of analysis you want, what hardware you have, and how you like to learn from an engine.
Different engines, different brains
Stockfish uses alpha-beta search with a highly optimized evaluation function and, since 2020, an efficiently updatable neural network (NNUE) that runs fast on ordinary CPUs. The result is an engine that examines millions of concrete variations quickly and gives extremely reliable tactical assessments; it’s the go-to engine for fast, concrete calculation and verification. Stockfish’s development is active and incremental: frequent releases and tuning come from a large community and rigorous distributed testing via the Fishtest framework.
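The core idea behind Stockfish’s search can be sketched in a few lines. This is a minimal, illustrative alpha-beta pruner on a toy game tree; the real engine layers iterative deepening, transposition tables, NNUE evaluation, and dozens of pruning heuristics on top of this skeleton.

```python
def alpha_beta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that cannot
    change the result. `node` is a number (leaf evaluation) or a list of
    child nodes -- a stand-in for a real chess position."""
    if depth == 0 or not isinstance(node, list):
        return node  # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will steer away from this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Toy three-ply tree: max over min over max of the leaf scores -> 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
best = alpha_beta(tree, 3, float("-inf"), float("inf"), True)  # -> 5
```

Pruning never changes the minimax result; it only lets the engine reach the same answer after visiting far fewer nodes, which is why alpha-beta engines can afford such enormous search depths.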
LCZero, by contrast, is a pure self-learning neural-network engine that evaluates positions using a trained network and guides search with MCTS. Its strength lies in long-term positional judgment — pawn structure, piece activity, latent strategic resources — learned from millions of self-play games rather than hand-coded heuristics. Because the neural net evaluations are expensive, LCZero traditionally relies on a GPU for reasonable speed; on equivalent hardware the two engines may find very different “best” moves because their internal priorities differ.
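The MCTS side can be sketched too. AlphaZero-style engines pick which child to explore next with the PUCT rule: maximize Q + U, where Q is the average result of simulations through that move and U boosts moves the network considers promising but the search has barely visited. This is an illustrative sketch only; real LCZero adds refinements such as first-play urgency, batched GPU evaluation, and virtual loss.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the index of the child maximizing Q + U.
    Each child is a dict: P = network prior, N = visit count, W = total value."""
    total_visits = sum(ch["N"] for ch in children)

    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] > 0 else 0.0          # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return q + u

    return max(range(len(children)), key=lambda i: score(children[i]))

# A barely explored move with a strong prior can outrank a well-explored one:
children = [
    {"P": 0.6, "N": 100, "W": 55.0},  # heavily explored, Q = 0.55
    {"P": 0.3, "N": 2,   "W": 1.0},   # barely explored, Q = 0.50, large U bonus
]
choice = puct_select(children)  # -> 1
```

This is why LCZero’s move choices feel prior-driven: the network’s learned intuition shapes which branches get simulations at all, whereas alpha-beta treats every legal move as a candidate until search refutes it.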
Comparison Table
| Feature / Aspect | Stockfish | LCZero (Leela Chess Zero) |
|---|---|---|
| Core Method | Alpha-beta search + NNUE evaluation | Monte Carlo Tree Search + Deep Neural Network |
| Hardware | CPU-based (multithreading efficient) | GPU-based (fastest with a strong Nvidia card; CPU backends exist but are much slower) |
| Speed | Extremely fast on almost any machine | Slower unless powerful GPU is available |
| Playing Style | Tactical, concrete, precise | Positional, strategic, “human-like” |
| Evaluation | Numeric and brute-force accurate | Pattern-based, learned intuition |
| Strength (on equal hardware) | Slightly higher in short, tactical tests | Competitive in long, complex positions |
| Endgames | Excellent with tablebases | Sometimes struggles without tablebases |
| Openings | Theoretically accurate | Can find creative deviations |
| Best For | Checking tactics, verifying concrete lines | Exploring plans, positional understanding |
| Output | Clear move choices and deep variations | Ideas, plans, and positional themes |
| Ease of Use | Plug-and-play on most devices | Requires setup (GPU, backend configuration) |
| Human Learning Value | Good for concrete training | Great for conceptual understanding |
What each engine is best at for analysis
Stockfish is the workhorse for:
- Tactical verification and sharp lines. If you need to be sure a variation is sound or you want the deepest concrete refutation of a line, Stockfish’s brute-force search excels.
- Speed and practicality. On a typical multicore CPU or in mobile apps and online analysis boards, Stockfish delivers fast, accurate assessments.
- Endgame precision. Combined with tablebases and efficient search heuristics, Stockfish often finds the shortest technical wins or most precise defenses.
LCZero shines for:
- Strategic, long-term ideas. LCZero discovers plans and maneuvers that are sometimes non-intuitive to classical engines; it’s especially strong in closed, maneuvering positions and in creating or recognizing fortresses and imbalances.
- Human-style play. Many players find LCZero’s suggestions more “understandable” as plans (sacrifices for positional compensation, slow reorientation moves) rather than purely mechanical tactics.
- Alternative perspectives. LCZero is excellent when your goal is to explore unfamiliar plans or generate creative candidate moves that Stockfish may dismiss early.
Practical considerations: hardware and setup
A crucial practical difference is hardware. Stockfish runs extremely well on CPUs and benefits from multithreading; it is practical for laptops, phones (through apps), and web analysis without special hardware. LCZero is GPU-centric: to get high evaluation throughput you normally want a modern Nvidia GPU (RTX 20/30 series or better), otherwise LCZero will be slow and won’t reach its potential. This hardware gap often dictates which engine a player uses for everyday analysis.
Complementary, not exclusive
Top engine competitions and academic comparisons show both engines remain world-class but play differently; results swing depending on time controls and hardware. Large matches and TCEC reports have shown Stockfish often prevailing in many competitions, but LCZero has scored notable victories and is sometimes superior in long positional tests. That history underlines a useful principle for analysts: using both engines gives a fuller picture than relying on one.
Real-World Results
In engine tournaments such as TCEC, Stockfish often wins the majority of matches, especially in faster or balanced time controls. However, LCZero has achieved spectacular positional victories — including games where Stockfish collapsed strategically in blocked or fortress-like positions.
These results show that LCZero’s “understanding” complements Stockfish’s brute-force power. When both engines agree on a move, you can trust it with high confidence.
How to use them together for better analysis
Here’s a practical workflow that takes advantage of both tools:
- Opening and quick checks — Stockfish. Use Stockfish to validate opening moves and check for immediate tactical refutations; it’s fast and practical for pruning bad lines.
- Strategic exploration — LCZero. When a position demands deep positional understanding (closed structures, blocked pawn storms, long kingside/queenside maneuvers), let LCZero run on a GPU to suggest plans and unconventional moves.
- Cross-verification. When LCZero suggests a speculative or sacrificial idea, switch to Stockfish to verify the concrete correctness of the combination — sometimes LCZero’s strategic choices hide tactical refutations that brute search will find.
- Endgame finishing — Stockfish + tablebases. For technical endgames use Stockfish with tablebases for the most reliable path to conversion or defense.
This complementary approach mirrors what many strong players and engine researchers do: let the neural net inspire plans, then use brute-force search to check the nuts-and-bolts.
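The cross-verification step above can be automated with the python-chess library. The engine binary names "lc0" and "stockfish" below are assumptions — point `popen_uci` at your own executables — and the library imports are kept inside the function so the small helper still loads on a machine without engines installed. A hedged sketch, not a polished tool:

```python
def engines_agree(move_a: str, move_b: str) -> bool:
    """True when two engines propose the same move (as UCI strings)."""
    return move_a == move_b

def cross_check(fen: str, seconds: float = 10.0):
    """Let LCZero propose a move, then have Stockfish analyse the same
    position; return both moves and whether they coincide."""
    import chess
    import chess.engine
    board = chess.Board(fen)
    # "lc0" and "stockfish" are assumed paths to your engine binaries.
    with chess.engine.SimpleEngine.popen_uci("lc0") as lc0, \
         chess.engine.SimpleEngine.popen_uci("stockfish") as sf:
        leela = lc0.analyse(board, chess.engine.Limit(time=seconds))
        fish = sf.analyse(board, chess.engine.Limit(time=seconds))
        move_l = leela["pv"][0].uci()
        move_s = fish["pv"][0].uci()
        return move_l, move_s, engines_agree(move_l, move_s)
```

When the two moves differ, that disagreement is itself the interesting output: it usually marks exactly the kind of position worth studying by hand.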
Limitations and gotchas
- Different eval scales. LCZero and Stockfish use different evaluation philosophies; a +0.50 from one is not perfectly equivalent to +0.50 from the other. Treat their scores as directional rather than absolute.
- Play vs. analysis modes. Some neural-net engines can be set into “play” modes that favor human-like moves over engine-optimal analysis output. Make sure you configure LCZero for analysis if you want deep, consistent evaluation rather than human-like play.
- Resource fairness. Comparing the two engines on unequal hardware (Stockfish on many CPU cores vs LCZero on a weak GPU) is misleading. For fair analysis, balance runtime and compute budget across both engines.
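One way to see why the scales differ is to translate centipawns into an approximate win probability with a logistic curve, which is closer to how LCZero reasons internally. The constant below is a fitted value of the kind used in Lichess-style accuracy models, not an official Stockfish or LCZero parameter — treat it as an assumption for illustration.

```python
import math

def cp_to_win_prob(cp: float, k: float = 0.00368) -> float:
    """Map a centipawn score to the side-to-move's approximate win
    probability in [0, 1]. The slope k is an assumed, fitted constant."""
    return 1.0 / (1.0 + math.exp(-k * cp))

even = cp_to_win_prob(0)    # exactly 0.5: a balanced position
edge = cp_to_win_prob(50)   # +0.50 pawns is only a modest probability edge
```

The practical takeaway: a half-pawn edge moves the needle only slightly in probability terms, so small numeric gaps between the two engines’ scores rarely mean they disagree about who is better, only about how to express it.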
Recommendation
If you want fast, reliable tactical and endgame checking on standard hardware, Stockfish is the practical default.
If you want fresh strategic ideas, creative plans, and different human-style perspectives — and you have access to a good GPU — LCZero is a powerful exploratory tool.
For the best analysis, use both: LCZero to propose plans and Stockfish to verify concrete variations. For serious students and coaches this hybrid workflow often produces the deepest learning and the most robust game preparation.

I’m a passionate board game enthusiast and a skilled player of chess, xiangqi, and Go. Writing for Attacking Chess since 2023. Ping me on Lichess for a game or a chat.