Factory: An engineer in every tab!
Most AI coding tools just autocomplete, but real-world engineering is so much more.
It requires understanding your codebase, reasoning through tasks, planning solutions, writing and testing code, and shipping.
Factory gives you Droids that handle the entire workflow: they take tickets, plan solutions, write and test code, and open production-ready PRs—using your actual tools and codebase.
Here’s one of our test runs where we asked the Droids to build a stock analysis MCP server in Factory:
And it completed the task with ZERO errors, while also creating a README and usage guide and implementing error handling, all without being asked:
Solve your engineering problems with Factory here →
3 prompting techniques for reasoning in LLMs
Continuing the discussion from Factory…
A large part of what makes such tools so powerful isn't just their ability to write code, but their ability to reason through it.
And that's not unique to code. It’s the same when we prompt LLMs to solve complex reasoning tasks like math, logic, or multi-step problems.
Today, let’s look at three popular prompting techniques that help LLMs think more clearly before they answer.
These are depicted below:
#1) Chain of Thought (CoT)
The simplest and most widely used technique.
Instead of asking the LLM to jump straight to the answer, we nudge it to reason step by step.
This often improves accuracy because the model can walk through its logic before committing to a final output.
For instance:
Q: If John has 3 apples and gives away 1, how many are left?
Let's think step by step:
It’s a simple example, but this tiny nudge can unlock reasoning capabilities that standard zero-shot prompting can miss.
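In practice, CoT is just a prompt-construction step before the model call. Here’s a minimal sketch; the `build_cot_prompt` helper is a hypothetical name, and the actual LLM call (any chat-completion client) is omitted:

```python
# Sketch of a Chain-of-Thought prompt builder (hypothetical helper;
# the actual model call would be any chat-completion API).
def build_cot_prompt(question: str) -> str:
    # The trailing cue nudges the model to reason before answering.
    return f"Q: {question}\nLet's think step by step:"

prompt = build_cot_prompt(
    "If John has 3 apples and gives away 1, how many are left?"
)
print(prompt)
```

The entire technique lives in that one trailing cue; everything else about the request stays the same.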
#2) Self-Consistency (a.k.a. Majority Voting over CoT)
CoT is useful but not always consistent.
If you prompt the same question multiple times, you might get different answers depending on the temperature setting (we covered temperature in LLMs here).
Self-Consistency embraces this variation.
You ask the LLM to generate multiple reasoning paths and then select the most common final answer.
It’s a simple idea: when in doubt, ask the model several times and trust the majority.
This technique often leads to more robust results, especially on ambiguous or complex tasks.
However, it doesn’t evaluate how the reasoning was done—just whether the final answer is consistent across paths.
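The aggregation step is straightforward to sketch. Assuming you have already sampled several CoT completions at a nonzero temperature and extracted each final answer, a majority vote is all that remains (the sampling loop itself is omitted here):

```python
from collections import Counter

def self_consistency(final_answers: list[str]) -> str:
    """Pick the most common final answer across sampled reasoning paths."""
    return Counter(final_answers).most_common(1)[0][0]

# e.g. five CoT samples at temperature > 0 might end in these answers:
samples = ["2", "2", "3", "2", "2"]
print(self_consistency(samples))  # -> "2"
```

Note that this votes only on the final answers, mirroring the limitation above: two paths with flawed reasoning but the same conclusion still count as agreement.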
#3) Tree of Thoughts (ToT)
While Self-Consistency varies the final answer, Tree of Thoughts varies the steps of reasoning at each point and then picks the best path overall.
At every reasoning step, the model explores multiple possible directions. These branches form a tree, and a separate process evaluates which partial path seems the most promising at each step.
Think of it like a search algorithm over reasoning paths, where we try to find the most logical and coherent trail to the solution.
It’s more compute-intensive, but in most cases, it significantly outperforms basic CoT.
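The search idea above can be sketched as a simple beam search over reasoning paths. In this toy version, `propose_steps` and `score_path` are stand-ins for LLM calls (one to generate candidate next steps, one to judge a partial path); both names are illustrative, not from any library:

```python
def tree_of_thoughts(question, propose_steps, score_path, depth=3, beam=2):
    """Beam search over reasoning paths (a path is a list of steps)."""
    paths = [[]]
    for _ in range(depth):
        # Branch: extend every surviving path with each proposed next step.
        candidates = [p + [s] for p in paths for s in propose_steps(question, p)]
        # Prune: keep only the most promising branches.
        candidates.sort(key=score_path, reverse=True)
        paths = candidates[:beam]
    return paths[0]

# Toy stand-ins (real versions would call an LLM):
def propose_steps(question, path):
    return ["correct step", "wrong step"]

def score_path(path):
    return sum(step == "correct step" for step in path)

best = tree_of_thoughts("toy question", propose_steps, score_path)
print(best)  # the three "correct step" branches survive the pruning
```

Swapping the toy functions for model calls gives the real technique; the extra compute comes from scoring every branch at every depth.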
We’ll put together a demo on this soon, covering several use cases and best practices for inducing reasoning in LLMs through prompting.
Let us know what you would like to learn.
ReAct pattern for AI Agents also involves Reasoning. We implemented it from scratch here →
Thanks for reading!
P.S. For those wanting to develop “Industry ML” expertise:
At the end of the day, all businesses care about impact. That’s it!
Can you reduce costs?
Drive revenue?
Can you scale ML models?
Predict trends before they happen?
We have discussed several other topics (with implementations) that align with these goals.
Here are some of them:
Learn how to build Agentic systems in an ongoing crash course with 13 parts.
Learn how to build real-world RAG apps and evaluate and scale them in this crash course.
Learn sophisticated graph architectures and how to train them on graph data.
So many real-world NLP systems rely on pairwise context scoring. Learn scalable approaches here.
Learn how to run large models on small devices using Quantization techniques.
Learn how to generate prediction intervals or sets with strong statistical guarantees for increasing trust using Conformal Predictions.
Learn how to identify causal relationships and answer business questions using causal inference in this crash course.
Learn how to scale and implement ML model training in this practical guide.
Learn techniques to reliably test new models in production.
Learn how to build privacy-first ML systems using Federated Learning.
Learn 6 techniques with implementation to compress ML models.
All these resources will help you cultivate key skills that businesses and companies care about the most.