Part 12 and Part 13 of the AI Agents crash course are available.
Together, they offer a practical deep dive into incrementally improving an Agent over 10 steps.
So in a way, they combine the lessons from the first 11 parts of the Agents crash course into one full-fledged agentic system.
Why care about Agents?
Given the scale and capabilities of modern LLMs, it feels limiting to use them as “standalone generative models” for ordinary tasks like text summarization, text completion, and code completion.
Instead, their true potential is only realized when you build systems around these models, where they are allowed to:
access, retrieve, and filter data from relevant sources,
analyze and process this data to make real-time decisions and more.
RAG was a pretty successful step towards building such compound AI systems.
But since most RAG systems follow a programmatic flow (you, as the programmer, define the steps, the database to search, the context to retrieve, etc.), they don't unlock the full autonomy one might expect these compound AI systems to possess in some situations.
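To make the “programmatic flow” point concrete, here is a toy, self-contained sketch (the corpus, retrieve, and generate names are illustrative placeholders, not code from the course): every step is fixed by the programmer, so the pipeline never decides where to look, whether to retrieve at all, or whether to retry.

```python
# A minimal sketch of a "programmatic" RAG flow: every step below is
# hard-coded by the programmer. The corpus, retrieve(), and generate()
# stubs are hypothetical placeholders, not code from the course.

corpus = {
    "billing": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 business days.",
}

def retrieve(query: str) -> str:
    # Step 1 (fixed): search one predetermined source with naive keyword matching.
    for topic, doc in corpus.items():
        if topic in query.lower():
            return doc
    return ""

def generate(query: str, context: str) -> str:
    # Step 2 (fixed): stand-in for an LLM call that answers from the retrieved context.
    return f"Answer to '{query}' based on: {context or 'no context found'}"

def rag_answer(query: str) -> str:
    # The pipeline itself never decides *whether* to retrieve, *where* to look,
    # or whether to retry -- that autonomy is what agents add on top.
    context = retrieve(query)
    return generate(query, context)

print(rag_answer("How long does shipping take?"))
```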
That is why the primary focus in 2024 was, and in 2025 will continue to be, on building and shipping AI Agents.
These are autonomous systems that can reason, think, plan, figure out the relevant sources and extract information from them when needed, take actions, and even correct themselves if something goes wrong.
This full crash course covers everything you need to know about building robust Agentic systems, starting from the fundamentals.
Here’s what we have done so far in this crash course (with implementation):
In Part 1, we covered the fundamentals of Agentic systems, understanding how AI agents act autonomously to perform tasks.
In Part 2, we extended Agent capabilities by integrating custom tools, using structured outputs, and building modular Crews.
In Part 3, we focused on Flows, learning about state management, flow control, and integrating a Crew into a Flow.
In Part 4, we extended these concepts into real-world multi-agent, multi-crew Flow projects.
In Part 5 and Part 6, we moved into advanced techniques that make AI agents more robust, dynamic, and adaptable, like Guardrails, Async execution, Callbacks, Human-in-the-loop, Multimodal Agents, and more.
In Part 8 and Part 9, we primarily focused on 5 types of Memory for AI agents, which help agents “remember” and utilize past information.
In Part 10, we implemented the ReAct pattern from scratch.
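If you haven't seen ReAct before, here is a heavily simplified sketch of the loop; the scripted call_llm and the tools dict are hypothetical stand-ins, not the Part 10 implementation, which drives the same Thought → Action → Observation cycle with a real LLM.

```python
# A heavily simplified sketch of the ReAct loop. The scripted call_llm and
# the tools dict are hypothetical stand-ins, not the Part 10 implementation.

def call_llm(transcript: str) -> str:
    # Toy stand-in for the model: first asks for a calculation, then answers.
    if "Observation:" not in transcript:
        return "Thought: I need to compute this.\nAction: calculator[12 * 7]"
    return "Thought: I have the result.\nFinal Answer: 84"

tools = {"calculator": lambda expr: str(eval(expr))}

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)  # Thought + (Action or Final Answer)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse the requested tool call, run it, and feed the observation back.
        name, arg = step.split("Action:", 1)[1].strip().split("[", 1)
        transcript += f"Observation: {tools[name.strip()](arg.rstrip(']'))}\n"
    return "Stopped: step limit reached."

print(react("What is 12 * 7?"))  # -> 84
```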
Just like the RAG crash course, we are covering everything about AI agents in detail as we progress, to fully equip you to build agentic systems.
Of course, if you have never worked with LLMs, that’s okay.
We cover everything in a practical and beginner-friendly way.
Thanks for reading!