Browserbase is hiring!
The #1 early-stage company on the Enterprise Tech 30 is hiring across Engineering & GTM.
Browserbase powers web browsing capabilities for AI agents and applications.
GTM: Sales Engineer, Account Executive, Customer Engineer, Brand Designer, Demand Generation Lead, Technical Product Marketing Lead, Developer Advocate Lead.
Engineering: Distributed System Software Engineer, Dashboard Software Engineer, DevOps Engineer, Infrastructure Engineer.
Thanks to Browserbase for partnering today!
10 MCP, AI Agents, and RAG projects for AI Engineers
So far, we have done several demos in this newsletter.
Here’s a quick recap of some of them along with detailed walkthroughs and GitHub repos.
1) MCP-powered Agentic RAG
In this project, you'll learn how to create an MCP-powered Agentic RAG that searches a vector database and falls back to web search if needed.
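Here's the gist of the fallback logic in a minimal sketch; `vector_search` and `web_search` are hypothetical stand-ins for the MCP tools the agent actually calls:

```python
RELEVANCE_THRESHOLD = 0.75  # assumed cutoff for "relevant enough"

def vector_search(query: str) -> list[tuple[str, float]]:
    """Hypothetical stand-in for the MCP tool that queries the vector DB."""
    return []  # pretend nothing relevant was indexed

def web_search(query: str) -> list[str]:
    """Hypothetical stand-in for the MCP web-search tool."""
    return [f"web result for: {query}"]

def retrieve(query: str) -> list[str]:
    hits = vector_search(query)
    good = [chunk for chunk, score in hits if score >= RELEVANCE_THRESHOLD]
    # Fall back to web search when the vector DB has nothing relevant enough.
    return good if good else web_search(query)

print(retrieve("What is MCP?"))  # -> ['web result for: What is MCP?']
```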
2) A multi-agent book writer
In this project, you'll build an Agentic workflow that can write a 20,000-word book from just a 3-5 word title.
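The core pattern is a planner agent that expands the title into an outline, plus writer agents that draft each chapter against that shared outline. A minimal sketch; `call_llm` is a hypothetical stand-in for whatever LLM client the walkthrough uses:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    return "..."

def write_book(title: str, n_chapters: int = 10) -> str:
    # Planner agent: expand the short title into a chapter outline.
    outline = call_llm(
        f"Write a {n_chapters}-chapter outline for a book titled '{title}'. "
        "Return one chapter title per line."
    ).splitlines()

    # Writer agents: draft each chapter with the full outline as context.
    chapters = [
        call_llm(f"Book: '{title}'. Full outline: {outline}. "
                 f"Write chapter {i + 1}: '{ch}' (~2,000 words).")
        for i, ch in enumerate(outline)
    ]
    return "\n\n".join(chapters)
```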
3) RAG over audio
In this project, learn how to build a RAG system capable of ingesting & understanding audio content—think podcasts, lectures & more!
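One way to wire the ingestion step (not necessarily the exact stack in the walkthrough): transcribe the audio with openai-whisper, then embed transcript chunks for your vector DB:

```python
import whisper  # pip install openai-whisper
from sentence_transformers import SentenceTransformer

# 1) Speech -> text.
transcript = whisper.load_model("base").transcribe("podcast.mp3")["text"]

# 2) Naive fixed-size chunking; production pipelines usually chunk smarter.
chunks = [transcript[i:i + 500] for i in range(0, len(transcript), 500)]

# 3) Embed the chunks; index these vectors in your vector DB of choice.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(chunks)
```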
4) Build a local MCP server
MCPs are here to stay. In this project, you will understand MCP with a simple analogy, build a local MCP server, and interact with it via Cursor IDE.
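For reference, a minimal local MCP server with the official Python SDK's FastMCP helper looks roughly like this (server name and tool are placeholders); add it to Cursor's MCP settings as a stdio server and the tool becomes callable from chat:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which Cursor can launch
```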
5) RAG powered by Llama 4
Meta recently released Llama 4, a family of multilingual and multimodal open-source LLMs. Learn how to build a RAG app that's powered by Llama 4.
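The generation step of such an app, sketched with the `ollama` Python client (the `llama4:scout` model tag is an assumption; use whichever Llama 4 variant you've pulled locally):

```python
import ollama

def answer(question: str, retrieved_chunks: list[str]) -> str:
    # Stuff the retrieved context into the prompt and let Llama 4 answer.
    context = "\n\n".join(retrieved_chunks)
    response = ollama.chat(
        model="llama4:scout",  # assumed tag; use the variant you pulled
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\n"
                       f"Question: {question}",
        }],
    )
    return response["message"]["content"]
```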
6) Multimodal RAG powered by DeepSeek Janus
In this project, build a local multimodal RAG over complex, visually rich documents (a minimal setup sketch follows the list) using:
ColPali to understand and embed docs.
Qdrant as the vector DB.
DeepSeek Janus as the multimodal LLM.
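Because ColPali emits one embedding per image patch, the Qdrant collection needs multivector (MaxSim) scoring. A minimal setup sketch, with assumed names and sizes:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # swap for a real Qdrant instance
client.create_collection(
    collection_name="docs",
    vectors_config=models.VectorParams(
        size=128,  # ColPali's per-patch embedding size
        distance=models.Distance.COSINE,
        multivector_config=models.MultiVectorConfig(
            comparator=models.MultiVectorComparator.MAX_SIM,
        ),
    ),
)
```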
7) A mini-ChatGPT using DeepSeek-R1
In this project, build a local mini-ChatGPT using DeepSeek-R1, Ollama, and Chainlit. You can chat with it just like you chat with ChatGPT.
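The whole app fits in a few lines of Chainlit. This sketch assumes you've pulled the model with `ollama pull deepseek-r1`; save it as `app.py` and launch with `chainlit run app.py`:

```python
import chainlit as cl
import ollama

@cl.on_message
async def main(message: cl.Message):
    # Relay the user's message to the local model and send back the reply.
    response = ollama.chat(
        model="deepseek-r1",  # assumes `ollama pull deepseek-r1` was run
        messages=[{"role": "user", "content": message.content}],
    )
    await cl.Message(content=response["message"]["content"]).send()
```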
8) Corrective RAG
Corrective RAG is a common technique for improving RAG systems. It introduces a self-assessment step that grades the retrieved documents, which helps keep generated responses relevant.
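In code, the self-assessment step is just a grading loop before generation. A minimal sketch, with hypothetical `call_llm` and `web_search` helpers standing in for the real grader and corrective action:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call used as the relevance grader."""
    return "yes"

def web_search(query: str) -> list[str]:
    """Hypothetical corrective retrieval step."""
    return [f"web result for: {query}"]

def corrective_retrieve(query: str, docs: list[str]) -> list[str]:
    # Grade each retrieved document before it reaches the generator.
    graded = [
        d for d in docs
        if call_llm(f"Is this document relevant to '{query}'? "
                    f"Answer yes or no.\n\n{d}").strip().lower().startswith("yes")
    ]
    # If nothing survives the self-assessment, correct with fresh evidence.
    return graded if graded else web_search(query)
```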
9) Build your own reasoning model
In this project, learn how to train your own DeepSeek-R1-style reasoning model, using Unsloth for efficient fine-tuning and Llama 3.1-8B as the base LLM.
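A heavily condensed sketch of the training setup with Unsloth and TRL's GRPOTrainer; the dataset and reward function here are toy assumptions, and the real recipe lives in the walkthrough:

```python
from unsloth import FastLanguageModel  # import unsloth before trl/transformers
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Load the base model in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)

def reward_correct(completions, **kwargs):
    # Toy reward: 1.0 when the completion contains the expected answer.
    answers = kwargs["answer"]
    return [1.0 if a in c else 0.0 for c, a in zip(completions, answers)]

# Toy dataset; GRPOTrainer expects a "prompt" column and passes the
# remaining columns ("answer" here) to the reward function as kwargs.
dataset = Dataset.from_list(
    [{"prompt": "What is 2 + 2? Think step by step.", "answer": "4"}] * 8
)

trainer = GRPOTrainer(
    model=model,
    reward_funcs=[reward_correct],
    args=GRPOConfig(output_dir="grpo-out", max_steps=50),
    train_dataset=dataset,
)
trainer.train()
```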
10) Fine-tune DeepSeek-R1
In this project, you'll fine-tune a private, locally running DeepSeek-R1 (the distilled Llama variant).
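A condensed sketch of the fine-tuning setup with Unsloth and TRL (model tag, data, and hyperparameters are assumptions; the walkthrough covers the full recipe, including data prep and saving):

```python
from unsloth import FastLanguageModel  # import unsloth before trl/transformers
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Load the distilled R1 model in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/DeepSeek-R1-Distill-Llama-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Toy single-example dataset; real runs use a reasoning-style corpus.
dataset = Dataset.from_list([
    {"text": "### Question: What is 2 + 2?\n"
             "### Answer: <think>2 + 2 = 4</think> 4"},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(dataset_text_field="text", max_steps=60, output_dir="r1-ft"),
)
trainer.train()
```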
That’s a wrap.
What projects would you like to learn next? Let us know!
Thanks for reading!