Part 7 of our RAG crash course is now available.
Read here: A Crash Course on Building RAG Systems – Part 7 (With Implementation).
What's inside Part 7?
So far, we have only covered traditional RAG systems that rely on vector search.
This time, we are building Graph RAG systems, where we use a graph database to store information in the form of entities and relationships, and then build RAG apps on top of it.
The motivation is simple.
LLMs love structured data.
LLMs are inherently adept at reasoning over structured data since it makes the relationships between entities explicit.
For instance, consider the difference between these two data inputs:
Unstructured sentence:
"LinkedIn is a social media network for professionals owned by Microsoft."
Structured entity-relation-entity triplets:
(LinkedIn, is, social media network)
(LinkedIn, isFor, professionals)
(LinkedIn, ownedBy, Microsoft)
The second case significantly reduces the reasoning burden on the LLM because the relationships are explicit, which makes it much easier to produce coherent responses.
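To make this concrete, here is a minimal sketch of how such triplets can be stored and queried in a graph. It uses networkx purely as a lightweight stand-in for a real graph database; the library, schema, and function names here are illustrative assumptions, not the exact stack used in the article:

```python
import networkx as nx

# Entity-relation-entity triplets extracted from the sentence above.
triplets = [
    ("LinkedIn", "is", "social media network"),
    ("LinkedIn", "isFor", "professionals"),
    ("LinkedIn", "ownedBy", "Microsoft"),
]

# Store each triplet as a directed edge labeled with its relation.
graph = nx.DiGraph()
for head, relation, tail in triplets:
    graph.add_edge(head, tail, relation=relation)

def facts_about(entity: str) -> list[str]:
    """Retrieve all outgoing facts for an entity as explicit triplets
    that can be passed to the LLM as context."""
    return [
        f"({entity}, {data['relation']}, {neighbor})"
        for neighbor, data in graph[entity].items()
    ]

print(facts_about("LinkedIn"))
# ['(LinkedIn, is, social media network)',
#  '(LinkedIn, isFor, professionals)',
#  '(LinkedIn, ownedBy, Microsoft)']
```

At query time, a Graph RAG system retrieves explicit facts like these around the entities mentioned in the question and feeds them to the LLM, instead of (or alongside) semantically similar text chunks from a vector store.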
On a side note, even search engines now actively use Graph RAG systems due to their high utility.
As always, we'll dive into the motivation behind Graph RAG and then cover a completely beginner-friendly implementation of a Graph RAG system.
Read here: A Crash Course on Building RAG Systems – Part 7 (With Implementation).
What's in the crash course?
So far in this crash course series on building RAG systems, we’ve logically built on the foundations laid in the previous parts:
In Part 1, we explored the foundational components of RAG systems, the typical RAG workflow, and the tool stack, and also walked through an implementation.
In Part 2, we understood how to evaluate RAG systems (with implementation).
In Part 3, we learned techniques to optimize RAG systems and handle millions/billions of vectors (with implementation).
In Part 4, we understood multimodality and covered techniques to build RAG systems on complex docs, i.e., ones that contain images, tables, and text (with implementation).
In Part 5, we understood the fundamental building blocks of multimodal RAG systems that will help us improve what we built in Part 4.
In Part 6, we utilized the learnings from Part 5 to build a much more extensive multimodal RAG system.
So, even if you are a complete beginner at RAG, this series has you covered.
Why care about RAG?
RAG is a key NLP technique that gained massive attention because it addresses one of the core challenges of using LLMs: supplying them with external, up-to-date knowledge without retraining.
More specifically, if you know how to build a reliable RAG system, you can bypass the challenge and cost of fine-tuning LLMs.
That’s a considerable cost saving for enterprises.
And at the end of the day, all businesses care about impact. That’s it!
Can you reduce costs?
Drive revenue?
Can you scale ML models?
Predict trends before they happen?
Thus, the objective of this crash course is to help you implement reliable RAG systems, understand the underlying challenges, and develop expertise in building RAG apps on top of LLMs, a skill that every industry cares about today.
Read the first part here →
Read the second part here →
Read the third part here →
Read the fourth part here →
Read the fifth part here [OPEN ACCESS] →
Read the sixth part here →
Read the seventh part here →
Of course, if you have never worked with LLMs, that’s okay. We cover everything in a practical and beginner-friendly way.
Thanks for reading!