One of the key promises of retrieval-augmented generation (RAG) is that it allows AI systems to answer questions using up-to-date or domain-specific information without retraining the model. But most RAG pipelines still treat documents and knowledge as flat and disconnected, retrieving isolated chunks based on vector similarity with no sense of how those chunks relate.
To remedy RAG's ignorance of the often obvious connections between documents and chunks, developers have turned to graph RAG approaches, but they have frequently found that the benefits of graph RAG were not worth the added complexity of implementing it.
In our recent article on the open-source Graph RAG Project and GraphRetriever, we introduced a new, simpler approach that combines your existing vector search with lightweight, metadata-based graph traversal, and which doesn't require graph construction or storage. Graph connections can be defined at runtime, or even at query time, by specifying which document metadata values you want to use to define graph "edges," and those connections are traversed during graph RAG retrieval.
In this article, we expand on one of the use cases in the Graph RAG Project documentation (a demo notebook can be found here), a simple but illustrative example: searching movie reviews from a Rotten Tomatoes dataset, automatically connecting each review with its local subgraph of related information, and then putting together query responses with full context and relationships among movies, reviews, reviewers, and other data and metadata attributes.
The dataset: Rotten Tomatoes reviews and movie metadata
The dataset used in this case study comes from a public Kaggle dataset titled "Massive Rotten Tomatoes Movies and Reviews". It consists of two main CSV files:
- rotten_tomatoes_movies.csv: structured information on over 200,000 movies, including fields like title, cast, directors, genres, language, release date, runtime, and box office earnings.
- rotten_tomatoes_movie_reviews.csv: a collection of nearly 2 million user-submitted movie reviews, with fields such as review text, rating (e.g., 3/5), sentiment classification, review date, and a reference to the associated movie.
Each review is linked to a movie via a shared movie_id, creating a natural relationship between unstructured review content and structured movie metadata. This makes it an ideal candidate for demonstrating GraphRetriever's ability to traverse document relationships using metadata alone, with no need to manually build or store a separate graph.
By treating metadata fields such as movie_id, genre, or even shared actors and directors as graph edges, we can build a connected retrieval flow that automatically enriches each query with related context.
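To make the shared movie_id link concrete, here is a minimal plain-Python sketch (hypothetical field values, no libraries) of how a review record connects to its movie record through metadata alone:

```python
# A movie record keyed by movie_id, and a review that references it.
movies = {
    "addams_family": {
        "title": "The Addams Family",
        "genre": "Comedy",
        "director": "Barry Sonnenfeld",
    },
}
reviews = [
    {"reviewed_movie_id": "addams_family", "review_text": "A witty family comedy..."},
]

def enrich(review, movies):
    """Attach the movie record that the review's metadata points to."""
    return {**review, "movie": movies.get(review["reviewed_movie_id"])}

enriched = enrich(reviews[0], movies)
print(enriched["movie"]["title"])  # The Addams Family
```

This is exactly the kind of join that GraphRetriever performs at retrieval time, without requiring a separate graph database.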
The challenge: putting movie reviews in context
A common goal in AI-powered search and recommendation systems is to let users ask natural, open-ended questions and get meaningful, contextual results. With a large dataset of movie reviews and metadata, we want to support full-context responses to prompts like:
- "What are some good family movies?"
- "What are some recommendations for exciting action movies?"
- "What are some classic movies with amazing cinematography?"
An ideal answer to each of these prompts requires subjective review content along with some semi-structured attributes like genre, audience, or visual style. To give a good answer with full context, the system needs to:
- Retrieve the most relevant reviews based on the user's query, using vector-based semantic similarity
- Enrich each review with full movie details (title, release year, genre, director, etc.) so the model can present a complete, grounded recommendation
- Connect this information with other reviews or movies that provide even broader context, such as: What are other reviewers saying? How do other movies in the genre compare?
A standard RAG pipeline might handle step 1 well, pulling relevant snippets of text. But without knowledge of how the retrieved chunks relate to other information in the dataset, the model's responses can lack context, depth, or accuracy.
How graph RAG addresses the challenge
Given a user's query, a plain RAG system might recommend a movie based on a small set of directly semantically relevant reviews. But graph RAG and GraphRetriever can easily pull in related context, for example other reviews of the same movies or other movies in the same genre, to compare and contrast before making recommendations.
From an implementation standpoint, graph RAG provides a clean, two-step solution:
Step 1: Build a standard RAG system
First, just as with any RAG system, we embed the document text using a language model and store the embeddings in a vector database. Each embedded review may include structured metadata, such as reviewed_movie_id, rating, and sentiment, information we'll use to define relationships later. Each embedded movie description includes metadata such as movie_id, genre, release_year, director, and so on.
This lets us handle typical vector-based retrieval: when a user enters a query like "What are some good family movies?", we can quickly fetch reviews from the dataset that are semantically related to family movies. Connecting these with broader context happens in the next step.
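As a rough illustration of what the vector store does in this step, here is a toy top-k similarity search over hand-made two-dimensional "embeddings" (real embeddings have hundreds of dimensions and come from an embedding model; the names and values here are purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, embedding). Return the k most similar doc ids."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = [
    ("review_1", [0.9, 0.1]),  # close to the query direction
    ("review_2", [0.1, 0.9]),  # far from the query direction
    ("review_3", [0.8, 0.3]),
]
print(top_k([1.0, 0.0], docs, k=2))  # ['review_1', 'review_3']
```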
Step 2: Add graph traversal with GraphRetriever
Once the semantically relevant reviews are retrieved in step 1 using vector search, we can then use GraphRetriever to traverse connections between reviews and their related movie information.
Specifically, the GraphRetriever:
- Fetches relevant reviews via semantic search (RAG)
- Follows metadata-based edges (like reviewed_movie_id) to retrieve additional information that is directly related to each review, such as movie descriptions and attributes, data about the reviewer, etc.
- Merges the content into a single context window for the language model to use when generating an answer
A key point: no pre-built knowledge graph is required. The graph is defined entirely through metadata and traversed dynamically at query time. If you want to expand the connections to include shared actors, genres, or time periods, you simply update the edge definitions in the retriever config, with no need to reprocess or reshape the data.
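This runtime-defined traversal can be sketched in plain Python. The toy function below is not the GraphRetriever implementation, just an illustration of how a list of (source_field, target_field) edge pairs can drive the expansion over documents stored as dicts:

```python
def traverse(seeds, corpus, edges, max_depth=1):
    """Expand seed documents by following metadata-defined edges: any document
    whose target_field equals a seed's source_field value gets pulled in."""
    selected, frontier = list(seeds), list(seeds)
    for _ in range(max_depth):
        next_frontier = []
        for doc in frontier:
            for source_field, target_field in edges:
                value = doc["metadata"].get(source_field)
                if value is None:
                    continue
                for other in corpus:
                    if other["metadata"].get(target_field) == value and other not in selected:
                        selected.append(other)
                        next_frontier.append(other)
        frontier = next_frontier
    return selected

review = {"metadata": {"doc_type": "review", "reviewed_movie_id": "addams_family"}}
movie = {"metadata": {"doc_type": "movie_info", "movie_id": "addams_family", "genre": "Comedy"}}

# One edge pair connects reviews to movies; adding another pair here
# (e.g. shared genre) expands traversal with no data reshaping.
results = traverse([review], [review, movie], edges=[("reviewed_movie_id", "movie_id")])
print(len(results))  # 2
```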
So, when a user asks about exciting action movies with some specific qualities, the system can bring in data points like the movie's release year, genre, and cast, improving both relevance and readability. When someone asks about classic movies with amazing cinematography, the system can draw on reviews of older films and pair them with metadata like genre or era, giving responses that are both subjective and grounded in facts.
In short, GraphRetriever bridges the gap between unstructured reviews (subjective text) and structured context (connected metadata), producing query responses that are more intelligent, trustworthy, and complete.
GraphRetriever in action
To show how GraphRetriever can connect unstructured review content with structured movie metadata, we walk through a basic setup using a sample of the Rotten Tomatoes dataset. This involves three main steps: creating a vector store, converting raw data into LangChain documents, and configuring the graph traversal strategy.
See the example notebook in the Graph RAG Project for full, working code.
Create the vector store and embeddings
We begin by embedding and storing the documents, just as we would in any RAG system. Here, we're using OpenAIEmbeddings and the Astra DB vector store:
from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings
COLLECTION = "movie_reviews_rotten_tomatoes"
vectorstore = AstraDBVectorStore(
embedding=OpenAIEmbeddings(),
collection_name=COLLECTION,
)
The structure of data and metadata
We store and embed document content as we usually would for any RAG system, but we also preserve structured metadata for use in graph traversal. The document content is kept minimal (review text, movie title, description), while the rich structured data is kept in the "metadata" fields of the stored document object.
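A sketch of this split, using plain dicts and an illustrative one-row CSV snippet (the real dataset has many more columns, and the notebook uses LangChain Document objects rather than dicts):

```python
import csv
import io

# Illustrative CSV fragment standing in for rotten_tomatoes_movies.csv.
raw = io.StringIO(
    "movie_id,title,genre,director,description\n"
    "addams_family,The Addams Family,Comedy,Barry Sonnenfeld,Macabre family comedy\n"
)

documents = []
for row in csv.DictReader(raw):
    documents.append({
        # Keep the embedded text minimal: title plus description.
        "page_content": f"{row['title']}: {row['description']}",
        # Everything else rides along as structured metadata for traversal.
        "metadata": {"doc_type": "movie_info",
                     **{k: v for k, v in row.items() if k != "description"}},
    })

print(documents[0]["metadata"]["movie_id"])  # addams_family
```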
Here is example JSON from one movie document in the vector store:
> pprint(documents[0].metadata)
{'audienceScore': '66',
'boxOffice': '$111.3M',
'director': 'Barry Sonnenfeld',
'distributor': 'Paramount Pictures',
'doc_type': 'movie_info',
'genre': 'Comedy',
'movie_id': 'addams_family',
'originalLanguage': 'English',
'rating': '',
'ratingContents': '',
'releaseDateStreaming': '2005-08-18',
'releaseDateTheaters': '1991-11-22',
'runtimeMinutes': '99',
'soundMix': 'Surround, Dolby SR',
'title': 'The Addams Family',
'tomatoMeter': '67.0',
'writer': 'Charles Addams,Caroline Thompson,Larry Wilson'}
Note that graph traversal with GraphRetriever uses only the attributes in this metadata field, doesn't require a specialized graph DB, and doesn't make any LLM calls or other expensive requests.
Configure and run GraphRetriever
The GraphRetriever traverses a simple graph defined by metadata connections. In this case, we define an edge from each review to its corresponding movie, using the directional relationship between reviewed_movie_id (in reviews) and movie_id (in movie descriptions).
We use an "eager" traversal strategy, which is one of the simplest traversal strategies. See the documentation for the Graph RAG Project for more details about strategies.
from graph_retriever.strategies import Eager
from langchain_graph_retriever import GraphRetriever

retriever = GraphRetriever(
    store=vectorstore,
    edges=[("reviewed_movie_id", "movie_id")],
    strategy=Eager(start_k=10, adjacent_k=10, select_k=100, max_depth=1),
)
In this configuration:
- start_k=10: retrieves 10 review documents using semantic search
- adjacent_k=10: allows up to 10 adjacent documents to be pulled in at each step of graph traversal
- select_k=100: up to 100 total documents can be returned
- max_depth=1: the graph is only traversed one level deep, from review to movie
Note that because each review links to exactly one reviewed movie, the traversal in this simple example would have stopped at depth 1 regardless of this parameter. See more examples in the Graph RAG Project for more sophisticated traversal.
Invoking a query
You can now run a natural language query, such as:
INITIAL_PROMPT_TEXT = "What are some good family movies?"

query_results = retriever.invoke(INITIAL_PROMPT_TEXT)
And with a little sorting and reformatting of the text (see the notebook for details), we can print a basic list of the retrieved movies and reviews, for example:
Movie Title: The Addams Family
Movie ID: addams_family
Review: A witty family comedy that has enough sly humour to keep adults chuckling throughout.
Movie Title: The Addams Family
Movie ID: the_addams_family_2019
Review: ...The film's simplistic and episodic plot put a significant dampener on what could have been a welcome breath of fresh air for family animation.
Movie Title: The Addams Family 2
Movie ID: the_addams_family_2
Review: This serviceable animated sequel focuses on Wednesday's feelings of alienation and benefits from the family's kid-friendly jokes and road trip adventures.
Review: The Addams Family 2 repeats what the first movie accomplished by taking the popular family and turning them into one of the most boringly generic kids movies in recent years.
Movie Title: Addams Family Values
Movie ID: addams_family_values
Review: The title is apt. Using these morbidly sensual cartoon characters as pawns, the new movie Addams Family Values launches a witty assault on those with fixed ideas about what constitutes a loving family.
Review: Addams Family Values has its moments -- rather a lot of them, in fact. You knew that just from the title, which is a nice way of turning Charles Addams' family of ghouls, monsters and vampires loose on Dan Quayle.
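The sorting and reformatting step can be done with a small helper along these lines (a plain-Python stand-in for the notebook's code, with assumed doc_type and metadata field names):

```python
from collections import defaultdict

def format_results(docs):
    """Group retrieved docs: movie_info docs supply titles; review docs
    attach to their movie via reviewed_movie_id."""
    movies, reviews = {}, defaultdict(list)
    for d in docs:
        meta = d["metadata"]
        if meta.get("doc_type") == "movie_info":
            movies[meta["movie_id"]] = meta["title"]
        else:
            reviews[meta["reviewed_movie_id"]].append(d["page_content"])
    lines = []
    for movie_id, title in movies.items():
        lines.append(f"Movie Title: {title}")
        lines.append(f"Movie ID: {movie_id}")
        for r in reviews[movie_id]:
            lines.append(f"Review: {r}")
    return "\n".join(lines)

docs = [
    {"metadata": {"doc_type": "movie_info", "movie_id": "addams_family",
                  "title": "The Addams Family"}},
    {"metadata": {"doc_type": "review", "reviewed_movie_id": "addams_family"},
     "page_content": "A witty family comedy."},
]
print(format_results(docs))
```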
We can then pass the above output to the LLM for generation of a final response, using the full set of information from the reviews as well as the connected movies.
Setting up the final prompt and LLM call looks like this:
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pprint import pprint

MODEL = ChatOpenAI(model="gpt-4o", temperature=0)

VECTOR_ANSWER_PROMPT = PromptTemplate.from_template("""
A list of Movie Reviews appears below. Please answer the Initial Prompt text
(below) using only the listed Movie Reviews.

Please include all movies that might be helpful to someone looking for movie
recommendations.

Initial Prompt:
{initial_prompt}

Movie Reviews:
{movie_reviews}
""")

formatted_prompt = VECTOR_ANSWER_PROMPT.format(
    initial_prompt=INITIAL_PROMPT_TEXT,
    movie_reviews=formatted_text,
)

result = MODEL.invoke(formatted_prompt)

print(result.content)
And the final response from the graph RAG system might look like this:
Based on the reviews provided, "The Addams Family" and "Addams Family Values" are recommended as good family movies. "The Addams Family" is described as a witty family comedy with enough humor to entertain adults, while "Addams Family Values" is noted for its clever take on family dynamics and its entertaining moments.
Keep in mind that this final response was the result of the initial semantic search for reviews mentioning family movies, plus expanded context from documents that are directly related to those reviews. By expanding the window of relevant context beyond simple semantic search, the LLM and the overall graph RAG system can put together more complete and more helpful responses.
Try it yourself
The case study in this article shows how to:
- Combine unstructured and structured data in your RAG pipeline
- Use metadata as a dynamic knowledge graph without building or storing one
- Improve the depth and relevance of AI-generated responses by surfacing connected context
In short, this is graph RAG in action: adding structure and relationships to make LLMs not just retrieve, but build context and reason more effectively. If you're already storing rich metadata alongside your documents, GraphRetriever gives you a practical way to put that metadata to work, with no extra infrastructure.
We hope this inspires you to try GraphRetriever on your own data (it's all open-source), especially if you're already working with documents that are implicitly connected through shared attributes, links, or references.
You can explore the full notebook and implementation details here: Graph RAG on Movie Reviews from Rotten Tomatoes.

