Tired of AI giving vague answers when it lacks access to current data? Fed up with writing the same retrieval-augmented generation (RAG) code over and over for local data? Both problems can be solved by combining RAG with the Model Context Protocol (MCP). MCP lets you connect your AI assistant to external tools and APIs, giving it a standard way to perform RAG. MCP changes how AI models interface with live data, while RAG supplies models with external knowledge they were never trained on. In this article, we will look closely at how RAG and MCP fit together and walk through a practical example.
Table of contents
- What is RAG?
- What is MCP?
- How does MCP enable RAG?
- Use Cases for RAG with MCP
- Steps for Performing RAG with MCP
- Step 1: Installing the dependencies
- Step 2: Creating server.py
- Step 3: Configuring Cursor for MCP
- Step 4: Testing the MCP Server
- Conclusion
- Frequently Asked Questions
What is RAG?
RAG is an AI framework that combines the strengths of conventional information retrieval systems (such as search engines and databases) with the natural language generation abilities of AI models. Its advantages include real-time, factual replies, fewer hallucinations, and context-aware answers. RAG works much like consulting a librarian before drafting a detailed report: the model first looks up relevant material, then generates its answer from it.
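To make that flow concrete, here is a minimal sketch of the retrieve-then-generate loop. The vector_store and llm parameters are placeholders for whichever LangChain-style retriever and chat model you use; a full working version appears in the steps below.
<code>
# Minimal retrieve-then-generate sketch; `vector_store` and `llm` are
# placeholders for any LangChain-style vector store and chat model.
def rag_answer(vector_store, llm, query: str) -> str:
    # 1. Retrieve: fetch the chunks most similar to the query
    chunks = vector_store.similarity_search(query, k=3)
    context = "\n".join(chunk.page_content for chunk in chunks)
    # 2. Generate: answer using only the retrieved context
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
    return llm.invoke(prompt).content
</code>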
Discover more about RAG in this article.
What is MCP?
MCP serves as a bridge between your AI assistant and external tools. It is an open protocol that lets LLMs access real-world tools, APIs, or datasets reliably and efficiently. Traditional APIs demand custom integration code for every AI model; MCP instead offers a generic, plug-and-play way to connect tools to LLMs.
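To get a sense of how little code a plug-and-play tool requires, here is a minimal FastMCP server exposing a single toy tool (a generic illustration, separate from the RAG server we build later):
<code>
from mcp.server.fastmcp import FastMCP

# A toy MCP server exposing one tool; any MCP client can discover and call it
mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
</code>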
Learn more about MCP in this article.
How does MCP enable RAG?
Within a RAG pipeline, MCP acts as the retrieval layer: given a query, it fetches the relevant pieces of information from your data sources. Because it standardizes how those sources are exposed, you no longer need to write custom glue code for every RAG system you build, and the AI can invoke the retrieval tool dynamically, based on its own reasoning.
Use Cases for RAG with MCP
There are numerous use cases for RAG with MCP. Some examples include:
- Searching news articles for summaries
- Querying financial APIs for market updates
- Loading private documents for context-aware answers
- Retrieving weather or location-specific information before responding
- Using PDFs or database connectors to power enterprise search
Steps for Performing RAG with MCP
Now we will implement RAG with MCP in detail. Follow these steps to build your first MCP server that performs RAG. First, let's set up the server.
Step 1: Installing the dependencies
<code>
pip install "langchain>=0.1.0" \
    "langchain-community>=0.0.5" \
    "langchain-groq>=0.0.2" \
    "mcp>=1.9.1" \
    "chromadb>=0.4.22" \
    "huggingface-hub>=0.20.3" \
    "transformers>=4.38.0" \
    "sentence-transformers>=2.2.2"
</code>
This command installs all the libraries needed for this tutorial. (The version specifiers are quoted so the shell doesn't interpret >= as a redirect.)
Step 2: Creating server.py
Next, define the RAG MCP server in a server.py file. The code below implements a simple RAG pipeline and exposes it as an MCP tool.
<code>
from mcp.server.fastmcp import FastMCP
from langchain.chains import RetrievalQA
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_groq import ChatGroq  # Groq LLM

# Create an MCP server
mcp = FastMCP("RAG")

# Set up embeddings (you can choose a different Hugging Face model if preferred)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Set up the Groq LLM
model = ChatGroq(
    model_name="llama3-8b-8192",  # or another Groq-supported model
    groq_api_key="YOUR_GROQ_API"  # required if not set via environment variable
)

# Load documents
loader = TextLoader("dummy.txt")
data = loader.load()

# Split documents into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(data)

# Build the vector store
docsearch = Chroma.from_documents(texts, embeddings)

# Retrieval QA chain
qa = RetrievalQA.from_chain_type(llm=model, retriever=docsearch.as_retriever())

@mcp.tool()
def retrieve(prompt: str) -> str:
    """Get information using RAG"""
    # RetrievalQA returns a dict; keep only the answer text
    return qa.invoke(prompt)["result"]

if __name__ == "__main__":
    mcp.run()
</code>
Here, we use the Groq API to access the LLM, so make sure you have a Groq API key. The dummy.txt file can contain any data you like; edit its contents to suit your use case.
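Hard-coding the key is fine for a quick demo, but a cleaner pattern is to set the GROQ_API_KEY environment variable, which langchain_groq picks up automatically. A small sketch of that change:
<code>
import os
from langchain_groq import ChatGroq

# Assumes GROQ_API_KEY was exported in the shell that launches the server;
# ChatGroq reads it automatically, so no key needs to appear in the source.
assert os.environ.get("GROQ_API_KEY"), "export GROQ_API_KEY before starting the server"

model = ChatGroq(model_name="llama3-8b-8192")
</code>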
Our RAG MCP server is now complete. To check that it starts without errors, run it from the terminal:
<code>python server.py</code>
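Note that FastMCP serves over stdio by default, so running the file directly will simply start the server and wait silently for a client to connect. If you installed the MCP CLI extras (pip install "mcp[cli]"), you can also exercise the server interactively in the MCP Inspector:
<code>mcp dev server.py</code>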
Step 3: Configuring Cursor for MCP
Let’s configure the Cursor IDE for testing our server.
- Download Cursor from the official website.
- Install it, sign up, and get to the home screen.
- Go to File in the header toolbar, then click Preferences, and then Cursor Settings.
- From the Cursor settings, click on MCP.
- On the MCP tab, click on Add new global MCP Server.
It will open an mcp.json file. Paste the following code into it and save the file.
Replace /path/to/python with the path to your Python executable and /path/to/server.py with your server.py path.
<code>
{
  "mcpServers": {
    "rag-server": {
      "command": "/path/to/python",
      "args": [
        "/path/to/server.py"
      ]
    }
  }
}
</code>
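If you are unsure which Python path to use for "command", print the interpreter path from the environment where you installed the dependencies and paste it in:
<code>python -c "import sys; print(sys.executable)"</code>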
- Go back to Cursor Settings; the rag-server entry should now appear in the MCP server list.
If the server appears with its retrieve tool listed and no errors, it is running successfully and connected to the Cursor IDE. If it shows an error, try the restart button in the top right corner.
We have successfully set up the MCP server in the Cursor IDE. Now, let’s test the server.
Step 4: Testing the MCP Server
Our RAG MCP server can now retrieve the most relevant chunks for any query. Let's test it with a few questions, telling Cursor to use the rag-server.
Query: “What is Zephyria, Answer using rag-server”
Query: “What was the conflict in the planet?”
Query: “What is the capital of Zephyria?”
For each query, Cursor invokes the rag-server's retrieve tool and answers from the contents of dummy.txt.
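If you would rather verify the server outside of an IDE, a small script can launch it and call the tool directly over stdio. Below is a minimal sketch using the mcp SDK's client API; the tool name retrieve, the server.py path, and the "What is Zephyria?" prompt match the server and data used above.
<code>
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch server.py as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the retrieve tool we registered on the RAG server
            result = await session.call_tool("retrieve", {"prompt": "What is Zephyria?"})
            print(result.content)

asyncio.run(main())
</code>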
Conclusion
RAG, when powered by MCP, changes how you work with your AI assistant: it turns the model from a plain text generator into a live assistant that can retrieve and reason over your own data. Combining the two saves you from rewriting retrieval glue code for every project and scales as your tools grow. With the steps above, anyone can build AI applications connected to real-world data using RAG with MCP. Now it's your turn to empower your LLM by setting up your own MCP tools.
Frequently Asked Questions
Q1. What is the difference between RAG and traditional LLM responses?
A. Traditional LLMs generate responses based solely on their pre-trained knowledge, which may be outdated or incomplete. RAG enhances this by retrieving real-time or external data (documents, APIs) before answering, ensuring more accurate and up-to-date responses.
Q2. Why should I use MCP for RAG instead of writing custom code?
A. MCP eliminates the need to hardcode every API or database integration manually. It provides a plug-and-play mechanism to expose tools that AI models can dynamically use based on context, making RAG implementation faster, scalable, and more maintainable.
Q3. Do I need to be an expert in AI or LangChain to use RAG with MCP?
A. Not at all. With basic Python knowledge and following the step-by-step setup, you can create your own RAG-powered MCP server. Tools like LangChain and Cursor IDE make the integration straightforward.