

Building a Document Retrieval & Q&A System with OpenAI and Streamlit

Nov 07, 2024, 03:50 PM

Hello, Dev Community!

Today, I’m excited to walk you through my project: EzioDevIo RAG (Retrieval-Augmented Generation). This system allows users to upload PDF documents, ask questions based on their content, and receive real-time answers generated by OpenAI's GPT-3.5 Turbo model. This is particularly useful for navigating large documents or quickly extracting relevant information.


You can find the complete code on my GitHub: EzioDevIo RAG Project. Let’s dive into the project and break down each step!

Dive into the full codebase and setup instructions in the EzioDevIo RAG Project GitHub Repository!

Project Overview

What You’ll Learn

  1. How to integrate OpenAI’s language models with PDF document retrieval.
  2. How to create a user-friendly interface using Streamlit.
  3. How to containerize the application with Docker for easy deployment.

Project Features
  • Upload PDFs and get information from them.
  • Ask questions based on the content of the uploaded PDFs.
  • Real-time responses generated by OpenAI’s gpt-3.5-turbo model.
  • Easy deployment with Docker for scalability.

Here’s the final structure of our project directory:

RAG-project/
├── .env                       # Environment variables (API key)
├── app.py                     # Streamlit app for the RAG system
├── document_loader.py         # Code for loading and processing PDF documents
├── retriever.py               # Code for indexing and retrieving documents
├── main.py                    # Main script for RAG pipeline
├── requirements.txt           # List of required libraries
├── Dockerfile                 # Dockerfile for containerizing the app
├── .gitignore                 # Ignore sensitive and unnecessary files
├── data/
│   └── uploaded_pdfs/         # Folder to store uploaded PDFs
└── images/
    └── openai_api_setup.png   # Example image for OpenAI API setup instructions

Step 1: Setting Up the Project

Prerequisites

Make sure you have the following:

  • Python 3.8: To run the application locally.
  • OpenAI API Key: You’ll need this to access OpenAI’s models. Sign up at OpenAI API to get your key.
  • Docker: Optional, but recommended for containerizing the application for deployment.

Step 2: Clone the Repository and Set Up the Virtual Environment

2.1. Clone the Repository
Start by cloning the project repository from GitHub and navigating into the project directory.

git clone https://github.com/EzioDEVio/RAG-project.git
cd RAG-project

2.2. Set Up a Virtual Environment
To isolate project dependencies, create and activate a virtual environment. This helps prevent conflicts with other projects’ packages.

python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

2.3. Install Dependencies
Install the required Python libraries listed in requirements.txt. This includes OpenAI for the language model, Streamlit for the UI, PyMuPDF for PDF handling, and FAISS for efficient similarity search.

pip install -r requirements.txt
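
The authoritative dependency list is the repository’s requirements.txt; a plausible set of entries, matching the libraries named above plus the helpers the rest of this walkthrough assumes (LangChain wrappers for FAISS/embeddings and python-dotenv for the .env file), would be:

streamlit
openai
pymupdf
faiss-cpu
langchain
langchain-community
langchain-openai
python-dotenv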

2.4. Configure Your OpenAI API Key
Create a .env file in the project root directory. This file will store your OpenAI API key securely. Add the following line to the file, replacing your_openai_api_key_here with your actual API key:

OPENAI_API_KEY=your_openai_api_key_here

Tip: Make sure .env is added to your .gitignore file to avoid exposing your API key if you push your project to a public repository.

Step 3: Understanding the Project Structure
Refer to the directory tree shown earlier to help you navigate the code.

Each file has a specific role:

  • app.py: Manages the Streamlit interface, allowing users to upload files and ask questions.
  • document_loader.py: Handles loading and processing of PDFs using PyMuPDF.
  • retriever.py: Uses FAISS to index document text and retrieve relevant sections based on user queries.
  • main.py: Ties everything together, including calling OpenAI’s API to generate responses.

Step 4: Building the Core Code
Now, let’s dive into the main components of the project.

4.1. Loading Documents (document_loader.py)
The document_loader.py file is responsible for extracting text from PDFs. Here, we use the PyMuPDF library to process each page in the PDF and store the text.

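The exact implementation is in the repository; here is a minimal sketch of what the loader can look like with PyMuPDF (the function name and signature are illustrative):

import os
import fitz  # PyMuPDF

def load_documents(folder_path):
    """Read every PDF in folder_path into a list of {"filename", "text"} dicts."""
    documents = []
    for filename in os.listdir(folder_path):
        if not filename.lower().endswith(".pdf"):
            continue
        text = ""
        with fitz.open(os.path.join(folder_path, filename)) as pdf:
            for page in pdf:
                text += page.get_text()  # extract plain text page by page
        documents.append({"filename": filename, "text": text})
    return documents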

Explanation: This function reads all PDF files in a specified folder, extracts the text from each page, and adds the text to a list of dictionaries. Each dictionary represents a document with its text and filename.

4.2. Document Indexing and Retrieval (retriever.py)
FAISS (Facebook AI Similarity Search) lets us perform fast similarity searches. We use it to build an index over the document embeddings so we can retrieve the most relevant sections when users ask questions.

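Again, this is a rough sketch rather than the repository code verbatim, assuming the LangChain wrappers around FAISS and OpenAI embeddings (import paths differ slightly between LangChain versions):

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings  # older versions: from langchain.embeddings import OpenAIEmbeddings

def create_index(documents):
    """Embed each document's text and build an in-memory FAISS index."""
    texts = [doc["text"] for doc in documents]
    metadatas = [{"filename": doc["filename"]} for doc in documents]
    embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
    return FAISS.from_texts(texts, embeddings, metadatas=metadatas)

def retrieve_documents(index, query, k=3):
    """Return the text of the k document chunks most similar to the query."""
    results = index.similarity_search(query, k=k)
    return [doc.page_content for doc in results]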

Explanation:

  • create_index: Converts document text into embeddings using OpenAIEmbeddings and creates an index with FAISS.
  • retrieve_documents: Searches for relevant document sections based on the user query.

4.3. Generating Responses (main.py)
This module processes user queries, retrieves relevant documents, and generates answers using OpenAI’s language model.

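A sketch of the response-generation step, assuming the openai>=1.0 Python client and python-dotenv for loading the key; the prompt wording is illustrative:

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()      # pull OPENAI_API_KEY out of the .env file
client = OpenAI()  # the client reads the key from the environment

def generate_response(query, context_chunks):
    """Combine retrieved context with the user's question and ask gpt-3.5-turbo."""
    context = "\n\n".join(context_chunks)
    messages = [
        {"role": "system", "content": "Answer the question using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content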

Explanation:

  • generate_response: Creates a prompt with context from retrieved documents and the user’s query, then sends it to OpenAI’s API. The response is then returned as the answer.

Step 5: Creating the Streamlit Interface (app.py)
Streamlit provides an interactive front end, making it easy for users to upload files and ask questions.

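A condensed sketch of the Streamlit front end; the helper names mirror the modules described above and are illustrative rather than exact:

import os
import streamlit as st

from document_loader import load_documents
from retriever import create_index, retrieve_documents
from main import generate_response

UPLOAD_DIR = "data/uploaded_pdfs"

st.title("EzioDevIo RAG: Document Q&A")

uploaded_files = st.file_uploader("Upload PDF documents", type="pdf", accept_multiple_files=True)
question = st.text_input("Ask a question about the uploaded documents")

if uploaded_files:
    os.makedirs(UPLOAD_DIR, exist_ok=True)
    for uploaded in uploaded_files:
        # persist the upload so PyMuPDF can open it from disk
        with open(os.path.join(UPLOAD_DIR, uploaded.name), "wb") as f:
            f.write(uploaded.getbuffer())

if st.button("Get Answer") and uploaded_files and question:
    documents = load_documents(UPLOAD_DIR)          # extract text from the saved PDFs
    index = create_index(documents)                 # embed and index the documents
    context = retrieve_documents(index, question)   # fetch the most relevant sections
    st.write(generate_response(question, context))  # show the model's answer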

Explanation:

  • This code creates a simple UI with Streamlit, allowing users to upload PDFs and type questions.
  • When users click "Get Answer," the app retrieves relevant documents and generates an answer.

Step 6: Dockerizing the Application
Docker allows you to package the app into a container, making it easy to deploy.

Dockerfile

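The repository’s Dockerfile follows the multi-stage pattern described below; a representative version (with an illustrative Python base image tag) looks like this:

# Stage 1: install dependencies into an isolated prefix
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: copy only what is needed into the lean runtime image
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .

# Run as a non-root user for security
RUN useradd --create-home appuser
USER appuser

EXPOSE 8501
CMD ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]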

Explanation:

  • We use a multi-stage build to keep the final image lean.
  • The application runs as a non-root user for security.

Running the Docker Container

  1. Build the Docker Image:
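A typical build command from the project root (the rag-project image tag is an illustrative choice):

docker build -t rag-project .  # tag name is illustrative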

  2. Run the Container:
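A matching run command, passing the .env file and publishing Streamlit’s default port (the tag must match the build step):

docker run --env-file .env -p 8501:8501 rag-project  # 8501 is Streamlit's default port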

Step 7: Setting Up CI/CD with GitHub Actions
For production readiness, add a CI/CD pipeline to build, test, and scan Docker images. You can find the .github/workflows file in the repository for this setup.

Final Thoughts
This project combines OpenAI’s language model capabilities with document retrieval to create a functional and interactive tool. If you enjoyed this project, please star the GitHub repository and follow me here on Dev Community. Let’s build more amazing projects together!

View the GitHub Repository: EzioDevIo RAG Project GitHub Repository!
