

A Complete Guide to LangChain in Python

Feb 10, 2025 am 08:29 AM

LangChain: a powerful Python library for building, experimenting with, and analyzing language models and agents


Core points:

  • LangChain is a Python library that simplifies the creation, experimentation and analysis of language models and agents, providing a wide range of functions for natural language processing.
  • It allows the creation of multifunctional agents that are able to understand and generate text and can configure specific behaviors and data sources to perform various language-related tasks.
  • LangChain provides three types of models: Large Language Model (LLM), Chat Model and Text Embedding Model, each providing unique functionality for language processing tasks.
  • It also offers features such as splitting large texts into manageable chunks, linking multiple LLM functionalities through chains to perform complex tasks, and integrating with many LLMs and AI services beyond OpenAI.

LangChain is a powerful Python library that enables developers and researchers to create, experiment with, and analyze language models and agents. It offers natural language processing (NLP) enthusiasts a rich set of features, from building custom models to manipulating text data efficiently. In this comprehensive guide, we will dig into the essential components of LangChain and demonstrate how to harness its power in Python.

Environment setup:

To follow along with this article, create a new folder and install LangChain and OpenAI using pip:

pip3 install langchain openai

Agents:

In LangChain, an agent is an entity that can understand and generate text. These agents can be configured with specific behaviors and data sources, and trained to perform various language-related tasks, making them versatile tools for a wide range of applications.

Creating a LangChain agent:

Agents can be configured to use "tools" to gather the data they need and formulate a good response. Take a look at the example below. It uses the Serp API (an internet search API) to search for information relevant to a question or input and respond to it. It also uses the llm-math tool to perform mathematical operations, for example converting units or finding the percentage change between two values:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["SERPAPI_API_KEY"] = "YOUR_SERP_API_KEY"  # Get your Serp API key here: https://serpapi.com/

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How much energy did wind turbines produce worldwide in 2022?")

As you can see, after the basic imports and the LLM initialization (llm = OpenAI(model="gpt-3.5-turbo", temperature=0)), the code loads the tools the agent needs with tools = load_tools(["serpapi", "llm-math"], llm=llm). It then uses the initialize_agent function to create the agent, hands it the specified tools, and gives it the ZERO_SHOT_REACT_DESCRIPTION agent type, which means it has no memory of previous questions.

Agent test example 1:

Let's test this agent with the following input:

"How much energy did wind turbines produce worldwide in 2022?"


In its verbose output, you can see that it works through the following logic:

  • searches for "wind turbine energy production worldwide 2022" using the Serp internet search API
  • analyzes the best results
  • extracts any relevant numbers
  • uses the llm-math tool to convert 906 GW to joules, since we asked about energy, not power

Agent test example 2:

LangChain agents aren't limited to searching the internet. We can connect almost any data source (including our own) to a LangChain agent and ask it questions about the data. Let's try creating an agent trained on a CSV dataset.

Download this Netflix movie and TV show dataset from SHIVAM BANSAL on Kaggle and move it to your directory. Now add this code to a new Python file:

from langchain.llms import OpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_csv_agent
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

agent = create_csv_agent(
    OpenAI(temperature=0),
    "netflix_titles.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("In how many movies was Christian Bale casted")

This code calls the create_csv_agent function and uses the netflix_titles.csv dataset. We can then ask the agent questions about the data.

When we ask it how many movies Christian Bale appeared in, its logic is to look for every occurrence of "Christian Bale" in the cast column.

We can also create a Pandas DataFrame agent like this:

from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
df = pd.read_csv("netflix_titles.csv")

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)

agent.run("In what year were the most comedy movies released?")

If we run it, the agent reasons over the DataFrame step by step and prints its answer.

These are just some examples. We can use almost any API or dataset with LangChain.

Models:

There are three types of models in LangChain: Large Language Model (LLM), Chat Model and Text Embedding Model. Let's explore each type of model with some examples.

Large Language Model:

LangChain provides a way to use large language models in Python to generate text output from text input. It is less complex than the chat model and is best suited to simple input/output language tasks. Here is an example using OpenAI:

from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(model="gpt-3.5-turbo", temperature=0.9)
print(llm("Come up with a rap name for Matt Nikonorov"))

As shown above, it uses the gpt-3.5-turbo model to generate output for the provided input ("Come up with a rap name for Matt Nikonorov"). In this example, I set the temperature to 0.9 to make the LLM more creative. It came up with "MC MegaMatt," which I'd rate a solid 9/10.

Chat Model:

Getting the LLM to come up with rap names is fun, but if we want more sophisticated answers and conversations, we need to step up our game with a chat model. How does a chat model technically differ from a large language model? In the words of the LangChain documentation:

Chat models are a variant of language models. While chat models use language models under the hood, they expose a slightly different interface. Rather than a "text in, text out" API, they use "chat messages" as the interface for inputs and outputs.

This is a simple Python chat model script:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.9)
messages = [
    SystemMessage(content="You are a friendly, informal assistant"),
    HumanMessage(content="Convince me that Djokovic is better than Federer"),
]
print(chat(messages).content)

As shown above, the code first sends a SystemMessage telling the chatbot to be friendly and informal, and then sends a HumanMessage asking it to convince us that Djokovic is better than Federer.

If you run this chat model, you will see the chatbot's answer printed to the console.

Embeddings:

Embeddings provide a way to turn the words and numbers in a block of text into vectors that can then be related to other words or numbers. This may sound abstract, so let's look at an example:

from langchain.embeddings import OpenAIEmbeddings
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")
embedded_query = embeddings_model.embed_query("Who created the world wide web?")
print(embedded_query[:5])

This returns a list of floating-point numbers that starts like this: [0.022762885317206383, -0.01276398915797472, 0.004815981723368168, -0.009435392916202545, 0.010824492201209068]. This is what an embedding looks like.
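To build intuition for how such vectors relate pieces of text, here is a small self-contained sketch using cosine similarity. The three-dimensional vectors below are made up purely for illustration (real OpenAI embeddings have around 1,536 dimensions):

```python
# Cosine similarity measures how closely two embedding vectors point in the
# same direction: values near 1.0 mean similar meaning, values near 0 mean unrelated.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# Tiny made-up vectors standing in for real embeddings
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.99]

print(cosine_similarity(king, queen))   # high: related concepts
print(cosine_similarity(king, banana))  # low: unrelated concepts
```

This is the same idea vector search uses under the hood when finding the stored text most relevant to a query.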

Use cases for embedding models:

If we want to train a chatbot or LLM to answer questions related to our own data or a specific text sample, we need to use embeddings. Let's create a simple CSV file (embs.csv) with a "text" column containing three pieces of information:

text
"Robert Wadlow was the tallest human ever"
"The Burj Khalifa is the tallest building in the world"
"Mount Everest is the tallest mountain on Earth"

Now, here is a script that uses embeddings and cosine similarity to take the question "Who was the tallest human ever?" and find the most relevant answer in the CSV file:

from langchain.embeddings import OpenAIEmbeddings
import numpy as np
import pandas as pd
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

embeddings_model = OpenAIEmbeddings(model="text-embedding-ada-002")

# Embed every row of the "text" column
df = pd.read_csv("embs.csv")
embs = [embeddings_model.embed_query(text) for text in df["text"]]

# Embed the question
question_emb = embeddings_model.embed_query("Who was the tallest human ever?")

# Find the row whose embedding is most similar to the question's embedding
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(range(len(embs)), key=lambda i: cosine_similarity(question_emb, embs[i]))
print(df["text"][best])

If we run this code, we will see it output "Robert Wadlow was the tallest human ever". The code finds the right answer by getting the embedding of each piece of text and finding the one most similar to the embedding of the question "Who was the tallest human ever?". That's the power of embeddings!

Chunks:

LangChain models can't handle large texts all at once and use them to generate responses. This is where chunks and text splitting come in. Let's look at two simple ways to split our text data into chunks before feeding it into LangChain.

Splitting chunks by character:

To avoid abrupt breaks in the middle of a chunk, we can split our texts by paragraph, splitting them at every occurrence of a newline or double newline:

from langchain.text_splitter import CharacterTextSplitter

with open("Nas.txt") as f:
    nas_text = f.read()

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,  # illustrative settings
    chunk_overlap=200,
    length_function=len,
)

docs = text_splitter.create_documents([nas_text])
print(docs)

Recursively splitting chunks:

If we want to strictly split our text into chunks of a certain character length, we can use RecursiveCharacterTextSplitter:

from langchain.text_splitter import RecursiveCharacterTextSplitter

with open("Nas.txt") as f:
    nas_text = f.read()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,  # illustrative settings
    chunk_overlap=20,
    length_function=len,
)

chunks = text_splitter.split_text(nas_text)
print(chunks)

Chunk size and overlap:

Looking at the examples above, you may be wondering what exactly the chunk size and overlap parameters mean, and how they affect performance. This can be explained in two points:

  • Chunk size determines the number of characters in each chunk. The larger the chunk size, the more data each chunk holds and the longer it takes LangChain to process it and produce an output, and vice versa.
  • Chunk overlap is the content shared between neighboring chunks, so that they share some context. The higher the chunk overlap, the more redundant our chunks are; the lower the chunk overlap, the less context the chunks share. Generally, a good chunk overlap is between 10% and 20% of the chunk size, although the ideal overlap varies across text types and use cases.
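To make the two points above concrete, here is a plain-Python sketch of how chunk size and overlap interact, using a simple sliding window over a string (LangChain's splitters are smarter about boundaries, but the size/overlap arithmetic is the same idea):

```python
# Each chunk holds `chunk_size` characters and starts
# `chunk_size - chunk_overlap` characters after the previous one,
# so neighboring chunks share `chunk_overlap` characters of context.
def make_chunks(text, chunk_size, chunk_overlap):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

text = "abcdefghijklmnopqrstuvwxyz"
chunks = make_chunks(text, chunk_size=10, chunk_overlap=2)  # 20% overlap
print(chunks)  # ['abcdefghij', 'ijklmnopqr', 'qrstuvwxyz', 'yz']
```

Note how the last two characters of each chunk reappear at the start of the next; that repetition is exactly the shared context that overlap buys you, at the cost of some redundancy.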

Chains:

Chains are basically multiple LLM functionalities linked together to perform more complex tasks that couldn't otherwise be done with simple LLM input -> output. Let's look at a cool example:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["media", "topic"],
    template="What is a good title for a {media} about {topic}",
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({"media": "horror movie", "topic": "math"}))

This code feeds two variables into its prompt and formulates a creative answer (temperature=0.9). In this example, we asked it to come up with a good title for a horror movie about mathematics. The output after running this code was "The Calculating Curse", but this doesn't really show the full power of chains.

Let's look at a more practical example:

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
import os

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

with open("Nas.txt") as f:
    artist_info = f.read()

json_schema = {
    "type": "object",
    "properties": {
        "name": {"title": "Name", "description": "The artist's name", "type": "string"},
        "genre": {"title": "Genre", "description": "The artist's music genre", "type": "string"},
        "debut": {"title": "Debut", "description": "The artist's debut album", "type": "string"},
        "debut_year": {"title": "Debut year", "description": "Year of the artist's debut album", "type": "integer"},
    },
    "required": ["name", "genre", "debut", "debut_year"],
}

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a world-class algorithm for extracting information in structured formats."),
        ("human", "Use the given format to extract information from the following input: {input}"),
        ("human", "Make sure to answer in the correct format"),
    ]
)

chain = create_structured_output_chain(json_schema, llm, prompt, verbose=False)
print(chain.run(artist_info))

This code may seem confusing, so let's explain it step by step.

This code reads a short biography of Nas (the hip-hop artist), extracts the following values from the text, and formats them into a JSON object:

  • Artist's name
  • Artist's music genre
  • The artist's first album
  • The release year of the artist's first album

In the prompt, we also specified "Make sure to answer in the correct format" so that we always get the output in JSON format. Here is the output of this code:

{'name': 'Nas', 'genre': 'Hip Hop', 'debut': 'Illmatic', 'debut_year': 1994}

By providing a JSON schema to the create_structured_output_chain function, we make the chain put its output into JSON format.

Beyond OpenAI:

Although I've been using OpenAI models as examples of LangChain's different functionalities, it isn't limited to OpenAI models. We can use LangChain with many other LLMs and AI services. (Here is the full list of LangChain's integratable LLMs.)

For example, we can use Cohere with LangChain. Here is the documentation for the LangChain Cohere integration, but to give a practical example: after installing Cohere with pip3 install cohere, we can write a simple question-and-answer script using LangChain and Cohere, like this:

from langchain.llms import Cohere
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import os

os.environ["COHERE_API_KEY"] = "YOUR_COHERE_API_KEY"

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer this question: {question}",
)

llm = Cohere()
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("When was Cohere founded?"))  # example question

Running this code prints Cohere's answer to the question.

Conclusion:

In this guide, you've seen the different aspects and functionalities of LangChain. Armed with this knowledge, you can use LangChain's capabilities for your NLP work, whether you're a researcher, a developer, or a hobbyist.

You can find a repository on GitHub containing all the images and the Nas.txt file from this article.

Happy coding and experimenting with LangChain in Python!


