

A Complete Guide to LangChain in JavaScript

Feb 08, 2025 am 10:24 AM

LangChainJS: A powerful framework for building AI-driven JavaScript language models and agents


Core points:

  • LangChainJS is a powerful JavaScript framework that enables developers to build and experiment with AI-driven language models and agents that are seamlessly integrated into web applications.
  • This framework allows the creation of agents that can leverage various tools and data sources to perform complex language tasks such as Internet searches and mathematical calculations, thereby improving the accuracy and relevance of responses.
  • LangChain supports a variety of models, including language models for simple text output, chat models for interactive conversations, and embedding models for converting text into numeric vectors, thereby facilitating the development of various NLP applications.
  • Text data can be managed and processed efficiently through customizable chunking methods, ensuring optimal performance and contextual relevance when processing large text.
  • In addition to using the OpenAI model, LangChain is compatible with other large language models (LLMs) and AI services, providing flexibility and extension capabilities for developers exploring the integration of different AIs in their projects.

This guide will dive into the key components of LangChain and demonstrate how to leverage its power in JavaScript. LangChainJS is a versatile JavaScript framework that enables developers and researchers to create, experiment with, and analyze language models and agents. It offers natural language processing (NLP) enthusiasts a rich set of capabilities, from building custom models to efficiently manipulating text data. As a JavaScript framework, it also allows developers to easily integrate their AI applications into web applications.

Prerequisites:

To learn this article, create a new folder and install the LangChain npm package:

npm install -S langchain

After creating the folder, create a new JavaScript module file with the .mjs suffix (for example, test1.mjs), since the top-level await used throughout this guide requires an ES module.

Agents:

In LangChain, an agent is an entity that can understand and generate text. Agents can be configured with specific behaviors and data sources, and trained to perform various language-related tasks, making them versatile tools for a wide range of applications.

Create LangChain agent:

Agents can be configured to use "tools" to gather the data they need and formulate a good response. See the example below. It uses the SerpAPI (an internet search API) to search for information relevant to a question or input and respond with it. It also uses the llm-math tool to perform mathematical operations, such as converting units or finding the percentage change between two values:

import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"
process.env["SERPAPI_API_KEY"] = "YOUR_SERPAPI_KEY"

const tools = [new Calculator(), new SerpAPI()];
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "openai-functions",
  verbose: false,
});

const result = await executor.run("By searching the internet, find how many albums Boldy James has released since 2010 and how many albums Nas has released since 2010; then determine who has released more albums and show the percentage difference.");
console.log(result);

After creating the model with modelName: "gpt-3.5-turbo" and temperature: 0, we create an executor that combines the model with the specified tools (SerpAPI and Calculator). In the input, I ask the LLM to search the internet (using SerpAPI) to find which artist has released more albums since 2010, Nas or Boldy James, and to show the percentage difference (using the Calculator).

In this example, I have to explicitly tell the LLM to "search the internet" so that it retrieves up-to-date data rather than relying on OpenAI's training data, which only runs up to 2021.

The output is as follows:

// The output will depend on the internet search results

Models:

There are three types of models in LangChain: LLMs, chat models, and text embedding models. Let's explore each type with some examples.

Language Model:

LangChain provides a way to use language models in JavaScript to generate text output based on text input. It is not as complex as the chat model and is best suited for simple input-output language tasks. Here is an example using OpenAI:

import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  model: "gpt-3.5-turbo",
  temperature: 0
});

const res = await llm.call("List all red berries");

console.log(res);

As you can see, it uses the gpt-3.5-turbo model to list all red berries. In this example, I set the temperature to 0 to make the LLM as factually accurate as possible.

Output:

// The output will be a list of red berries

Chat Model:

If you want more complex answers and conversations, you need to use the chat model. Technically, how is the chat model different from a language model? In the words of LangChain documentation:

Chat models are a variant of language models. While chat models use language models under the hood, they expose a slightly different interface. Rather than a "text in, text out" API, they use "chat messages" as the interface for inputs and outputs.

This is a simple (quite useless but interesting) JavaScript chat model script:

import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";

const chat = new ChatOpenAI({
  openAIApiKey: "YOUR_OPENAI_KEY",
  model: "gpt-3.5-turbo",
  temperature: 0
});
const prompt = PromptTemplate.fromTemplate(`You are a poetic assistant who always answers in rhymes: {question}`);
const runnable = prompt.pipe(chat);
const response = await runnable.invoke({ question: "Who is better, Djokovic, Federer, or Nadal?" });
console.log(response);

As you can see, the code first gives the chatbot a system-style instruction to be a poetic assistant who always answers in rhymes, and then asks it which tennis player is better: Djokovic, Federer, or Nadal. If you run this chatbot model, you will see something like this:

// The output will be an answer in rhymes
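The prose above talks about sending a system message and then a human message. With LangChain that message-based interface looks roughly like the sketch below; the langchain/schema import path and the SystemMessage/HumanMessage class names are assumptions based on LangChain's documented API, and the API call only fires when OPENAI_API_KEY is set:

```javascript
// Sketch of the message-based chat interface (SystemMessage / HumanMessage).
// The imports are loaded lazily so the function can be defined without langchain present.
async function chatWithMessages() {
  const { ChatOpenAI } = await import("langchain/chat_models/openai");
  const { SystemMessage, HumanMessage } = await import("langchain/schema");

  const chat = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });
  // The system message sets the assistant's persona; the human message is the question.
  return chat.call([
    new SystemMessage("You are a poetic assistant who always answers in rhymes."),
    new HumanMessage("Who is better, Djokovic, Federer, or Nadal?"),
  ]);
}

// Only call the API when a key is actually configured.
if (process.env.OPENAI_API_KEY) {
  chatWithMessages().then((res) => console.log(res.content));
}
```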

Embeddings:

Embedding models provide a way to convert the words and numbers in a text into vectors, which can then be associated with other words or numbers. This may sound abstract, so let's look at an example:

import { OpenAIEmbeddings } from "langchain/embeddings/openai";

process.env["OPENAI_API_KEY"] = "YOUR_OPENAI_KEY"

const embeddings = new OpenAIEmbeddings();
const res = await embeddings.embedQuery("Who created the World Wide Web?");
console.log(res)

This will return a long list of floating-point numbers:

// The output will be a long array of floating-point numbers

This is what an embedding looks like: all those floating-point numbers for just six words!

This embedding can then be used to associate the input text with potential answers, related texts, names, and more.

Now let's look at a use case for embedding models: taking the question "What is the heaviest animal?" and finding the correct answer in a list of possible answers. The approach is to embed the question and each candidate answer, then pick the candidate whose embedding is most similar to the question's embedding.
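That approach can be sketched as follows. The cosineSimilarity helper is plain JavaScript; the embedding calls assume langchain is installed and OPENAI_API_KEY is set, so they are guarded, and the candidate answers are illustrative:

```javascript
// Cosine similarity between two equal-length vectors (plain JavaScript).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Embed the question and every candidate, then return the closest candidate.
async function findBestAnswer() {
  const { OpenAIEmbeddings } = await import("langchain/embeddings/openai");
  const embeddings = new OpenAIEmbeddings();

  const question = "What is the heaviest animal?";
  const answers = [
    "The blue whale is the heaviest animal in the world",
    "George Orwell wrote 1984",
    "Random stuff",
  ];

  const qVec = await embeddings.embedQuery(question);
  const aVecs = await Promise.all(answers.map((a) => embeddings.embedQuery(a)));
  const scores = aVecs.map((v) => cosineSimilarity(qVec, v));
  return answers[scores.indexOf(Math.max(...scores))];
}

// Only call the API when a key is actually configured.
if (process.env.OPENAI_API_KEY) {
  findBestAnswer().then(console.log);
}
```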

Chunks:

LangChain models cannot process long texts in one go and use them to generate responses. This is where chunking and text splitting come in. Let me show you two simple ways to split your text data into chunks before feeding it to LangChain.

Splitting by character:

To avoid abrupt breaks mid-chunk, you can split your text by paragraph, splitting on each occurrence of a newline character. LangChain's CharacterTextSplitter supports this, along with chunk-size and overlap options.
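The newline-based splitting described above can be sketched in plain JavaScript; LangChain's CharacterTextSplitter with "\n" as its separator offers the same idea with extra options such as chunkSize and chunkOverlap (option names as documented by LangChain):

```javascript
// Split text into paragraph chunks on newlines, dropping empty chunks.
function splitByNewline(text) {
  return text
    .split("\n")
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
}

const text = "First paragraph.\nSecond paragraph.\n\nThird paragraph.";
const chunks = splitByNewline(text);
console.log(chunks); // ["First paragraph.", "Second paragraph.", "Third paragraph."]
```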

This is a useful way to split text. However, you can use any character as the chunking separator, not just \n.

Recursive chunking:

If you want to split text strictly into chunks of a fixed character length, you can use RecursiveCharacterTextSplitter. For example, you could split the text every 100 characters, with chunks overlapping by 15 characters.
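Here is a plain-JavaScript sketch of what fixed-size chunking with overlap does, mirroring the 100-character chunks and 15-character overlap described above (RecursiveCharacterTextSplitter additionally tries to break on natural boundaries like paragraphs and sentences before falling back to raw character counts):

```javascript
// Split text into fixed-size chunks; adjacent chunks share `chunkOverlap` characters.
function chunkWithOverlap(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap; // how far the window advances each time
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

const chunks = chunkWithOverlap("a".repeat(250), 100, 15);
console.log(chunks.length);    // 3
console.log(chunks[0].length); // 100
```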

Chunk size and overlap:

Looking at these examples, you may have started to wonder what the chunk size and overlap parameters mean, and how they affect performance. Let me explain both briefly.

  • Chunk size determines the number of characters in each chunk. The larger the chunk size, the more data each chunk holds and the longer LangChain takes to process it and produce output, and vice versa.
  • Chunk overlap is the content shared between adjacent chunks so that they share some context. The higher the overlap, the more redundant your chunks; the lower the overlap, the less context is shared between chunks. A good chunk overlap is typically about 10% to 20% of the chunk size, though the ideal value varies with the type of text and the use case.

Chains:

Chains are basically multiple LLM functions linked together to perform more complex tasks that cannot be accomplished with a simple LLM input-to-output call. For example, one step of a chain can generate a piece of text and a later step can summarize, translate, or critique it.
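A hedged sketch of a two-step chain, in the same prompt.pipe(model) style as the chat example earlier. The prompts and product name are illustrative assumptions, and the chain only runs when OPENAI_API_KEY is set; the tiny pipe() helper just illustrates the underlying idea of composing steps:

```javascript
// Generic function composition: pipe(f, g)(x) === g(f(x)).
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

async function runChain() {
  const { ChatOpenAI } = await import("langchain/chat_models/openai");
  const { PromptTemplate } = await import("langchain/prompts");

  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

  // Step 1: generate a slogan for a product.
  const sloganPrompt = PromptTemplate.fromTemplate(
    "Write a short slogan for a company that sells {product}."
  );
  const slogan = await sloganPrompt.pipe(model).invoke({ product: "running shoes" });

  // Step 2: feed the first step's output into a second prompt.
  const critiquePrompt = PromptTemplate.fromTemplate(
    "Explain in one sentence why this slogan works: {slogan}"
  );
  return critiquePrompt.pipe(model).invoke({ slogan: slogan.content });
}

// Only call the API when a key is actually configured.
if (process.env.OPENAI_API_KEY) {
  runChain().then((res) => console.log(res.content));
}
```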

Beyond OpenAI:

Although I have used OpenAI models to illustrate LangChain's different features, it is not limited to OpenAI models. You can use LangChain with numerous other LLMs and AI services. You can find the complete list of LLMs with JavaScript integrations in the LangChain documentation.

For example, you can use Cohere with LangChain. After installing the Cohere package with npm install cohere-ai, you can create a simple question-and-answer script using LangChain and Cohere.
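A minimal sketch of such a script, assuming a COHERE_API_KEY environment variable and that the langchain/llms/cohere import path and constructor options match the LangChain version used in the rest of this guide:

```javascript
// Swap the OpenAI LLM for Cohere; the rest of the LangChain workflow is unchanged.
async function askCohere(question) {
  const { Cohere } = await import("langchain/llms/cohere");
  const model = new Cohere({
    apiKey: process.env.COHERE_API_KEY, // option name assumed from LangChain's docs
    maxTokens: 50,
  });
  return model.call(question); // returns the model's text answer
}

// Only call the API when a key is actually configured.
if (process.env.COHERE_API_KEY) {
  askCohere("Who created the World Wide Web?").then(console.log);
}
```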

Conclusion:

In this guide, you have seen different aspects and functions of LangChain in JavaScript. You can easily develop AI-powered web applications in JavaScript using LangChain and experiment with LLM. Be sure to refer to the LangChainJS documentation for more details about specific features.

I wish you happy coding and experimenting with LangChain in JavaScript! If you like this article, you may also want to read articles about using LangChain with Python.

The above is the detailed content of A Complete Guide to LangChain in JavaScript. For more information, please follow other related articles on the PHP Chinese website!

