Large language models (LLMs) have surged in popularity, with the tool-calling feature dramatically expanding their capabilities beyond simple text generation. Now, LLMs can handle complex automation tasks such as dynamic UI creation and autonomous actions. Trained on massive datasets, these models excel at understanding and producing structured data, making them ideal for precise tool-calling applications. This has fueled their widespread adoption in AI-driven software development, where tool-calling – from basic functions to sophisticated agents – is now central. This article explores the fundamentals of LLM tool calling and demonstrates how to implement it using open-source tools to build powerful agents.
Key Learning Objectives
- Grasp the concept of LLM tools.
- Understand the fundamentals of tool calling and its applications.
- Explore tool-calling implementations in OpenAI (ChatCompletions API, Assistants API, parallel tool calling, and structured output), Anthropic models, and LangChain.
- Learn to construct effective AI agents using open-source resources.
*This article is part of the Data Science Blogathon.*
Table of Contents
- What are Tools?
- What is Tool Calling?
- How Does Tool Calling Work?
- Example Use Cases
- Tool Calling with OpenAI Models
- Utilizing the Assistant API
- Parallel Function Calling
- Structured Output
- Tool Calling with Anthropic Claude
- Tool Calling with LangChain
- Schema Definition with Pydantic
- Building Agents with Tool Calling
- Introducing Composio
- Building a GitHub Agent
- Frequently Asked Questions
What are Tools?
Tools are mechanisms that allow LLMs to interact with external systems. A tool is a function your application exposes to the LLM; the model decides when the tool should be used, but the function itself is executed outside the model. A typical tool definition includes the following (a minimal sketch follows the list):
- Name: A descriptive function/tool name.
- Description: A detailed tool explanation.
- Parameters: A JSON schema defining the function/tool parameters.
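Below is a minimal sketch of such a definition in the JSON-schema style accepted by OpenAI-compatible APIs. The `get_weather` name, its description, and the `location`/`unit` parameters are illustrative assumptions, not part of any specific product.

```python
# A hypothetical tool definition in the JSON-schema style most providers accept.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # descriptive function/tool name
        "description": "Get the current weather for a given city.",  # detailed explanation
        "parameters": {  # JSON schema defining the function's parameters
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. 'London'",
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"],
                },
            },
            "required": ["location"],
        },
    },
}
```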
What is Tool Calling?
Tool calling enables the model to generate responses that match a user-defined function schema. When the LLM decides a tool is needed, it produces a structured output conforming to that tool's argument schema. For example, given a get_weather function schema, a query about a city's weather makes the model emit the function name together with structured arguments (such as the city), which your code then uses to execute the function and retrieve the weather data. Importantly, the LLM never executes the tool itself; it only generates the structured input for external execution.
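As an illustration, a tool call for the hypothetical get_weather tool sketched above usually arrives as a function name plus a JSON string of arguments that your code must parse. The shape below follows the OpenAI-style response; exact field names vary by provider.

```python
import json

# Roughly what the model emits for "What's the weather in London?"
# (illustrative; field names follow the OpenAI-style response).
tool_call = {
    "name": "get_weather",
    "arguments": '{"location": "London", "unit": "celsius"}',
}

# The application, not the model, parses the arguments and runs the function.
args = json.loads(tool_call["arguments"])
print(args["location"])  # -> "London"
```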
How Does Tool Calling Work?
Companies like OpenAI and Anthropic have trained their models to select appropriate tools based on context. Each provider handles tool invocation and responses slightly differently, but the general flow is as follows (a minimal end-to-end sketch appears after the list):
- Define Tools and Provide a Prompt: Define tools with names, descriptions, and structured schemas, along with the user's prompt (e.g., "What's the weather in London?").
- LLM Tool Selection: The LLM assesses whether a tool is needed. If it is, the model stops generating free-form text and instead returns a JSON-formatted response containing the chosen tool and its parameter values.
- Extract, Execute, and Return: Extract parameters, run the function, and return outputs to the LLM.
- Answer Generation: The LLM uses tool outputs to formulate the final answer.
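The sketch below walks through these four steps with the OpenAI Python SDK's Chat Completions API, since that is the first implementation the article covers. It assumes `OPENAI_API_KEY` is set in the environment and uses a stubbed `get_weather` function purely for illustration; the model name is a placeholder.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


# Step 1: define the tool and provide a prompt.
def get_weather(location: str) -> str:
    # Stub: a real implementation would call a weather API here.
    return f"It is 18°C and cloudy in {location}."


tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]
messages = [{"role": "user", "content": "What's the weather in London?"}]

# Step 2: the model decides whether a tool call is needed.
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools
)
message = response.choices[0].message

if message.tool_calls:
    messages.append(message)  # keep the assistant's tool-call turn in the history
    for call in message.tool_calls:
        # Step 3: extract parameters, run the function, return the output.
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })

    # Step 4: the model uses the tool output to formulate the final answer.
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```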
Example Use Cases
- Action Enablement: Connect LLMs to applications (Gmail, GitHub, Discord) to automate actions (sending emails, creating pull requests, sending messages).
- Data Provision: Fetch data from knowledge bases (web, Wikipedia, APIs) to provide specific information to LLMs.
- Dynamic UIs: Update application UIs based on user input.
The following sections detail tool-calling approaches in OpenAI, Anthropic, and LangChain. Open-source models (like Llama 3) and inference providers (like Groq) also support tool calling.