How to use legacy LangChain Agents (AgentExecutor)
By themselves, language models can’t take actions - they just output text. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.
This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out LangGraph.
Concepts
Concepts we will cover are:
- Using language models, in particular their tool calling ability
- Creating a Retriever to expose specific information to our agent
- Using a Search Tool to look up things online
- Chat History, which allows a chatbot to “remember” past interactions and take them into account when responding to followup questions
- Debugging and tracing your application using LangSmith
Setup
Jupyter Notebook
This guide (and most of the other guides in the documentation) uses Jupyter notebooks and assumes the reader is using them as well. Jupyter notebooks are perfect for learning how to work with LLM systems because things can often go wrong (unexpected output, the API being down, etc.), and working through guides in an interactive environment is a great way to better understand them.
This and other tutorials are perhaps most conveniently run in a Jupyter notebook. See here for instructions on how to install.
Installation
To install LangChain (and cheerio
for the web loader) run:
- npm: npm i langchain cheerio
- yarn: yarn add langchain cheerio
- pnpm: pnpm add langchain cheerio
For more details, see our Installation guide.
LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.
After you sign up at the link above, make sure to set your environment variables to start logging traces:
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
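If you’re running these guides in a notebook rather than a shell, you can set the same variables from code instead - a sketch, assuming a Node-style process.env is available:

// Equivalent to the shell exports above; set these before making any LangChain calls
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "...";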
Define tools
We first need to create the tools we want to use. We will use two tools: Tavily (to search online) and a retriever over a local index that we will create.
Tavily
LangChain has a built-in tool that makes it easy to use the Tavily search engine as a tool. Note that this requires an API key - they have a free tier, but if you don’t have one or don’t want to create one, you can always ignore this step.
Once you create your API key, you will need to export that as:
export TAVILY_API_KEY="..."
import "cheerio"; // This is required in notebooks to use the `CheerioWebBaseLoader`
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
const search = new TavilySearchResults({
maxResults: 2,
});
await search.invoke("what is the weather in SF");
`[{"title":"Weather in San Francisco","url":"https://www.weatherapi.com/","content":"{'location': {'n`... 1347 more characters
Retriever
We will also create a retriever over some data of our own. For a deeper explanation of each step here, see this tutorial.
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/overview"
);
const docs = await loader.load();
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const documents = await splitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(
documents,
new OpenAIEmbeddings()
);
const retriever = vectorStore.asRetriever();
(await retriever.invoke("how to upload a dataset"))[0];
Document {
pageContent: 'description="A sample dataset in LangSmith.")client.create_examples( inputs=[ {"postfix": '... 891 more characters,
metadata: {
source: "https://docs.smith.langchain.com/overview",
loc: { lines: { from: 4, to: 4 } }
}
}
Now that we have populated the index that we will be doing retrieval over, we can easily turn it into a tool (the format needed for an agent to use it properly).
import { createRetrieverTool } from "langchain/tools/retriever";
const retrieverTool = await createRetrieverTool(retriever, {
name: "langsmith_search",
description:
"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});
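Like any other tool, the retriever tool can be invoked directly. A quick sketch, assuming createRetrieverTool’s default structured input schema of { query: string }:

// The retriever tool takes a structured input rather than a bare string
await retrieverTool.invoke({ query: "how to upload a dataset" });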
Tools
Now that we have created both, we can create a list of tools that we will use downstream.
const tools = [search, retrieverTool];
Using Language Models
Next, let’s learn how to use a language model to call tools. LangChain supports many different language models that you can use interchangeably - select the one you want to use below!
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm: npm i @langchain/openai
- yarn: yarn add @langchain/openai
- pnpm: pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4" });
Install dependencies
- npm: npm i @langchain/anthropic
- yarn: yarn add @langchain/anthropic
- pnpm: pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm: npm i @langchain/community
- yarn: yarn add @langchain/community
- pnpm: pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm: npm i @langchain/mistralai
- yarn: yarn add @langchain/mistralai
- pnpm: pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm: npm i @langchain/groq
- yarn: yarn add @langchain/groq
- pnpm: pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm: npm i @langchain/google-vertexai
- yarn: yarn add @langchain/google-vertexai
- pnpm: pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
You can call the language model by passing in a list of messages. By default, the response is a content string.
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4", temperature: 0 });
import { HumanMessage } from "@langchain/core/messages";
const response = await model.invoke([new HumanMessage("hi!")]);
response.content;
"Hello! How can I assist you today?"
We can now see what it is like to enable this model to do tool calling. To enable that, we use .bindTools to give the language model knowledge of these tools:
const modelWithTools = model.bindTools(tools);
We can now call the model. Let’s first call it with a normal message and see how it responds. We can look at both the content field as well as the tool_calls field.
const response = await modelWithTools.invoke([new HumanMessage("Hi!")]);
console.log(`Content: ${response.content}`);
console.log(`Tool calls: ${response.tool_calls}`);
Content: Hello! How can I assist you today?
Tool calls:
Now, let’s try calling it with some input that would expect a tool to be called.
const response = await modelWithTools.invoke([
new HumanMessage("What's the weather in SF?"),
]);
console.log(`Content: ${response.content}`);
console.log(`Tool calls: ${JSON.stringify(response.tool_calls, null, 2)}`);
Content:
Tool calls: [
{
"name": "tavily_search_results_json",
"args": {
"input": "current weather in San Francisco"
},
"id": "call_VcSjZAZkEOx9lcHNZNXAjXkm"
}
]
We can see that there’s now no content, but there is a tool call! It wants us to call the Tavily Search tool.
This isn’t calling that tool yet - it’s just telling us to. In order to actually call it, we’ll want to create our agent.
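To make that concrete, here is a minimal sketch of the loop an agent automates for us: execute the proposed tool call ourselves, then hand the result back to the model as a ToolMessage so it can produce a final answer. (The field access below assumes the tool call shape shown in the output above.)

import { ToolMessage } from "@langchain/core/messages";

// Run the proposed tool call manually...
const toolCall = response.tool_calls![0];
const toolResult = await search.invoke(toolCall.args.input);

// ...and feed the observation back so the model can answer
const answer = await modelWithTools.invoke([
  new HumanMessage("What's the weather in SF?"),
  response, // the AIMessage containing the tool call
  new ToolMessage({ content: toolResult, tool_call_id: toolCall.id! }),
]);

An agent does exactly this, in a loop, until the model stops requesting tools.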
Create the agent
Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see this guide.
We can first choose the prompt we want to use to guide the agent:
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant"],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
console.log(prompt.promptMessages);
[
SystemMessagePromptTemplate {
lc_serializable: true,
lc_kwargs: {
prompt: PromptTemplate {
lc_serializable: true,
lc_kwargs: {
inputVariables: [],
templateFormat: "f-string",
template: "You are a helpful assistant"
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "prompt" ],
inputVariables: [],
outputParser: undefined,
partialVariables: undefined,
templateFormat: "f-string",
template: "You are a helpful assistant",
validateTemplate: true
}
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "chat" ],
inputVariables: [],
additionalOptions: {},
prompt: PromptTemplate {
lc_serializable: true,
lc_kwargs: {
inputVariables: [],
templateFormat: "f-string",
template: "You are a helpful assistant"
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "prompt" ],
inputVariables: [],
outputParser: undefined,
partialVariables: undefined,
templateFormat: "f-string",
template: "You are a helpful assistant",
validateTemplate: true
},
messageClass: undefined,
chatMessageClass: undefined
},
MessagesPlaceholder {
lc_serializable: true,
lc_kwargs: { variableName: "chat_history", optional: true },
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "chat" ],
variableName: "chat_history",
optional: true
},
HumanMessagePromptTemplate {
lc_serializable: true,
lc_kwargs: {
prompt: PromptTemplate {
lc_serializable: true,
lc_kwargs: {
inputVariables: [Array],
templateFormat: "f-string",
template: "{input}"
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "prompt" ],
inputVariables: [ "input" ],
outputParser: undefined,
partialVariables: undefined,
templateFormat: "f-string",
template: "{input}",
validateTemplate: true
}
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "chat" ],
inputVariables: [ "input" ],
additionalOptions: {},
prompt: PromptTemplate {
lc_serializable: true,
lc_kwargs: {
inputVariables: [ "input" ],
templateFormat: "f-string",
template: "{input}"
},
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "prompt" ],
inputVariables: [ "input" ],
outputParser: undefined,
partialVariables: undefined,
templateFormat: "f-string",
template: "{input}",
validateTemplate: true
},
messageClass: undefined,
chatMessageClass: undefined
},
MessagesPlaceholder {
lc_serializable: true,
lc_kwargs: { variableName: "agent_scratchpad", optional: true },
lc_runnable: true,
name: undefined,
lc_namespace: [ "langchain_core", "prompts", "chat" ],
variableName: "agent_scratchpad",
optional: true
}
]
Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our conceptual guide.
Note that we are passing in the model, not modelWithTools. That is because createToolCallingAgent will call .bindTools for us under the hood.
import { createToolCallingAgent } from "langchain/agents";
const agent = await createToolCallingAgent({ llm: model, tools, prompt });
Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools).
import { AgentExecutor } from "langchain/agents";
const agentExecutor = new AgentExecutor({
agent,
tools,
});
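If you want programmatic access to the intermediate tool calls and observations (rather than only viewing them in LangSmith), the executor can return them alongside the output - a sketch using the returnIntermediateSteps option:

const inspectableExecutor = new AgentExecutor({
  agent,
  tools,
  // Each (action, observation) pair will appear in the result's intermediateSteps field
  returnIntermediateSteps: true,
});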
Run the agent
We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won’t remember previous interactions).
First up, let’s see how it responds when there’s no need to call a tool:
await agentExecutor.invoke({ input: "hi!" });
{ input: "hi!", output: "Hello! How can I assist you today?" }
In order to see exactly what is happening under the hood (and to make sure it’s not calling a tool), we can take a look at the LangSmith trace.
Let’s now try it out on an example where it should be invoking the retriever.
await agentExecutor.invoke({ input: "how can langsmith help with testing?" });
{
input: "how can langsmith help with testing?",
output: "LangSmith can be a valuable tool for testing in several ways:\n" +
"\n" +
"1. **Logging Traces**: LangSmith prov"... 960 more characters
}
Let’s take a look at the LangSmith trace to make sure it’s actually calling the retriever tool.
Now let’s try one where it needs to call the search tool:
await agentExecutor.invoke({ input: "whats the weather in sf?" });
{
input: "whats the weather in sf?",
output: "The current weather in San Francisco, California is partly cloudy with a temperature of 12.2°C (54.0"... 176 more characters
}
We can check out the LangSmith trace to make sure it’s calling the search tool effectively.
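You can also watch the tool calls happen live by streaming events from the executor - a sketch using the streamEvents API available on recent versions of LangChain:

const eventStream = await agentExecutor.streamEvents(
  { input: "whats the weather in sf?" },
  { version: "v2" }
);

for await (const event of eventStream) {
  // Log each tool invocation as the agent makes it
  if (event.event === "on_tool_start") {
    console.log(`Calling tool ${event.name} with input:`, event.data.input);
  }
}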
Adding in memory
As mentioned earlier, this agent is stateless. This means it does not
remember previous interactions. To give it memory we need to pass in
previous chat_history
.
Note: The input variable needs to be called chat_history
because
of the prompt we are using. If we use a different prompt, we could
change the variable name.
// Here we pass in an empty list of messages for chat_history because it is the first message in the chat
await agentExecutor.invoke({ input: "hi! my name is bob", chat_history: [] });
{
input: "hi! my name is bob",
chat_history: [],
output: "Hello Bob! How can I assist you today?"
}
import { AIMessage, HumanMessage } from "@langchain/core/messages";
await agentExecutor.invoke({
chat_history: [
new HumanMessage("hi! my name is bob"),
new AIMessage("Hello Bob! How can I assist you today?"),
],
input: "what's my name?",
});
{
chat_history: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "hi! my name is bob",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "hi! my name is bob",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Hello Bob! How can I assist you today?",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Hello Bob! How can I assist you today?",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
],
input: "what's my name?",
output: "Your name is Bob. How can I assist you further?"
}
If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory.
Because we have multiple inputs, we need to specify two things:
- inputMessagesKey: The input key to use to add to the conversation history.
- historyMessagesKey: The key to add the loaded messages into.
For more information on how to use this, see this guide.
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { BaseChatMessageHistory } from "@langchain/core/chat_history";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
const store: Record<string, BaseChatMessageHistory> = {};

// Return the chat history for a session, creating a new one on first use
function getMessageHistory(sessionId: string): BaseChatMessageHistory {
  if (!(sessionId in store)) {
    store[sessionId] = new ChatMessageHistory();
  }
  return store[sessionId];
}
const agentWithChatHistory = new RunnableWithMessageHistory({
runnable: agentExecutor,
getMessageHistory,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
});
await agentWithChatHistory.invoke(
{ input: "hi! I'm bob" },
{ configurable: { sessionId: "<foo>" } }
);
{
input: "hi! I'm bob",
chat_history: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "hi! I'm bob",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "hi! I'm bob",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Hello, Bob! How can I assist you today?",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Hello, Bob! How can I assist you today?",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
],
output: "Hello, Bob! How can I assist you today?"
}
await agentWithChatHistory.invoke(
{ input: "what's my name?" },
{ configurable: { sessionId: "<foo>" } }
);
{
input: "what's my name?",
chat_history: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "hi! I'm bob",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "hi! I'm bob",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Hello, Bob! How can I assist you today?",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Hello, Bob! How can I assist you today?",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what's my name?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what's my name?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "Your name is Bob. How can I assist you further?",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Your name is Bob. How can I assist you further?",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
],
output: "Your name is Bob. How can I assist you further?"
}
Example LangSmith trace: https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r
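Note that histories are keyed by sessionId, so a fresh session starts from a clean slate - for example (the sessionId value here is just a placeholder):

await agentWithChatHistory.invoke(
  { input: "what's my name?" },
  // A new sessionId has no stored messages, so the agent won't know the name
  { configurable: { sessionId: "<bar>" } }
);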
Next steps
That’s a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there’s a lot to learn!
This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we’d recommend checking out LangGraph.
You can also see this guide to help migrate to LangGraph.
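For reference, the rough LangGraph equivalent of the agent built above is only a few lines - a minimal sketch assuming the @langchain/langgraph package is installed:

import { createReactAgent } from "@langchain/langgraph/prebuilt";

// The prebuilt ReAct-style agent handles the tool-calling loop that AgentExecutor managed above
const langgraphAgent = createReactAgent({ llm: model, tools });

await langgraphAgent.invoke({
  messages: [new HumanMessage("whats the weather in sf?")],
});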