docs[patch]: Update LangGraph docs (langchain-ai#4873)
* Update LangGraph docs

* Format
jacoblee93 authored Mar 25, 2024
1 parent cf08fa7 · commit e3319d9
Showing 1 changed file with 17 additions and 14 deletions.
docs/core_docs/docs/langgraph.mdx (31 changes: 17 additions & 14 deletions)
````diff
@@ -396,7 +396,10 @@ Let's define the nodes, as well as a function to decide what conditional edg
 ```typescript
 import { FunctionMessage } from "@langchain/core/messages";
 import { AgentAction } from "@langchain/core/agents";
-import type { RunnableConfig } from "@langchain/core/runnables";
+import {
+  ChatPromptTemplate,
+  MessagesPlaceholder,
+} from "@langchain/core/prompts";
 
 // Define the function that determines whether to continue or not
 const shouldContinue = (state: { messages: Array<BaseMessage> }) => {
````
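
The `RunnableConfig` type import is removed because the rewritten `callModel` and `callTool` in the next hunk no longer accept a `config` argument, while `ChatPromptTemplate` and `MessagesPlaceholder` are imported for the prompt that `callModel` now builds.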
````diff
@@ -428,33 +431,33 @@ const _getAction = (state: { messages: Array<BaseMessage> }): AgentAction => {
   // We construct an AgentAction from the function_call
   return {
     tool: lastMessage.additional_kwargs.function_call.name,
-    toolInput: JSON.stringify(
+    toolInput: JSON.parse(
       lastMessage.additional_kwargs.function_call.arguments
     ),
     log: "",
   };
 };
 
 // Define the function that calls the model
-const callModel = async (
-  state: { messages: Array<BaseMessage> },
-  config?: RunnableConfig
-) => {
+const callModel = async (state: { messages: Array<BaseMessage> }) => {
   const { messages } = state;
-  const response = await newModel.invoke(messages, config);
+  // You can use a prompt here to tweak model behavior.
+  // You can also just pass messages to the model directly.
+  const prompt = ChatPromptTemplate.fromMessages([
+    ["system", "You are a helpful assistant."],
+    new MessagesPlaceholder("messages"),
+  ]);
+  const response = await prompt.pipe(newModel).invoke({ messages });
   // We return a list, because this will get added to the existing list
   return {
     messages: [response],
   };
 };
 
-const callTool = async (
-  state: { messages: Array<BaseMessage> },
-  config?: RunnableConfig
-) => {
+const callTool = async (state: { messages: Array<BaseMessage> }) => {
   const action = _getAction(state);
   // We call the tool_executor and get back a response
-  const response = await toolExecutor.invoke(action, config);
+  const response = await toolExecutor.invoke(action);
   // We use the response to create a FunctionMessage
   const functionMessage = new FunctionMessage({
     content: response,
````
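
Two substantive changes here: `_getAction` now uses `JSON.parse` instead of `JSON.stringify` (the model returns `function_call.arguments` as a JSON string, so it needs decoding, not re-encoding), and `callModel` routes the message history through a `ChatPromptTemplate` piped into the model rather than invoking the model directly. A minimal sketch of the parsing fix, using a hypothetical `function_call` payload:

```typescript
// Hypothetical shape of a model response that requested a tool call;
// `arguments` arrives as a JSON *string*, not an object.
const lastMessage = {
  additional_kwargs: {
    function_call: {
      name: "search",
      arguments: '{"query": "current weather in SF"}',
    },
  },
};

const action = {
  tool: lastMessage.additional_kwargs.function_call.name,
  // JSON.parse decodes the string into a structured toolInput;
  // JSON.stringify would have double-encoded it into a quoted string.
  toolInput: JSON.parse(lastMessage.additional_kwargs.function_call.arguments),
  log: "",
};

console.log(action.toolInput); // { query: "current weather in SF" }
```

In `callModel`, `MessagesPlaceholder("messages")` splices the accumulated message history into the prompt, so a system message can be prepended without changing the graph's state shape.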
Expand Down Expand Up @@ -532,7 +535,7 @@ const inputs = {
const result = await app.invoke(inputs);
```

See a LangSmith trace of this run [here](https://smith.langchain.com/public/2562d46e-da94-4c9d-9b14-3759a26aec9b/r).
See a LangSmith trace of this run [here](https://smith.langchain.com/public/144af8a3-b496-43aa-ba9d-f0d5894196e2/r).

This may take a little bit - it's making a few calls behind the scenes.
In order to start seeing some intermediate results as they happen, we can use streaming - see below for more information on that.
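
The updated link points at a LangSmith trace of the revised graph. For context, here is a sketch of how the compiled graph is invoked and the final answer read back; the `inputs` object is elided in the hunk header above, so the `HumanMessage` below is an illustrative assumption:

```typescript
import { HumanMessage } from "@langchain/core/messages";

// Hypothetical inputs; the doc's actual object is truncated in the hunk above.
const inputs = {
  messages: [new HumanMessage("What is the weather in San Francisco?")],
};

// `app.invoke` runs the graph to completion and returns the final state,
// so the agent's answer is the last message in the accumulated list.
const result = await app.invoke(inputs);
const finalMessage = result.messages[result.messages.length - 1];
console.log(finalMessage.content);
```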
````diff
@@ -555,7 +558,7 @@ for await (const output of await app.stream(inputs)) {
 }
 ```
 
-See a LangSmith trace of this run [here](https://smith.langchain.com/public/9afacb13-b9dc-416e-abbe-6ed2a0811afe/r).
+See a LangSmith trace of this run [here](https://smith.langchain.com/public/968cd1bf-0db2-410f-a5b4-0e73066cf06e/r).
 
 ## Running Examples
````
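
The `for await (... of await app.stream(inputs))` loop in the hunk header above surfaces intermediate results node by node. A sketch of what the loop body might look like; the chunk shape (an object keyed by node name) matches LangGraph's streaming output, but the logging itself is an illustrative assumption since the loop body is elided in the diff:

```typescript
// Each streamed chunk maps a node name to that node's output, so you can
// watch intermediate results (model calls, tool calls) as the graph runs.
for await (const output of await app.stream(inputs)) {
  for (const [nodeName, value] of Object.entries(output)) {
    console.log(`Output from node "${nodeName}":`);
    console.log(value);
    console.log("-----");
  }
}
```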
