LangGraph: Revolutionize your AI agents

LangGraph: Revolutionize your AI agents LangGraph: Revolutionize your AI agents

LangGraphlink image 55

Disclaimer: This post has been translated to English using a machine translation model. Please, let me know if you find any mistakes.

LangGraph is a low-level orchestration framework for building controllable agents

While LangChain provides integrations and components to accelerate the development of LLM applications, the LangGraph library enables the orchestration of agents, offering customizable architectures, long-term memory, and human in the loop to reliably handle complex tasks.

In this post, we are going to disable LangSmith, which is a graph debugging tool. We will disable it to avoid adding more complexity to the post and focus solely on LangGraph

How does LangGraph work?link image 56

LangGraph is based on three components:

  • Nodes: Represent the processing units of the application, such as calling an LLM or a tool. They are Python functions that run when the node is called.* Take the state as input* Perform some operation* Return the updated state* Edges: Represent the transitions between nodes. They define the logic of how the graph will be executed, that is, which node will run after another. They can be:* Directs: Go from one node to another* Conditionals: Depend on a condition* State: Represents the state of the application, that is, it contains all the necessary information for the application. It is maintained during the execution of the application. It is defined by the user, so you need to think carefully about what will be stored in it.

LangGraph concept

All LangGraph graphs start from a START node and end at an END node.

Installation of LangGraphlink image 57

To install LangGraph, you can use pip:

pip install -U langgraph```
or install from Conda:
```bash
conda install langgraph```

Installation of Hugging Face and Anthropic Moduleslink image 58

We are going to use a language model from Hugging Face, so we need to install its langgraph package.

pip install langchain-huggingface```

For one part we are going to use Sonnet 3.7, then we will explain why. So we also install the Anthropic package.

pip install langchain_anthropic```

Hugging Face API Keylink image 59

We are going to use Qwen/Qwen2.5-72B-Instruct through Hugging Face Inference Endpoints, so we need an API KEY.

To be able to use the Inference Endpoints of HuggingFace, the first thing you need is to have an account on HuggingFace. Once you have one, you need to go to Access tokens in your profile settings and generate a new token. We need to give it a name. In my case, I'm going to call it langgraph and enable the permission Make calls to inference providers. It will create a token that we need to copy.

To manage the token, we are going to create a file in the same path where we are working called .env and we will put the token we have copied into the file in the following way:

HUGGINGFACE_LANGGRAPH="hf_...."```

Now, to obtain the token, we need to have dotenv installed, which we install through

pip install python-dotenv```
We run the following
	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
Copy

Now that we have a token, we create a client. For this, we need to have the huggingface_hub library installed. We install it using conda or pip.

pip install --upgrade huggingface_hub```
o
``` bash
conda install -c conda-forge huggingface_hub```

Now we have to choose which model we are going to use. You can see the available models on the Supported models page of the Inference Endpoints documentation from Hugging Face. We are going to use Qwen2.5-72B-Instruct which is a very good model.

	
MODEL = "Qwen/Qwen2.5-72B-Instruct"
Copy

Now we can create the client

	
from huggingface_hub import InferenceClient
client = InferenceClient(api_key=HUGGINGFACE_TOKEN, model=MODEL)
client
Copy
	
<InferenceClient(model='Qwen/Qwen2.5-72B-Instruct', timeout=None)>

We do a test to see if it works

	
message = [
{opening_brace} "role": "user", "content": "Hola, qué tal?" {closing_brace}
]
stream = client.chat.completions.create(
messages=message,
temperature=0.5,
max_tokens=1024,
top_p=0.7,
stream=False
)
response = stream.choices[0].message.content
print(response)
Copy
	
¡Hola! Estoy bien, gracias por preguntar. ¿Cómo estás tú? ¿En qué puedo ayudarte hoy?

API KEY of Anthropiclink image 60

Create a basic chatbotlink image 61

We are going to create a simple chatbot using LangGraph. This chatbot will respond directly to the user's messages. Although it is simple, it will serve to see the basic concepts of building graphs with LangGraph.

As its name suggests, LangGraph is a library for handling graphs. So we start by creating a graph StateGraph. A StateGraph defines the structure of our chatbot as a state machine. We will add nodes to our graph to represent the llms, tools, and functions, where the llms can make use of these tools and functions; and we add edges to specify how the bot should transition between those nodes.

So we start by creating a StateGraph that needs a State class to handle the graph state. Since we are now going to create a simple chatbot, we only need to handle a list of messages in the state.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
class State(TypedDict):
# Messages have the type "list". The `add_messages` function
# in the annotation defines how this state key should be updated
# (in this case, it appends messages to the list, rather than overwriting them)
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Copy

The function add_messages combines two lists of messages. New message lists will arrive, so they will be added to the existing message list. Each message list contains an ID, so they are added with this ID. This ensures that messages are only added, not replaced, unless a new message has the same ID as an existing one, in which case it is replaced. add_messages is a reducer function, it is a function responsible for updating the state.

The graph graph_builder that we have created receives a state State and returns a new state State. Additionally, it updates the list of messages.

Concept>> When defining a graph, the first step is to define its State. The State includes the graph schema and the reducer functions that handle state updates.>> In our example, State is of type TypedDict (typed dictionary) with one key: messages.>> add_messages is a reducer function used to add new messages to the list instead of overwriting them in the list. If a state key does not have a reducer function, each value coming from that key will overwrite the previous values.>> add_messages is a reducer function of langgraph, but we will be able to create our own

Now we are going to add the chatbot node to the graph. Nodes represent units of work. Usually, they are regular Python functions.

We add a node with the add_node method that receives the name of the node and the function that will be executed.

So we are going to create an LLM with HuggingFace, then we will create a chat model with LangChain that will reference the created LLM. Once we have defined a chat model, we define the function that will be executed in the node of our graph. That function will make a call to the created chat model and return the result.

Lastly, we are going to add a node with the chatbot function to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM model
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Define the chatbot function
def chatbot_function(state: State):
return {opening_brace}"messages": [llm.invoke(state["messages"])]}
# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x130548440>

We have used ChatHuggingFace which is a chat of the type BaseChatModel that is a base chat type of LangChain. Once we created the BaseChatModel, we created the function chatbot_function that will run when the node is executed. And finally, we created the node chatbot_node and indicated that it has to execute the function chatbot_function.

Notice>> The node function chatbot_function takes the state State as input and returns a dictionary that contains an update to the list messages for the key messages. This is the basic pattern for all functions of the node LangGraph.

The reducer function of our graph add_messages will add the response messages from the llm to any message that is already in the state.

Next, we add an entry node. This tells our graph where to start its work each time we run it.

	
from langgraph.graph import START
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x130548440>

Similarly, we add a finish node. This indicates to the graph that each time this node is executed, it can finish the job.

	
from langgraph.graph import END
graph_builder.add_edge("chatbot_node", END)
Copy
	
<langgraph.graph.state.StateGraph at 0x130548440>

We have imported START and END which can be found in constants and are the first and the last node of our graph. Normally they are virtual nodes

Finally, we need to compile our graph. To do this, we use the graph construction method compile(). This creates a CompiledGraph that we can use to run our application.

	
graph = graph_builder.compile()
Copy

We can visualize the graph using the get_graph method and one of the "drawing" methods, such as draw_ascii or draw_mermaid_png. Each drawing method requires additional dependencies.

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 1

Now we can test the chatbot!

Tip>> In the following code block, you can exit the chat loop at any time by typing quit, exit, or q.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str):
for event in graph.stream({opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace}):
for value in event.values():
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}value['messages'][-1].content{closing_brace}")
while True:
try:
user_input = input("User: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
events =stream_graph_updates(user_input)
except:
# fallback if input() is not available
user_input = "What do you know about LangGraph?"
print("User: " + user_input)
stream_graph_updates(user_input)
break
Copy
	
User: Hello
Assistant: Hello! It's nice to meet you. How can I assist you today? Whether you have questions, need information, or just want to chat, I'm here to help!
User: How are you doing?
Assistant: I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with any questions or tasks you have! How can I assist you today?
User: Me well, I'm making a post about LangGraph, what do you think?
Assistant: LangGraph is an intriguing topic, especially if you're delving into the realm of graph-based models and their applications in natural language processing (NLP). LangGraph, as I understand, is a framework or tool that leverages graph theory to improve or provide a new perspective on NLP tasks such as text classification, information extraction, and semantic analysis. By representing textual information as graphs (nodes for entities and edges for relationships), it can offer a more nuanced understanding of the context and semantics in language data.
If you're making a post about it, here are a few points you might consider:
1. **Introduction to LangGraph**: Start with a brief explanation of what LangGraph is and its core principles. How does it model language or text differently compared to traditional NLP approaches? What unique advantages does it offer by using graph-based methods?
2. **Applications of LangGraph**: Discuss some of the key applications where LangGraph has been or can be applied. This could include improving the accuracy of sentiment analysis, enhancing machine translation, or optimizing chatbot responses to be more contextually aware.
3. **Technical Innovations**: Highlight any technical innovations or advancements that LangGraph brings to the table. This could be about new algorithms, more efficient data structures, or novel ways of training models on graph data.
4. **Challenges and Limitations**: It's also important to address the challenges and limitations of using graph-based methods in NLP. Performance, scalability, and the current state of the technology can be discussed here.
5. **Future Prospects**: Wrap up with a look into the future of LangGraph and graph-based NLP in general. What are the upcoming trends, potential areas of growth, and how might these tools start impacting broader technology landscapes?
Each section can help frame your post in a way that's informative and engaging for your audience, whether they're technical experts or casual readers looking for an introduction to this intriguing area of NLP.
User: q
Assistant: Goodbye!

Congratulations! You have built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses using the LLM we defined.

We have been writing the code step by step, and it might not have been very clear. This was done to explain each part of the code, but now we are going to rewrite it, but in a different order, which looks clearer. That is, now that there's no need to explain each part of the code, we will group it differently to make it more clear.

from typing import Annotated
      from typing_extensions import TypedDict
      
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from huggingface_hub import login
      
      from IPython.display import Image, display
      
      import os
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      import dotenv
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      # State
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Function
      def chatbot_function(state: State):
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Start to build the graph
      graph_builder = StateGraph(State)
      
      # Add nodes to the graph
      graph_builder.add_node("chatbot_node", chatbot_function)
      
      # Add edges
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      # Display the graph
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 2

Morelink image 62

All the more blocks are there if you want to delve deeper into LangGraph. If not, you can read everything without reading the more blocks.

State Typinglink image 63

We have seen how to create an agent with a typed state using TypedDict, but we can create it with another typed type.

Typing with TypeDictlink image 64

It's the form we've seen before, we type the state as a dictionary using Python's TypeDict typing. We pass a key and a value for each variable in our state.

from typing_extensions import TypedDictfrom typing import Annotatedfrom langgraph.graph.message import add_messagesfrom langgraph.graph import StateGraph
class State(TypedDict):messages: Annotated[list, add_messages]```

To access the messages, we do it as with any dictionary, using state["messages"]

Typing with dataclasslink image 65

Another option is to use the Python dataclass typing

from dataclasses import dataclassfrom typing import Annotatedfrom langgraph.graph.message import add_messagesfrom langgraph.graph import StateGraph
@dataclassclass State:messages: Annotated[list, add_messages]```

As can be seen, it is similar to typing with dictionaries, but now, since the state is a class, we access the messages through state.messages

Typing with Pydanticlink image 66

Pydantic is a widely used library for type hinting data in Python. It offers the possibility to add type checking. We are going to check that the message starts with 'User', 'Assistant', or 'System'.

from pydantic import BaseModel, field_validator, ValidationErrorfrom typing import Annotatedfrom langgraph.graph.message import add_messages
class State(BaseModel):messages: Annotated[list, add_messages] # Should start by 'User', 'Assistant' or 'System'
@field_validator('messages')@classmethoddef validate_messages(cls, value):# Ensure the messages start with `User`, `Assistant` or `System`if not value.startswith["'User'"] and not value.startswith["'Assistant'"] and not value.startswith["'System'"]:raise ValueError("Message must start with 'User', 'Assistant' or 'System'")return value
try:state = PydanticState(messages=["Hello"])except ValidationError as e:print("Validation Error:", e)```

Reducerslink image 67

As we have said, we need to use a function of type Reducer to indicate how to update the state, since otherwise the state values will be overwritten.

Let's see an example of a graph where we don't use a Reducer function to indicate how to update the state

from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from IPython.display import Image, display
      
      class State(TypedDict):
          foo: int
      
      def node_1(state):
          print("---Node 1---")
          return {"foo": state['foo'] + 1}
      
      def node_2(state):
          print("---Node 2---")
          return {"foo": state['foo'] + 1}
      
      def node_3(state):
          print("---Node 3---")
          return {"foo": state['foo'] + 1}
      
      # Build graph
      builder = StateGraph(State)
      builder.add_node("node_1", node_1)
      builder.add_node("node_2", node_2)
      builder.add_node("node_3", node_3)
      
      # Logic
      builder.add_edge(START, "node_1")
      builder.add_edge("node_1", "node_2")
      builder.add_edge("node_1", "node_3")
      builder.add_edge("node_2", END)
      builder.add_edge("node_3", END)
      
      # Add
      graph = builder.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 3

As we can see, we have defined a graph in which node 1 is executed first, followed by nodes 2 and 3. Let's run it to see what happens.

	
from langgraph.errors import InvalidUpdateError
try:
graph.invoke({"foo" : 1})
except InvalidUpdateError as e:
print(f"InvalidUpdateError occurred: {e}")
Copy
	
---Node 1---
---Node 2---
---Node 3---
InvalidUpdateError occurred: At key 'foo': Can receive only one value per step. Use an Annotated key to handle multiple values.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/INVALID_CONCURRENT_GRAPH_UPDATE

We get an error because first node 1 modifies the value of foo and then nodes 2 and 3 try to modify the value of foo in parallel, which results in an error.

So to avoid that, we use a function of type Reducer to indicate how to modify the state

Predefined reducerslink image 68

We use the Annotated type to specify that it is a function of type Reducer. And we use the add operator to add a value to a list.

from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from IPython.display import Image, display
      from operator import add
      from typing import Annotated
      
      class State(TypedDict):
          foo: Annotated[list[int], add]
      
      def node_1(state):
          print("---Node 1---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_2(state):
          print("---Node 2---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_3(state):
          print("---Node 3---")
          return {"foo": [state['foo'][-1] + 1]}
      
      # Build graph
      builder = StateGraph(State)
      builder.add_node("node_1", node_1)
      builder.add_node("node_2", node_2)
      builder.add_node("node_3", node_3)
      
      # Logic
      builder.add_edge(START, "node_1")
      builder.add_edge("node_1", "node_2")
      builder.add_edge("node_1", "node_3")
      builder.add_edge("node_2", END)
      builder.add_edge("node_3", END)
      
      # Add
      graph = builder.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 4

We run it again to see what happens

graph.invoke({"foo" : [1]})
      
---Node 1---
      ---Node 2---
      ---Node 3---
      
Out[8]:
{'foo': [1, 2, 3, 3]}

As we see, we initialize the value of foo to 1, which is added to a list. Then node 1 adds 1 and appends it as a new value in the list, that is, it adds a 2. Finally, nodes 2 and 3 add one to the last value in the list, meaning both nodes get a 3 and both nodes append it to the end of the list, which is why the resulting list has two 3s at the end.

Let's consider the case where one branch has more nodes than another

from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from IPython.display import Image, display
      from operator import add
      from typing import Annotated
      
      class State(TypedDict):
          foo: Annotated[list[int], add]
      
      def node_1(state):
          print("---Node 1---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_2_1(state):
          print("---Node 2_1---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_2_2(state):
          print("---Node 2_2---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_3(state):
          print("---Node 3---")
          return {"foo": [state['foo'][-1] + 1]}
      
      # Build graph
      builder = StateGraph(State)
      builder.add_node("node_1", node_1)
      builder.add_node("node_2_1", node_2_1)
      builder.add_node("node_2_2", node_2_2)
      builder.add_node("node_3", node_3)
      
      # Logic
      builder.add_edge(START, "node_1")
      builder.add_edge("node_1", "node_2_1")
      builder.add_edge("node_1", "node_3")
      builder.add_edge("node_2_1", "node_2_2")
      builder.add_edge("node_2_2", END)
      builder.add_edge("node_3", END)
      
      # Add
      graph = builder.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 5

If we now run the graph

graph.invoke({"foo" : [1]})
      
---Node 1---
      ---Node 2_1---
      ---Node 3---
      ---Node 2_2---
      
Out[3]:
{'foo': [1, 2, 3, 3, 4]}

What has happened is that first node 1 was executed, followed by node 2_1, then in parallel nodes 2_2 and 3, and finally the END node.

Since we have defined foo as a list of integers, and it is typed, if we initialize the state with None we get an error

	
try:
graph.invoke({"foo" : None})
except TypeError as e:
print(f"TypeError occurred: {e}")
Copy
	
TypeError occurred: can only concatenate list (not "NoneType") to list

Let's see how to fix it with custom reducers

Custom reducerslink image 69

Sometimes we can't use a predefined Reducer and we have to create our own.

from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from IPython.display import Image, display
      from typing import Annotated
      
      def reducer_function(current_list, new_item: list | None):
          if current_list is None:
              current_list = []
      
          if new_item is not None:
              return current_list + new_item
          return current_list
      
      class State(TypedDict):
          foo: Annotated[list[int], reducer_function]
      
      def node_1(state):
          print("---Node 1---")
          if len(state['foo']) == 0:
              return {'foo': [0]}
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_2(state):
          print("---Node 2---")
          return {"foo": [state['foo'][-1] + 1]}
      
      def node_3(state):
          print("---Node 3---")
          return {"foo": [state['foo'][-1] + 1]}
      
      # Build graph
      builder = StateGraph(State)
      builder.add_node("node_1", node_1)
      builder.add_node("node_2", node_2)
      builder.add_node("node_3", node_3)
      
      # Logic
      builder.add_edge(START, "node_1")
      builder.add_edge("node_1", "node_2")
      builder.add_edge("node_1", "node_3")
      builder.add_edge("node_2", END)
      builder.add_edge("node_3", END)
      
      # Add
      graph = builder.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 6

If we now initialize the graph with a value of None, it won't give us an error.

	
try:
graph.invoke({"foo" : None})
except TypeError as e:
print(f"TypeError occurred: {e}")
Copy
	
---Node 1---
---Node 2---
---Node 3---

Multiple stateslink image 70

Private stateslink image 71

Suppose we want to hide state variables for whatever reason, because some variables only add noise or because we want to keep some variable private.

If we want to have a private state, we simply create it.

from typing_extensions import TypedDict
      from IPython.display import Image, display
      from langgraph.graph import StateGraph, START, END
      
      class OverallState(TypedDict):
          public_var: int
      
      class PrivateState(TypedDict):
          private_var: int
      
      def node_1(state: OverallState) -> PrivateState:
          print("---Node 1---")
          return {"private_var": state['public_var'] + 1}
      
      def node_2(state: PrivateState) -> OverallState:
          print("---Node 2---")
          return {"public_var": state['private_var'] + 1}
      
      # Build graph
      builder = StateGraph(OverallState)
      builder.add_node("node_1", node_1)
      builder.add_node("node_2", node_2)
      
      # Logic
      builder.add_edge(START, "node_1")
      builder.add_edge("node_1", "node_2")
      builder.add_edge("node_2", END)
      
      # Add
      graph = builder.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 7

As we can see, we have created the private state PrivateState and the public state OverallState. Each one has a private variable and a public variable. First, node 1 is executed, which modifies the private variable and returns it. Then, node 2 is executed, which modifies the public variable and returns it. Let's run the graph to see what happens.

graph.invoke({"public_var" : 1})
      
---Node 1---
      ---Node 2---
      
Out[2]:
{'public_var': 3}

As we can see when running the graph, we pass the public variable public_var and get another public variable public_var at the output with the modified value, but the private variable private_var was never accessed.

Input and output stateslink image 72

We can define the input and output variables of the graph. Although internally the state may have more variables, we define which variables are inputs to the graph and which variables are outputs.

from typing_extensions import TypedDict
      from IPython.display import Image, display
      from langgraph.graph import StateGraph, START, END
      
      class InputState(TypedDict):
          question: str
      
      class OutputState(TypedDict):
          answer: str
      
      class OverallState(TypedDict):
          question: str
          answer: str
          notes: str
      
      def thinking_node(state: InputState):
          return {"answer": "bye", "notes": "... his is name is Lance"}
      
      def answer_node(state: OverallState) -> OutputState:
          return {"answer": "bye Lance"}
      
      graph = StateGraph(OverallState, input=InputState, output=OutputState)
      
      graph.add_node("answer_node", answer_node)
      graph.add_node("thinking_node", thinking_node)
      graph.add_edge(START, "thinking_node")
      graph.add_edge("thinking_node", "answer_node")
      graph.add_edge("answer_node", END)
      
      graph = graph.compile()
      
      # View
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 8

In this case, the state has 3 variables, question, answer and notes. However, we define question as the input to the graph and answer as the output of the graph. Therefore, the internal state can have more variables, but they are not taken into account when invoking the graph. Let's run the graph to see what happens

	
graph.invoke({"question":"hi"})
Copy
	
{'answer': 'bye Lance'}

As we can see, we have added question to the graph and obtained answer at the output.

Context Handlinglink image 73

Let's revisit the code of the basic chatbot

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Define the chatbot function
      def chatbot_function(state: State):
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chatbot_function)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 9

Let's create a context that we will pass to the model

	
from langchain_core.messages import AIMessage, HumanMessage
messages = [AIMessage(f"So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage(f"Yes, I know about whales. But what others should I learn about?", name="Lance"))
for m in messages:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?

If we pass it to the graph, we will get the output

	
output = graph.invoke({'messages': messages})
for m in output['messages']:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
That's a great topic! Besides whales, there are several other fascinating ocean mammals you might want to learn about. Here are a few:
1. **Dolphins**: Highly intelligent and social, dolphins are found in all oceans of the world. They are known for their playful behavior and communication skills.
2. **Porpoises**: Similar to dolphins but generally smaller and stouter, porpoises are less social and more elusive. They are found in coastal waters around the world.
3. **Seals and Sea Lions**: These are semi-aquatic mammals that can be found in both Arctic and Antarctic regions, as well as in more temperate waters. They are known for their sleek bodies and flippers, and they differ in their ability to walk on land (sea lions can "walk" on their flippers, while seals can only wriggle or slide).
4. **Walruses**: Known for their large tusks and whiskers, walruses are found in the Arctic. They are well-adapted to cold waters and have a thick layer of blubber to keep them warm.
5. **Manatees and Dugongs**: These gentle, herbivorous mammals are often called "sea cows." They live in shallow, coastal areas and are found in tropical and subtropical regions. Manatees are found in the Americas, while dugongs are found in the Indo-Pacific region.
6. **Otters**: While not fully aquatic, sea otters spend most of their lives in the water and are excellent swimmers. They are known for their dense fur, which keeps them warm in cold waters.
7. **Polar Bears**: Although primarily considered land animals, polar bears are excellent swimmers and spend a significant amount of time in the water, especially when hunting for seals.
Each of these mammals has unique adaptations and behaviors that make them incredibly interesting to study. If you have any specific questions or topics you'd like to explore further, feel free to ask!

As we can see now in the output, we have an additional message. If this continues to grow, there will come a point when we have a very long context, which will mean a higher token expenditure, potentially leading to greater economic costs, and also resulting in increased latency. Moreover, with very long contexts, LLMs start to perform worse. In the latest models, as of the writing of this post, performance of the LLM starts to decline above 8k context tokens.

So we are going to look at several ways to manage this

Modify the context with Reducer functionslink image 74

We have seen that with Reducer functions we can modify the state messages.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import RemoveMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def filter_messages(state: State):
          # Delete all but the 2 most recent messages
          delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
          return {"messages": delete_messages}
      
      def chat_model_node(state: State):    
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("filter_messages_node", filter_messages)
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "filter_messages_node")
      graph_builder.add_edge("filter_messages_node", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 10

As we see in the graph, first we filter the messages and then pass the result to the model.

We recreate a context that we will pass to the model, but now with more messages

	
from langchain_core.messages import AIMessage, HumanMessage
messages = [AIMessage(f"So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage(f"Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage(f"I know about sharks too", name="Bot"))
messages.append(HumanMessage(f"What others should I learn about?", name="Lance"))
messages.append(AIMessage(f"I know about dolphins too", name="Bot"))
messages.append(HumanMessage(f"Tell me more about dolphins", name="Lance"))
for m in messages:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins

If we pass it to the graph, we will get the output

	
output = graph.invoke({'messages': messages})
for m in output['messages']:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
================================== Ai Message ==================================
Dolphins are highly intelligent marine mammals that are part of the family Delphinidae, which includes about 40 species. They are found in oceans worldwide, from tropical to temperate regions, and are known for their agility and playful behavior. Here are some interesting facts about dolphins:
1. **Social Behavior**: Dolphins are highly social animals and often live in groups called pods, which can range from a few individuals to several hundred. Social interactions are complex and include cooperative behaviors, such as hunting and defending against predators.
2. **Communication**: Dolphins communicate using a variety of sounds, including clicks, whistles, and body language. These sounds can be used for navigation (echolocation), communication, and social bonding. Each dolphin has a unique signature whistle that helps identify it to others in the pod.
3. **Intelligence**: Dolphins are considered one of the most intelligent animals on Earth. They have large brains and display behaviors such as problem-solving, mimicry, and even the use of tools. Some studies suggest that dolphins can recognize themselves in mirrors, indicating a level of self-awareness.
4. **Diet**: Dolphins are carnivores and primarily feed on fish and squid. They use echolocation to locate and catch their prey. Some species, like the bottlenose dolphin, have been observed using teamwork to herd fish into tight groups, making them easier to catch.
5. **Reproduction**: Dolphins typically give birth to a single calf after a gestation period of about 10 to 12 months. Calves are born tail-first and are immediately helped to the surface for their first breath by their mother or another dolphin. Calves nurse for up to two years and remain dependent on their mothers for a significant period.
6. **Conservation**: Many dolphin species are threatened by human activities such as pollution, overfishing, and habitat destruction. Some species, like the Indo-Pacific humpback dolphin and the Amazon river dolphin, are endangered. Conservation efforts are crucial to protect these animals and their habitats.
7. **Human Interaction**: Dolphins have a long history of interaction with humans, often appearing in mythology and literature. In some cultures, they are considered sacred or bring good luck. Today, dolphins are popular in marine parks and are often the focus of eco-tourism activities, such as dolphin-watching tours.
Dolphins continue to fascinate scientists and the general public alike, with ongoing research into their behavior, communication, and social structures providing new insights into these remarkable creatures.

As can be seen, the filtering function has removed all messages except the last two, and those two messages have been passed as context to the LLM.

Trimming messageslink image 75

Another solution is to trim each message from the list of messages that have too many tokens, a token limit is set and the message that exceeds this limit is removed.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import trim_messages
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def trim_messages_node(state: State):
          # Trim the messages based on the specified parameters
          trimmed_messages = trim_messages(
              state["messages"],
              max_tokens=100,       # Maximum tokens allowed in the trimmed list
              strategy="last",     # Keep the latest messages
              token_counter=llm,   # Use the LLM's tokenizer to count tokens
              allow_partial=True,  # Allow cutting messages mid-way if needed
          )
      
          # Print the trimmed messages to see the effect of trim_messages
          print("--- trimmed messages (input to LLM) ---")
          for m in trimmed_messages:
              m.pretty_print()
          print("------------------------------------------------")
      
          # Invoke the LLM with the trimmed messages
          response = llm.invoke(trimmed_messages)
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("trim_messages_node", trim_messages_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "trim_messages_node")
      graph_builder.add_edge("trim_messages_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 11

As we see in the graph, we first filter the messages and then pass the result to the model.

We recreate a context that we will pass to the model, but now with more messages

	
from langchain_core.messages import AIMessage, HumanMessage
messages = [AIMessage(f"So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage(f"Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage(f"""I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
""", name="Bot"))
messages.append(HumanMessage(f"What others should I learn about?", name="Lance"))
messages.append(AIMessage(f"I know about dolphins too", name="Bot"))
messages.append(HumanMessage(f"Tell me more about dolphins", name="Lance"))
for m in messages:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins

If we pass it to the graph, we will get the output

	
output = graph.invoke({'messages': messages})
Copy
	
--- trimmed messages (input to LLM) ---
================================== Ai Message ==================================
Name: Bot
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
------------------------------------------------

As can be seen, the context passed to the LLM has been truncated. The message, which was very long and had many tokens, has been cut off. Let's see the output of the LLM.

	
for m in output['messages']:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
================================== Ai Message ==================================
Certainly! Dolphins are intelligent marine mammals that are part of the family Delphinidae, which includes nearly 40 species. Here are some interesting facts about dolphins:
1. **Intelligence**: Dolphins are known for their high intelligence and have large brains relative to their body size. They exhibit behaviors that suggest social complexity, self-awareness, and problem-solving skills. For example, they can recognize themselves in mirrors, a trait shared by only a few other species.
2. **Communication**: Dolphins communicate using a variety of clicks, whistles, and body language. Each dolphin has a unique "signature whistle" that helps identify it to others, similar to a human name. They use echolocation to navigate and locate prey by emitting clicks and interpreting the echoes that bounce back.
3. **Social Structure**: Dolphins are highly social animals and often live in groups called pods. These pods can vary in size from a few individuals to several hundred. Within these groups, dolphins form complex social relationships and often cooperate to hunt and protect each other from predators.
4. **Habitat**: Dolphins are found in all the world's oceans and in some rivers. Different species have adapted to various environments, from tropical waters to the cooler regions of the open sea. Some species, like the Amazon river dolphin (also known as the boto), live in freshwater rivers.
5. **Diet**: Dolphins are carnivores and primarily eat fish, squid, and crustaceans. Their diet can vary depending on the species and their habitat. Some species, like the killer whale (which is actually a large dolphin), can even hunt larger marine mammals.
6. **Reproduction**: Dolphins have a long gestation period, typically around 10 to 12 months. Calves are born tail-first and are nursed by their mothers for up to two years. Dolphins often form strong bonds with their offspring and other members of their pod.
7. **Conservation**: Many species of dolphins face threats such as pollution, overfishing, and entanglement in fishing nets. Conservation efforts are ongoing to protect these animals and their habitats. Organizations like the International Union for Conservation of Nature (IUCN) and the World Wildlife Fund (WWF) work to raise awareness and implement conservation measures.
8. **Cultural Significance**: Dolphins have been a source of fascination and inspiration for humans for centuries. They appear in myths, legends, and art across many cultures and are often seen as symbols of intelligence, playfulness, and freedom.
Dolphins are truly remarkable creatures with a lot to teach us about social behavior, communication, and the complexities of marine ecosystems. If you have any specific questions or want to know more about a particular species, feel free to ask!

With a truncated context, the LLM continues to answer

Modification of context and message trimminglink image 76

Let's combine the two previous techniques, we will modify the context and trim the messages.

from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import RemoveMessage, trim_messages
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def filter_messages(state: State):
          # Delete all but the 2 most recent messages
          delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
          return {"messages": delete_messages}
      
      def trim_messages_node(state: State):
          # print the messages
          print("--- messages (input to trim_messages) ---")
          for m in state["messages"]:
              m.pretty_print()
          print("------------------------------------------------")
      
          # Trim the messages based on the specified parameters
          trimmed_messages = trim_messages(
              state["messages"],
              max_tokens=100,       # Maximum tokens allowed in the trimmed list
              strategy="last",     # Keep the latest messages
              token_counter=llm,   # Use the LLM's tokenizer to count tokens
              allow_partial=True,  # Allow cutting messages mid-way if needed
          )
      
          # Print the trimmed messages to see the effect of trim_messages
          print("--- trimmed messages (input to LLM) ---")
          for m in trimmed_messages:
              m.pretty_print()
          print("------------------------------------------------")
      
          # Invoke the LLM with the trimmed messages
          response = llm.invoke(trimmed_messages)
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      def chat_model_node(state: State):    
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("filter_messages_node", filter_messages)
      graph_builder.add_node("chatbot_node", chat_model_node)
      graph_builder.add_node("trim_messages_node", trim_messages_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "filter_messages_node")
      graph_builder.add_edge("filter_messages_node", "trim_messages_node")
      graph_builder.add_edge("trim_messages_node", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 12

Now we filter to keep the last two messages, then trim the context so that it doesn't use too many tokens, and finally pass the result to the model.

We create a context to pass it to the graph

	
from langchain_core.messages import AIMessage, HumanMessage
messages = [AIMessage(f"So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage(f"Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage(f"I know about dolphins too", name="Bot"))
messages.append(HumanMessage(f"What others should I learn about?", name="Lance"))
messages.append(AIMessage(f"""I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
""", name="Bot"))
messages.append(HumanMessage(f"What others should I learn about?", name="Lance"))
for m in messages:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?

We pass it to the graph and get the output

	
output = graph.invoke({'messages': messages})
Copy
	
--- messages (input to trim_messages) ---
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
------------------------------------------------
--- trimmed messages (input to LLM) ---
================================ Human Message =================================
Name: Lance
What others should I learn about?
------------------------------------------------

As we can see, we are only left with the last message, this is because the filtering function returned the last two messages, but the trimming function removed the second-to-last message for having more than 100 tokens.

Let's see what we have at the output of the model

	
for m in output['messages']:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Certainly! To provide a more tailored response, it would be helpful to know what areas or topics you're interested in. However, here’s a general list of areas that are often considered valuable for personal and professional development:
1. **Technology & Digital Skills**:
- Programming languages (Python, JavaScript, etc.)
- Web development (HTML, CSS, React, etc.)
- Data analysis and visualization (SQL, Tableau, Power BI)
- Machine learning and AI
- Cloud computing (AWS, Azure, Google Cloud)
2. **Business & Entrepreneurship**:
- Marketing (digital marketing, SEO, content marketing)
- Project management
- Financial literacy
- Leadership and management
-Startup and venture capital
3. **Science & Engineering**:
- Biology and genetics
- Physics and materials science
- Environmental science and sustainability
- Robotics and automation
- Aerospace engineering
4. **Health & Wellness**:
- Nutrition and dietetics
- Mental health and psychology
- Exercise science
- Yoga and mindfulness
- Traditional and alternative medicine
5. **Arts & Humanities**:
- Creative writing and storytelling
- Music and sound production
- Visual arts and design (graphic design, photography)
- Philosophy and ethics
- History and cultural studies
6. **Communication & Languages**:
- Public speaking and presentation skills
- Conflict resolution and negotiation
- Learning a new language (Spanish, Mandarin, French, etc.)
- Writing and editing
7. **Personal Development**:
- Time management and productivity
- Mindfulness and stress management
- Goal setting and motivation
- Personal finance and budgeting
- Critical thinking and problem solving
8. **Social & Environmental Impact**:
- Social entrepreneurship
- Community organizing and activism
- Sustainable living practices
- Climate change and environmental policy
If you have a specific area of interest or a particular goal in mind, feel free to share, and I can provide more detailed recommendations!
================================== Ai Message ==================================

We have filtered the state so much that the LLM does not have enough context. Later, we will see a way to solve this by adding a summary of the conversation to the state.

Streaming Modeslink image 77

Synchronous streaminglink image 78

In this case, we will receive the complete result of the LLM once it has finished generating the text.

To explain synchronous streaming modes, let's first create a basic graph.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chat_model_node(state: State):
          # Return the LLM's response in the correct state format
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 13

Now we have two ways to obtain the result of the LLM, one is through the updates mode and the other through the values mode. updates vs values While updates gives us each new result, values gives us the entire history of results.

Updateslink image 79
	
for chunk in graph.stream({"messages": [HumanMessage(content="hi! I'm Máximo")]}, stream_mode="updates"):
print(chunk['chatbot_node']['messages'][-1].content)
Copy
	
Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
Valueslink image 80
	
for chunk in graph.stream({"messages": [HumanMessage(content="hi! I'm Máximo")]}, stream_mode="values"):
print(chunk['messages'][-1].content)
Copy
	
hi! I'm Máximo
Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
Asynchronous streaminglink image 81

Now we are going to receive the result of the LLM token by token. For this, we have to add streaming=True when creating the HuggingFace LLM and we have to change the chatbot node function to be asynchronous.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
          streaming=True,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      async def chat_model_node(state: State):
          async for token in llm.astream_log(state["messages"]):
              yield {"messages": [token]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 14

As can be seen, the function has been made asynchronous and turned into a generator since the yield returns a value and pauses the execution of the function until it is called again.

We are going to run the graph asynchronously and see the types of events that are generated.

try:
          async for event in graph.astream_events({"messages": [HumanMessage(content="hi! I'm Máximo")]}, version="v2"):
              print(f"event: {event}")
      except Exception as e:
          print(f"Error: {e}")
      
event: {'event': 'on_chain_start', 'data': {'input': {'messages': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={})]}}, 'name': 'LangGraph', 'tags': [], 'run_id': 'c9c40a00-157a-4229-a0d1-fda00e7bfd34', 'metadata': {}, 'parent_ids': []}
      event: {'event': 'on_chain_start', 'data': {'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      event: {'event': 'on_chain_start', 'data': {}, 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chat_model_start', 'data': {'input': {'input': [[HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]]}}, 'name': 'ChatHuggingFace', 'tags': [], 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chain_stream', 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch(}'op': 'replace',
        'path': '',
        'value': {'final_output': None,
                  'id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3',
                  'logs': {},
                  'name': 'ChatHuggingFace',
                  'streamed_output': [],
                  'type': 'llm'}})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Hello', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Má', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ximo', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='!', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' It', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content="'s", additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' nice', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' to', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' meet', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' How', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' can', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' I', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' assist', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' today', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='?', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Feel', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' free', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' to', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' ask', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' me', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' any', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' questions', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      
/Users/macm1/miniforge3/envs/langgraph/lib/python3.13/site-packages/huggingface_hub/inference/_generated/_async_client.py:2308: FutureWarning: `stop_sequences` is a deprecated argument for `text_generation` task and will be removed in version '0.28.0'. Use `stop` instead.
        warnings.warn(
      
event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' or', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' let', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' me', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' know', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' if', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' need', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' help', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' with', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' anything', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' specific', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='<|im_end|>', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_end', 'data': }'output': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0'), 'input': {'input': [[HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]]}}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chain_stream', 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch(}'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       }'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chain_end', 'data': }'output': {'input': [RunLogPatch(}'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       }'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}, 'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chain_stream', 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch(}'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       }'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      event: {'event': 'on_chain_end', 'data': }'output': {'input': [RunLogPatch(}'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       }'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}, 'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      Error: Unsupported message type: <class 'langchain_core.tracers.log_stream.RunLogPatch'>
      For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE 
      

As can be seen, the tokens arrive with the event on_chat_model_stream, so we are going to capture it and print it.

try:
          async for event in graph.astream_events({"messages": [HumanMessage(content="hi! I'm Máximo")]}, version="v2"):
              if event["event"] == "on_chat_model_stream":
                  print(event["data"]["chunk"].content, end=" | ", flush=True)
      except Exception as e:
          pass
      
/Users/macm1/miniforge3/envs/langgraph/lib/python3.13/site-packages/huggingface_hub/inference/_generated/_async_client.py:2308: FutureWarning: `stop_sequences` is a deprecated argument for `text_generation` task and will be removed in version '0.28.0'. Use `stop` instead.
        warnings.warn(
      
Hello |  Má | ximo | ! |  It | 's |  nice |  to |  meet |  you | . |  How |  can |  I |  assist |  you |  today | ? |  Feel |  free |  to |  ask |  me |  any |  questions |  or |  let |  me |  know |  if |  you |  need |  help |  with |  anything |  specific | . | <|im_end|> | 

Subgraphslink image 82

We have previously seen how to fork a graph so that nodes run in parallel, but let's consider the case where what we want to run in parallel are subgraphs. So, let's see how to do it.

Let's see how to create a log management graph that will have a subgraph for log summary and another subgraph for error analysis in the logs. graph with subgraphs So what we are going to do is first define each of the subgraphs separately and then add them to the main graph.

Subgraph for log error analysislink image 83

We import the necessary libraries

	
from IPython.display import Image, display
from langgraph.graph import StateGraph, START, END
from operator import add
from typing_extensions import TypedDict
from typing import List, Optional, Annotated
Copy

We create a class with the structure of the logs

	
# The structure of the logs
class Log(TypedDict):
id: str
question: str
docs: Optional[List]
answer: str
grade: Optional[int]
grader: Optional[str]
feedback: Optional[str]
Copy

We now create two classes, one with the structure of the log errors and another with the analysis that will be reported to the output.

	
# Failure Analysis Sub-graph
class FailureAnalysisState(TypedDict):
cleaned_logs: List[Log]
failures: List[Log]
fa_summary: str
processed_logs: List[str]
class FailureAnalysisOutputState(TypedDict):
fa_summary: str
processed_logs: List[str]
Copy

We now create the functions for the nodes, one will obtain the failures in the logs, for this it will search for logs that have any value in the grade field. Another will generate a summary of the failures. Additionally, we will add prints to be able to see what is happening internally.

	
def get_failures(state):
""" Get logs that contain a failure """
cleaned_logs = state["cleaned_logs"]
print(f" debug get_failures: cleaned_logs: {cleaned_logs}")
failures = [log for log in cleaned_logs if "grade" in log]
print(f" debug get_failures: failures: {failures}")
return {"failures": failures}
def generate_summary(state):
""" Generate summary of failures """
failures = state["failures"]
print(f" debug generate_summary: failures: {failures}")
fa_summary = "Poor quality retrieval of documentation."
print(f" debug generate_summary: fa_summary: {fa_summary}")
processed_logs = [f"failure-analysis-on-log-{failure['id']}" for failure in failures]
print(f" debug generate_summary: processed_logs: {processed_logs}")
return {"fa_summary": fa_summary, "processed_logs": processed_logs}
Copy

Finally, we create the graph, add the nodes and the edges and compile it.

fa_builder = StateGraph(FailureAnalysisState,output=FailureAnalysisOutputState)
      
      fa_builder.add_node("get_failures", get_failures)
      fa_builder.add_node("generate_summary", generate_summary)
      
      fa_builder.add_edge(START, "get_failures")
      fa_builder.add_edge("get_failures", "generate_summary")
      fa_builder.add_edge("generate_summary", END)
      
      graph = fa_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 15

Let's create a test log

	
failure_log = {opening_brace}
"id": "1",
"question": "What is the meaning of life?",
"docs": None,
"answer": "42",
"grade": 1,
"grader": "AI",
"feedback": "Good job!"
}
Copy

We run the graph with the test log. Since the function get_failures takes the key cleaned_logs from the state, we have to pass the log to the graph under that same key.

graph.invoke({"cleaned_logs": [failure_log]})
      
	 debug get_failures: cleaned_logs: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug get_failures: failures: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: failures: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: fa_summary: Poor quality retrieval of documentation.
      	 debug generate_summary: processed_logs: ['failure-analysis-on-log-1']
      
Out[16]:
{'fa_summary': 'Poor quality retrieval of documentation.',
       'processed_logs': ['failure-analysis-on-log-1']}

It can be seen that it has found the test log, as it has a value of 1 in the grade field and then generated a summary of the failures.

Let's define the entire subgraph together again so it looks clearer and also to remove the prints we added for debugging.

from IPython.display import Image, display
      from langgraph.graph import StateGraph, START, END
      
      from operator import add
      from typing_extensions import TypedDict
      from typing import List, Optional, Annotated
      
      # The structure of the logs
      class Log(TypedDict):
          id: str
          question: str
          docs: Optional[List]
          answer: str
          grade: Optional[int]
          grader: Optional[str]
          feedback: Optional[str]
      
      # Failure clases
      class FailureAnalysisState(TypedDict):
          cleaned_logs: List[Log]
          failures: List[Log]
          fa_summary: str
          processed_logs: List[str]
      
      class FailureAnalysisOutputState(TypedDict):
          fa_summary: str
          processed_logs: List[str]
      
      # Functions
      def get_failures(state):
          """ Get logs that contain a failure """
          cleaned_logs = state["cleaned_logs"]
          failures = [log for log in cleaned_logs if "grade" in log]
          return {"failures": failures}
      
      def generate_summary(state):
          """ Generate summary of failures """
          failures = state["failures"]
          fa_summary = "Poor quality retrieval of documentation."
          processed_logs = [f"failure-analysis-on-log-{failure['id']}" for failure in failures]
          return {"fa_summary": fa_summary, "processed_logs": processed_logs}
      
      # Build the graph
      fa_builder = StateGraph(FailureAnalysisState,output=FailureAnalysisOutputState)
      
      fa_builder.add_node("get_failures", get_failures)
      fa_builder.add_node("generate_summary", generate_summary)
      
      fa_builder.add_edge(START, "get_failures")
      fa_builder.add_edge("get_failures", "generate_summary")
      fa_builder.add_edge("generate_summary", END)
      
      graph = fa_builder.compile()
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 16

If we run it again now, we get the same result, but without the prints.

	
graph.invoke({"cleaned_logs": [failure_log]})
Copy
	
{'fa_summary': 'Poor quality retrieval of documentation.',
'processed_logs': ['failure-analysis-on-log-1']}
Subgraph of log summarylink image 84

Now we create the log summary subgraph. In this case, there's no need to recreate the class with the log structure, so we create the classes with the structure for the log summaries and the output structure.

	
# Summarization subgraph
class QuestionSummarizationState(TypedDict):
cleaned_logs: List[Log]
qs_summary: str
report: str
processed_logs: List[str]
class QuestionSummarizationOutputState(TypedDict):
report: str
processed_logs: List[str]
Copy

Now we define the functions of the nodes, one will generate the summary of the logs and another will "send the summary to Slack".

	
def generate_summary(state):
cleaned_logs = state["cleaned_logs"]
print(f" debug generate_summary: cleaned_logs: {cleaned_logs}")
summary = "Questions focused on ..."
print(f" debug generate_summary: summary: {summary}")
processed_logs = [f"summary-on-log-{log['id']}" for log in cleaned_logs]
print(f" debug generate_summary: processed_logs: {processed_logs}")
return {"qs_summary": summary, "processed_logs": processed_logs}
def send_to_slack(state):
qs_summary = state["qs_summary"]
print(f" debug send_to_slack: qs_summary: {qs_summary}")
report = "foo bar baz"
print(f" debug send_to_slack: report: {report}")
return {"report": report}
Copy

Finally, we create the graph, add the nodes and the edges and compile it.

# Build the graph
      qs_builder = StateGraph(QuestionSummarizationState,output=QuestionSummarizationOutputState)
      
      qs_builder.add_node("generate_summary", generate_summary)
      qs_builder.add_node("send_to_slack", send_to_slack)
      
      qs_builder.add_edge(START, "generate_summary")
      qs_builder.add_edge("generate_summary", "send_to_slack")
      qs_builder.add_edge("send_to_slack", END)
      
      graph = qs_builder.compile()
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 17

We try again with the log we created earlier.

graph.invoke({"cleaned_logs": [failure_log]})
      
	 debug generate_summary: cleaned_logs: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: summary: Questions focused on ...
      	 debug generate_summary: processed_logs: ['summary-on-log-1']
      	 debug send_to_slack: qs_summary: Questions focused on ...
      	 debug send_to_slack: report: foo bar baz
      
Out[25]:
{'report': 'foo bar baz', 'processed_logs': ['summary-on-log-1']}

We rewrite the subgraph, all together for greater clarity and without the prints.

# Summarization clases
      class QuestionSummarizationState(TypedDict):
          cleaned_logs: List[Log]
          qs_summary: str
          report: str
          processed_logs: List[str]
      
      class QuestionSummarizationOutputState(TypedDict):
          report: str
          processed_logs: List[str]
      
      # Functions
      def generate_summary(state):
          cleaned_logs = state["cleaned_logs"]
          summary = "Questions focused on ..."
          processed_logs = [f"summary-on-log-{log['id']}" for log in cleaned_logs]
          return {"qs_summary": summary, "processed_logs": processed_logs}
      
      def send_to_slack(state):
          qs_summary = state["qs_summary"]
          report = "foo bar baz"
          return {"report": report}
      
      # Build the graph
      qs_builder = StateGraph(QuestionSummarizationState,output=QuestionSummarizationOutputState)
      
      qs_builder.add_node("generate_summary", generate_summary)
      qs_builder.add_node("send_to_slack", send_to_slack)
      
      qs_builder.add_edge(START, "generate_summary")
      qs_builder.add_edge("generate_summary", "send_to_slack")
      qs_builder.add_edge("send_to_slack", END)
      
      graph = qs_builder.compile()
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 18

We run the graph again with the test log.

	
graph.invoke({"cleaned_logs": [failure_log]})
Copy
	
{'report': 'foo bar baz', 'processed_logs': ['summary-on-log-1']}
Main graphlink image 85

Now that we have the two subgraphs, we can create the main graph that will use them. To do this, we create the EntryGraphState class, which will hold the state of the two subgraphs.

	
# Entry Graph
class EntryGraphState(TypedDict):
raw_logs: List[Log]
cleaned_logs: List[Log]
fa_summary: str # This will only be generated in the FA sub-graph
report: str # This will only be generated in the QS sub-graph
processed_logs: Annotated[List[int], add] # This will be generated in BOTH sub-graphs
Copy

We create a log cleaning function, which will be a node that runs before the two subgraphs and provides them with clean logs through the key cleaned_logs, which is what the two subgraphs take from the state.

	
def clean_logs(state):
# Get logs
raw_logs = state["raw_logs"]
# Data cleaning raw_logs -> docs
cleaned_logs = raw_logs
return {opening_brace}"cleaned_logs": cleaned_logs{closing_brace}
Copy

Now we create the main graph

	
# Build the graph
entry_builder = StateGraph(EntryGraphState)
Copy

We add the nodes. To add a subgraph as a node, what we do is add its compilation.

	
# Add nodes
entry_builder.add_node("clean_logs", clean_logs)
entry_builder.add_node("question_summarization", qs_builder.compile())
entry_builder.add_node("failure_analysis", fa_builder.compile())
Copy
	
<langgraph.graph.state.StateGraph at 0x107985ef0>

From here on out, we add the edges and compile it.

	
# Add edges
entry_builder.add_edge(START, "clean_logs")
entry_builder.add_edge("clean_logs", "failure_analysis")
entry_builder.add_edge("clean_logs", "question_summarization")
entry_builder.add_edge("failure_analysis", END)
entry_builder.add_edge("question_summarization", END)
# Compile the graph
graph = entry_builder.compile()
Copy
	
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.

Finally, we display the graph. We add xray=1 to show the internal state of the graph.

# Setting xray to 1 will show the internal structure of the nested graph
      display(Image(graph.get_graph(xray=1).draw_mermaid_png()))
      
image uv 19

If we hadn't added xray=1, the graph would look like this

display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 20

Now we create two test logs, in one there will be an error (a value in grade) and in the other there won't be.

	
# Dummy logs
question_answer = Log(
id="1",
question="How can I import ChatOllama?",
answer="To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'",
)
question_answer_feedback = Log(
id="2",
question="How can I use Chroma vector store?",
answer="To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).",
grade=0,
grader="Document Relevance Recall",
feedback="The retrieved documents discuss vector stores in general, but not Chroma specifically",
)
raw_logs = [question_answer,question_answer_feedback]
Copy

We pass it to the main graph

	
graph.invoke({"raw_logs": raw_logs})
Copy
	
{'raw_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'cleaned_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'fa_summary': 'Poor quality retrieval of documentation.',
'report': 'foo bar baz',
'processed_logs': ['failure-analysis-on-log-2',
'summary-on-log-1',
'summary-on-log-2']}

Just like before, we write the entire graph to see it more clearly

# Entry Graph
      class EntryGraphState(TypedDict):
          raw_logs: List[Log]
          cleaned_logs: List[Log]
          fa_summary: str # This will only be generated in the FA sub-graph
          report: str # This will only be generated in the QS sub-graph
          processed_logs:  Annotated[List[int], add] # This will be generated in BOTH sub-graphs
      
      # Functions
      def clean_logs(state):
          # Get logs
          raw_logs = state["raw_logs"]
          # Data cleaning raw_logs -> docs 
          cleaned_logs = raw_logs
          return {"cleaned_logs": cleaned_logs}
      
      # Build the graph
      entry_builder = StateGraph(EntryGraphState)
      
      # Add nodes
      entry_builder.add_node("clean_logs", clean_logs)
      entry_builder.add_node("question_summarization", qs_builder.compile())
      entry_builder.add_node("failure_analysis", fa_builder.compile())
      
      # Add edges
      entry_builder.add_edge(START, "clean_logs")
      entry_builder.add_edge("clean_logs", "failure_analysis")
      entry_builder.add_edge("clean_logs", "question_summarization")
      entry_builder.add_edge("failure_analysis", END)
      entry_builder.add_edge("question_summarization", END)
      
      # Compile the graph
      graph = entry_builder.compile()
      
      # Setting xray to 1 will show the internal structure of the nested graph
      display(Image(graph.get_graph(xray=1).draw_mermaid_png()))
      
image uv 21

We pass the test logs to the main graph

	
graph.invoke({"raw_logs": raw_logs})
Copy
	
{'raw_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'cleaned_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'fa_summary': 'Poor quality retrieval of documentation.',
'report': 'foo bar baz',
'processed_logs': ['failure-analysis-on-log-2',
'summary-on-log-1',
'summary-on-log-2']}

Dynamic brancheslink image 86

So far we have created static nodes and edges, but there are times when we don't know if we will need a branch until the graph is executed. For this, we can use the SEND method of langgraph, which allows us to create branches dynamically.

To see it, we are going to create a graph that generates jokes about some topics, but since we don't know in advance how many topics we want to generate jokes for, using the SEND method, we will dynamically create branches so that if there are still topics left to generate, a new branch will be created.

Note: We will be doing this section using Sonnet 3.7, as the HuggingFace integration does not have the with_structured_output functionality that provides a structured output with a defined structure.

First we import the necessary libraries.

	
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import END, StateGraph, START
from langchain_anthropic import ChatAnthropic
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
from IPython.display import Image
Copy

We create the classes with the structure of the state.

	
class OverallState(TypedDict):
topic: str
subjects: list
jokes: Annotated[list, operator.add]
best_selected_joke: str
class JokeState(TypedDict):
subject: str
Copy

We create the LLM

	
# Create the LLM model
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
Copy

We create the function that will generate the themes.

We are going to use with_structured_output so that the LLM generates an output with a structure defined by us, and we will define that structure using the Subjects class, which is a BaseModel class from Pydantic.

	
from pydantic import BaseModel
class Subjects(BaseModel):
subjects: list[str]
subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""
def generate_topics(state: OverallState):
prompt = subjects_prompt.format(topic=state["topic"])
response = llm.with_structured_output(Subjects).invoke(prompt)
return {"subjects": response.subjects}
Copy

Now we define the function that will generate the jokes.

	
class Joke(BaseModel):
joke: str
joke_prompt = """Generate a joke about {subject}"""
def generate_joke(state: JokeState):
prompt = joke_prompt.format(subject=state["subject"])
response = llm.with_structured_output(Joke).invoke(prompt)
return {opening_brace}"jokes": [response.joke]}
Copy

And finally, the function that will select the best joke.

	
class BestJoke(BaseModel):
id: int
best_joke_prompt = """Below are a bunch of jokes about {topic}. Select the best one! Return the ID of the best one, starting 0 as the ID for the first joke. Jokes: \n\n {jokes}"""
def best_joke(state: OverallState):
jokes = "\n\n".join(state["jokes"])
prompt = best_joke_prompt.format(topic=state["topic"], jokes=jokes)
response = llm.with_structured_output(BestJoke).invoke(prompt)
return {opening_brace}"best_selected_joke": state["jokes"][response.id]}
Copy

Now we are going to create a function that decides whether to create a new branch with SEND or not, and to decide this it will check if there are any topics left to generate.

	
from langgraph.constants import Send
def continue_to_jokes(state: OverallState):
return [Send("generate_joke", {opening_brace}"subject": s}) for s in state["subjects"]]
Copy

We build the graph, add the nodes and the edges.

# Build the graph
      graph = StateGraph(OverallState)
      
      # Add nodes
      graph.add_node("generate_topics", generate_topics)
      graph.add_node("generate_joke", generate_joke)
      graph.add_node("best_joke", best_joke)
      
      # Add edges
      graph.add_edge(START, "generate_topics")
      graph.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
      graph.add_edge("generate_joke", "best_joke")
      graph.add_edge("best_joke", END)
      
      # Compile the graph
      app = graph.compile()
      
      # Display the graph
      Image(app.get_graph().draw_mermaid_png())
      
Out[8]:
image uv 22

As can be seen, the edge between generate_topics and generate_joke is represented with a dashed line, indicating that it is a dynamic branch.

We now create a dictionary with the key topic, which is needed by the generate_topics node to generate the topics, and pass it to the graph.

	
# Call the graph: here we call it to generate a list of jokes
for state in app.stream({"topic": "animals"}):
print(state)
Copy
	
{'generate_topics': {'subjects': ['Marine Animals', 'Endangered Species', 'Animal Behavior']}}
{'generate_joke': {'jokes': ["Why don't cats play poker in the wild? Too many cheetahs!"]}}
{'generate_joke': {'jokes': ["Why don't sharks eat clownfish? Because they taste funny!"]}}
{'generate_joke': {'jokes': ["Why don't endangered species tell jokes? Because they're afraid of dying out from laughter!"]}}
{'best_joke': {'best_selected_joke': "Why don't cats play poker in the wild? Too many cheetahs!"}}

We recreate the graph with all the code together for greater clarity.

import operator
      from typing import Annotated
      from typing_extensions import TypedDict
      from pydantic import BaseModel
      
      from langgraph.graph import END, StateGraph, START
      from langgraph.constants import Send
      
      from langchain_anthropic import ChatAnthropic
      
      import os
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      import dotenv
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      from IPython.display import Image
      
      # Prompts we will use
      subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""
      joke_prompt = """Generate a joke about {subject}"""
      best_joke_prompt = """Below are a bunch of jokes about {topic}. Select the best one! Return the ID of the best one, starting 0 as the ID for the first joke. Jokes: \n\n  {jokes}"""
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      
      class Subjects(BaseModel):
          subjects: list[str]
      
      class BestJoke(BaseModel):
          id: int
          
      class OverallState(TypedDict):
          topic: str
          subjects: list
          jokes: Annotated[list, operator.add]
          best_selected_joke: str
      
      class JokeState(TypedDict):
          subject: str
      
      class Joke(BaseModel):
          joke: str
      
      def generate_topics(state: OverallState):
          prompt = subjects_prompt.format(topic=state["topic"])
          response = llm.with_structured_output(Subjects).invoke(prompt)
          return {"subjects": response.subjects}
      
      def continue_to_jokes(state: OverallState):
          return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]
      
      def generate_joke(state: JokeState):
          prompt = joke_prompt.format(subject=state["subject"])
          response = llm.with_structured_output(Joke).invoke(prompt)
          return {"jokes": [response.joke]}
      
      def best_joke(state: OverallState):
          jokes = "\n\n".join(state["jokes"])
          prompt = best_joke_prompt.format(topic=state["topic"], jokes=jokes)
          response = llm.with_structured_output(BestJoke).invoke(prompt)
          return {"best_selected_joke": state["jokes"][response.id]}
      
      # Build the graph
      graph = StateGraph(OverallState)
      
      # Add nodes
      graph.add_node("generate_topics", generate_topics)
      graph.add_node("generate_joke", generate_joke)
      graph.add_node("best_joke", best_joke)
      
      # Add edges
      graph.add_edge(START, "generate_topics")
      graph.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
      graph.add_edge("generate_joke", "best_joke")
      graph.add_edge("best_joke", END)
      
      # Compile the graph
      app = graph.compile()
      
      # Display the graph
      Image(app.get_graph().draw_mermaid_png())
      
Out[1]:
image uv 23

We run it again, but this time, instead of animals, we will use cars

	
for state in app.stream({"topic": "cars"}):
print(state)
Copy
	
{'generate_topics': {'subjects': ['Car Maintenance and Repair', 'Electric and Hybrid Vehicles', 'Automotive Design and Engineering']}}
{'generate_joke': {'jokes': ["Why don't electric cars tell jokes? They're afraid of running out of charge before they get to the punchline!"]}}
{'generate_joke': {'jokes': ["Why don't automotive engineers play hide and seek? Because good luck hiding when you're always making a big noise about torque!"]}}
{'generate_joke': {'jokes': ["Why don't cars ever tell their own jokes? Because they always exhaust themselves during the delivery! Plus, their timing belts are always a little off."]}}
{'best_joke': {'best_selected_joke': "Why don't electric cars tell jokes? They're afraid of running out of charge before they get to the punchline!"}}

Improve the chatbot with toolslink image 87

To handle some queries, our chatbot cannot respond from its knowledge, so we are going to integrate a web search tool. Our bot can use this tool to find relevant information and provide better answers.

Requirementslink image 88

Before we start, we need to install the Tavily search engine, which is a web search tool that allows us to look up information on the web.

pip install -U tavily-python langchain_community```

After that, we need to create an API KEY, write it in our .env file, and load it into a variable.

	
import dotenv
import os
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

Chatbot with toolslink image 89

First we create the state and the LLM

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
import json
import os
from IPython.display import Image, display
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
class State(TypedDict):
messages: Annotated[list, add_messages]
# Create the LLM
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
Copy

Now, we define the web search tool using TavilySearchResults

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

We test the tool, we are going to do an internet search.

tool.invoke("What was the result of Real Madrid's at last match in the Champions League?")
      
Failed to multipart ingest runs: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
Out[3]:
[{'title': 'HIGHLIGHTS | Real Madrid 3-2 Leganés | LaLiga 2024/25 - YouTube',
        'url': 'https://www.youtube.com/watch?v=Np-Kwz4RDpY',
        'content': "20:14 · Go to channel · RONALDO'S LAST MATCH WITH REAL MADRID: THE MOST THRILLING FINAL EVER! ... Champions League 1/4 Final | PES. Football",
        'score': 0.65835214},
       {'title': 'Real Madrid | History | UEFA Champions League',
        'url': 'https://www.uefa.com/uefachampionsleague/history/clubs/50051--real-madrid/',
        'content': '1955/56 P W D L Final 7 5 0 2\nUEFA Champions League [...] 2010/11 P W D L Semi-finals 12 8 3 1\n2009/10 P W D L Round of 16 8 4 2 2\n2000s\n2008/09 P W D L Round of 16 8 4 0 4\n2007/08 P W D L Round of 16 8 3 2 3\n2006/07 P W D L Round of 16 8 4 2 2\n2005/06 P W D L Round of 16 8 3 2 3\n2004/05 P W D L Round of 16 10 6 2 2\n2003/04 P W D L Quarter-finals 10 6 3 1\n2002/03 P W D L Semi-finals 16 7 5 4\n2001/02 P W D L Final 17 12 3 2\n2000/01 P W D L Semi-finals 16 9 2 5\n1990s\n1999/00 P W D L Final 17 10 3 4\n1998/99 P W D L Quarter-finals 8 4 1 3 [...] 1969/70 P W D L Second round 4 2 0 2\n1968/69 P W D L Second round 4 3 0 1\n1967/68 P W D L Semi-finals 8 2 4 2\n1966/67 P W D L Quarter-finals 4 1 0 3\n1965/66 P W D L Final 9 5 2 2\n1964/65 P W D L Quarter-finals 6 4 1 1\n1963/64 P W D L Final 9 7 0 2\n1962/63 P W D L Preliminary round 2 0 1 1\n1961/62 P W D L Final 10 8 0 2\n1960/61 P W D L First round 2 0 1 1\n1950s\n1959/60 P W D L Final 7 6 0 1\n1958/59 P W D L Final 8 5 2 1\n1957/58 P W D L Final 7 5 1 1\n1956/57 P W D L Final 8 6 1 1',
        'score': 0.6030211}]

The results are summaries of pages that our chatbot can use to answer questions.

We create a list of tools because our graph needs to define the tools through a list.

	
tools_list = [tool]
Copy

Now that we have the list of tools we create a llm_with_tools

	
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
Copy

We define the function that will go in the chat bot node

	
# Define the chatbot function
def chatbot_function(state: State):
return {opening_brace}"messages": [llm_with_tools.invoke(state["messages"])]}
Copy

We need to create a function to execute the tools_list if they are called. We add the tools_list to a new node.

Later we will do this with the ToolNode method of LangGraph, but first we will build it ourselves to understand how it works.

We are going to implement the BasicToolNode class, which checks the most recent message in the state and calls the tools_list if the message contains tool_calls. It is based on the support for tool_calling in LLMs, which is available in Anthropic, HuggingFace, Google Gemini, OpenAI, and several other LLM providers.

	
from langchain_core.messages import ToolMessage
class BasicToolNode:
"""A node that runs the tools requested in the last AIMessage."""
def __init__(self, tools: list) -> None:
"""
Initialize the tools
Args:
tools (list): The tools to use
Returns:
None
"""
# Initialize the tools
self.tools_by_name = {opening_brace}tool.name: tool for tool in tools{closing_brace}
def __call__(self, inputs: dict):
"""
Call the node
Args:
inputs (dict): The inputs to the node
Returns:
dict: The outputs of the node
"""
# Get the last message
if messages := inputs.get("messages", []):
message = messages[-1]
else:
raise ValueError("No message found in input")
# Execute the tools
outputs = []
for tool_call in message.tool_calls:
tool_result = self.tools_by_name[tool_call["name"]].invoke(
tool_call["args"]
)
outputs.append(
ToolMessage(
content=json.dumps(tool_result),
name=tool_call["name"],
tool_call_id=tool_call["id"],
)
)
return {opening_brace}"messages": outputs{closing_brace}
basic_tool_node = BasicToolNode(tools=tools_list)
Copy

We have used ToolMessage that passes the result of running a tool back to the LLM. ToolMessage contains the result of an invocation of a tool. That is, as soon as we have the result of using a Tool, we pass it to the LLM for processing.

With the basic_tool_node object (which is an object of the class BasicToolNode that we have created), we can now make the LLM execute tools

Now, just like we did when building a basic chatbot, we are going to create the graph and add nodes to it.

	
# Create graph
graph_builder = StateGraph(State)
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools_node", basic_tool_node)
Copy
	
<langgraph.graph.state.StateGraph at 0x14996cd70>

When the LLM receives a message, since it knows the tools available to it, it will decide whether to respond or use a tool. So, we are going to create a routing function that will execute a tool if the LLM decides to use it, or otherwise terminate the graph execution.

	
def route_tools_function(
state: State,
):
"""
Use in the conditional_edge to route to the ToolNode if the last message
has tool calls. Otherwise, route to the end.
"""
# Get last message
if isinstance(state, list):
ai_message = state[-1]
elif messages := state.get("messages", []):
ai_message = messages[-1]
else:
raise ValueError(f"No messages found in input state to tool_edge: {state}")
# Router in function of last message
if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
return "tools_node"
return END
Copy

We add the edges.

We need to add a special edge using add_conditional_edges, which will create a conditional node. Connect the chatbot_node with the routing function we created earlier, route_tools_function. With this node, if we get the string tools_node as output from route_tools_function, it will route the graph to the tools_node, but if we receive END, it will route the graph to the END node and terminate the execution of the graph.

Later, we will replace this with the built-in method tools_condition, but for now we implement it ourselves to see how it works.

Finally, another edge is added that connects tools_node with chatbot_node, so that when a tool finishes executing, the graph returns to the LLM node.

	
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges(
"chatbot_node",
route_tools_function,
# The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
# It defaults to the identity function, but if you
# want to use a node named something else apart from "tools",
# You can update the value of the dictionary to something else
# e.g., "tools": "my_tools"
{opening_brace}"tools_node": "tools_node", END: END},
)
graph_builder.add_edge("tools_node", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x14996cd70>

We compile the node and represent it

graph = graph_builder.compile()
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 24

Now we can ask the bot questions outside of its training data

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str):
for event in graph.stream({opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace}):
for value in event.values():
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}value['messages'][-1].content{closing_brace}")
while True:
try:
user_input = input("User: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
stream_graph_updates(user_input)
except:
# fallback if input() is not available
user_input = "What do you know about LangGraph?"
print("User: " + user_input)
stream_graph_updates(user_input)
break
Copy
	
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", "score": 0.8548001}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information", "score": 0.82220376}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid vs Leganes 3-2 | Highlights & All Goals - YouTube", "url": "https://www.youtube.com/watch?v=ngBWsjmeHEk", "content": "Real Madrid secured a dramatic 3-2 victory over Leganes in an intense La Liga showdown on 29 March 2025! ⚽ Watch all the goals and", "score": 0.5157425}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": ""We know what we always have to do: win. We started well, in the opposition half, and we scored a goal. Then we didn't play well for 20 minutes and conceded two goals," said Mbappé. "But we know that if we play well we'll score and in the second half we scored two goals. We won the game and we're very happy. "We worked on [the set piece] a few weeks ago with the staff. I knew I could shoot this way, I saw the space. I asked the others to let me shoot and it worked out well." [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information [...] However, Leganes responded almost immediately as Diego Garcia tapped in a loose ball at the far post to equalise in the following minute before Rodriguez set up Dani Raba to slot past goalkeeper Andriy Lunin in the 41st. Real midfielder Jude Bellingham brought the scores level two minutes after the break, sliding the ball into the net after a rebound off the crossbar. Mbappé then bagged the winner with a brilliant curled free kick in the 76th minute for his second.", "score": 0.50944775}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information [...] However, Leganes responded almost immediately as Diego Garcia tapped in a loose ball at the far post to equalise in the following minute before Rodriguez set up Dani Raba to slot past goalkeeper Andriy Lunin in the 41st. Real midfielder Jude Bellingham brought the scores level two minutes after the break, sliding the ball into the net after a rebound off the crossbar. Mbappé then bagged the winner with a brilliant curled free kick in the 76th minute for his second.", "score": 0.93666285}, {opening_brace}"title": "MBAPPE BRACE Leganes vs. Real Madrid - ESPN FC - YouTube", "url": "https://www.youtube.com/watch?v=0xwUhzx19_4", "content": "MBAPPE BRACE 🔥 Leganes vs. Real Madrid | LALIGA Highlights | ESPN FC ESPN FC 6836 likes 550646 views 29 Mar 2025 Watch these highlights as Kylian Mbappe scores 2 goals to give Real Madrid the 3-2 victory over Leganes in their LALIGA matchup. ✔ Subscribe to ESPN+: http://espnplus.com/soccer/youtube ✔ Subscribe to ESPN FC on YouTube: http://bit.ly/SUBSCRIBEtoESPNFC 790 comments", "score": 0.92857105}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "(VIDEO) All Goals from Real Madrid vs Leganes in La Liga", "url": "https://www.beinsports.com/en-us/soccer/la-liga/articles-video/-video-all-goals-from-real-madrid-vs-leganes-in-la-liga-2025-03-29?ess=", "content": "Real Madrid will host CD Leganes this Saturday, March 29, 2025, at the Santiago Bernabéu in a Matchday 29 clash of LaLiga EA Sports.", "score": 0.95628047}, {opening_brace}"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.9522955}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: Real Madrid faced Leganes in La Liga this weekend and came away with a 3-2 victory at the Santiago Bernabéu. The match was intense, with Kylian Mbappé scoring twice for Real Madrid, including a curled free kick in the 76th minute that proved to be the winner. Leganes managed to take the lead briefly with goals from Diego García and Dani Raba, but Real Madrid leveled through Jude Bellingham before Mbappé's second goal secured the win. This result keeps Real Madrid's title hopes alive, moving them level on points with leaders Barcelona.
User: Which players played the match?
Assistant: The question is too vague and doesn't provide context such as the sport, league, or specific match in question. Could you please provide more details?
User: q
Assistant: Goodbye!

As you can see, I first asked him how the Real Madrid did in their last La Liga match against Leganés.As it is something current, he has decided to use the search tool, which has yielded the result.

However, when I asked him which players played, he didn't know what I was talking about, that's because the conversation context is not being maintained. So the next thing we are going to do is add a memory to the agent so it can keep track of the conversation context.

Let's write everything together so it's more readable

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from IPython.display import Image, display
import json
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
# State
class State(TypedDict):
messages: Annotated[list, add_messages]
# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]
# Create the LLM model
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)
# BasicToolNode class
class BasicToolNode:
"""A node that runs the tools requested in the last AIMessage."""
def __init__(self, tools: list) -> None:
"""
Initialize the tools
Args:
tools (list): The tools to use
Returns:
None
"""
# Initialize the tools
self.tools_by_name = {opening_brace}tool.name: tool for tool in tools{closing_brace}
def __call__(self, inputs: dict):
"""
Call the node
Args:
inputs (dict): The inputs to the node
Returns:
dict: The outputs of the node
"""
# Get the last message
if messages := inputs.get("messages", []):
message = messages[-1]
else:
raise ValueError("No message found in input")
# Execute the tools
outputs = []
for tool_call in message.tool_calls:
tool_result = self.tools_by_name[tool_call["name"]].invoke(
tool_call["args"]
)
outputs.append(
ToolMessage(
content=json.dumps(tool_result),
name=tool_call["name"],
tool_call_id=tool_call["id"],
)
)
return {opening_brace}"messages": outputs{closing_brace}
basic_tool_node = BasicToolNode(tools=tools_list)
# Functions
def chatbot_function(state: State):
return {opening_brace}"messages": [llm_with_tools.invoke(state["messages"])]}
# Route function
def route_tools_function(state: State):
"""
Use in the conditional_edge to route to the ToolNode if the last message
has tool calls. Otherwise, route to the end.
"""
# Get last message
if isinstance(state, list):
ai_message = state[-1]
elif messages := state.get("messages", []):
ai_message = messages[-1]
else:
raise ValueError(f"No messages found in input state to tool_edge: {state}")
# Router in function of last message
if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
return "tools_node"
return END
# Start to build the graph
graph_builder = StateGraph(State)
# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools_node", basic_tool_node)
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges(
"chatbot_node",
route_tools_function,
{opening_brace}
"tools_node": "tools_node",
END: END
},
)
graph_builder.add_edge("tools_node", "chatbot_node")
# Compile the graph
graph = graph_builder.compile()
# Display the graph
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
print(f"Error al visualizar el grafo: {e}")
Copy
	
Error al visualizar el grafo: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

We run the graph

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str):
for event in graph.stream({opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace}):
for value in event.values():
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}value['messages'][-1].content{closing_brace}")
while True:
try:
user_input = input("User: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
stream_graph_updates(user_input)
except:
# fallback if input() is not available
user_input = "What do you know about LangGraph?"
print("User: " + user_input)
stream_graph_updates(user_input)
break
Copy
	
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganes: Mbappe, Bellingham inspire comeback to ...", "url": "https://www.nbcsports.com/soccer/news/how-to-watch-real-madrid-vs-leganes-live-stream-link-tv-team-news-prediction", "content": "Real Madrid fought back to beat struggling Leganes 3-2 at the Santiago Bernabeu on Saturday as Kylian Mbappe scored twice and Jude", "score": 0.78749067}, {opening_brace}"title": "Real Madrid vs Leganes 3-2: LaLiga – as it happened - Al Jazeera", "url": "https://www.aljazeera.com/sports/liveblog/2025/3/29/live-real-madrid-vs-leganes-laliga", "content": "Defending champions Real Madrid beat 3-2 Leganes in Spain's LaLiga. The match at Santiago Bernabeu in Madrid, Spain saw Real trail 2-1 at half-", "score": 0.7485182}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid vs Leganés: Spanish La Liga stats & head-to-head - BBC", "url": "https://www.bbc.com/sport/football/live/cm2ndndvdgmt", "content": "Mbappe scores winner as Real Madrid survive Leganes scare Match Summary Sat 29 Mar 2025 ‧ Spanish La Liga Real Madrid 3 , Leganés 2 at Full time Real MadridReal MadridReal Madrid 3 2 LeganésLeganésLeganés Full time FT Half Time Real Madrid 1 , Leganés 2 HT 1-2 Key Events Real Madrid K. Mbappé (32' pen, 76')Penalty 32 minutes, Goal 76 minutes J. Bellingham (47')Goal 47 minutes Leganés Diego García (34')Goal 34 minutes Dani Raba (41')Goal 41 minutes [...] Good nightpublished at 22:14 Greenwich Mean Time 29 March 22:14 GMT 29 March Thanks for joining us, that was a great game. See you again soon for more La Liga action. 13 2 Share close panel Share page Copy link About sharing Postpublished at 22:10 Greenwich Mean Time 29 March 22:10 GMT 29 March FT: Real Madrid 3-2 Leganes [...] Postpublished at 22:02 Greenwich Mean Time 29 March 22:02 GMT 29 March FT: Real Madrid 3-2 Leganes Over to you, Barcelona. Hansi Flick's side face Girona tomorrow (15:15 BST) and have the chance to regain their three point lead if they are victorious. 18 6 Share close panel Share page Copy link About sharing", "score": 0.86413884}, {opening_brace}"title": "Real Madrid 3 - 2 CD Leganés (03/29) - Game Report - 365Scores", "url": "https://www.365scores.com/en-us/football/match/laliga-11/cd-leganes-real-madrid-131-9242-11", "content": "The game between Real Madrid and CD Leganés ended with a score of Real Madrid 3 - 2 CD Leganés. On 365Scores, you can check all the head-to-head results between", "score": 0.8524574}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] | Rayo Vallecano | 35 | 12 | 11 | 12 | -5 | 47 | | Mallorca | 35 | 13 | 8 | 14 | -7 | 47 | | Valencia | 35 | 11 | 12 | 12 | -8 | 45 | | Osasuna | 35 | 10 | 15 | 10 | -8 | 45 | | Real Sociedad | 35 | 12 | 7 | 16 | -9 | 43 | | Getafe | 35 | 10 | 9 | 16 | -3 | 39 | | Espanyol | 35 | 10 | 9 | 16 | -9 | 39 | | Girona | 35 | 10 | 8 | 17 | -12 | 38 | | Sevilla | 35 | 9 | 11 | 15 | -10 | 38 | | Alavés | 35 | 8 | 11 | 16 | -12 | 35 | | Leganés | 35 | 7 | 13 | 15 | -18 | 34 |", "score": 0.93497354}, {opening_brace}"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.921929}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] Mbappé nets twice to maintain Madrid title hopes ------------------------------------------------ Kylian Mbappé struck twice to guide Real Madrid to a 3-2 home win over relegation-threatened Leganes on Saturday. Mar 29, 2025, 10:53 pm - Reuters Match Timeline Real Madrid Leganés KO 32 34 41 HT 47 62 62 62 65 66 72 74 76 81 83 86 89 FT", "score": 0.96213967}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] -550 o3.5 +105 -1.5 -165 LEGLeganésLeganés (6-9-14) (6-9-14, 27 pts) u3.5 -120 +950 u3.5 -135", "score": 0.9635647}, {opening_brace}"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.95921934}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN Real Madrid -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Freedom from Property StressJohn buys bay area houses | [Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:)[Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:) Get Offer Brand-New 2-Bedroom Senior Apartment in Mountain View: You Won't Believe the Price2-Bedroom Senior Apartment | [Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:)[Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:) Read More | Real Madrid | 35 | 23 | 6 | 6 | +35 | 75 | Real Madrid woes continue as Vinícius Júnior injury confirmed ------------------------------------------------------------- Injuries to Vinícius Júnior and Lucas Vázquez added to Real Madrid's problems on Monday. To learn more, visit "Do Not Sell or Share My Personal Information" and "Targeted Advertising" Opt-Out Rights.", "score": 0.98565}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened", "score": 0.98277}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3 - 2 CD Leganés (03/29) - Game Report - 365Scores", "url": "https://www.365scores.com/en-us/football/match/laliga-11/cd-leganes-real-madrid-131-9242-11", "content": "The game between Real Madrid and CD Leganés in the Regular Season of LaLiga, held on Saturday, March 29, 2025 at Estadio Santiago Bernabéu, ended with a score", "score": 0.96686727}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] -550 o3.5 +105 -1.5 -165 LEGLeganésLeganés (6-9-14) (6-9-14, 27 pts) u3.5 -120 +950 u3.5 -135 [...] Referees: Pablo González Fuertes", "score": 0.9595845}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: Real Madrid faced CD Leganés in a La Liga match on Saturday, March 29, 2025, at the Estadio Santiago Bernabéu. The match was a thrilling encounter, with Real Madrid coming from behind to secure a 3-2 victory.
Key points from the match include:
- **Scoreline**: Real Madrid 3, Leganés 2.
- **Goals**:
- **Real Madrid**: Kylian Mbappé scored twice, including a penalty, and Jude Bellingham also found the net.
- **Leganés**: Goals were scored by Diego García and Dani Raba.
- **Attendance**: The match was played in front of 73,641 spectators.
- **Key Moments**:
- Real Madrid trailed 2-1 at half-time but mounted a comeback in the second half.
- Mbappé's penalty in the 32nd minute and his second goal in the 76th minute were crucial in turning the game around.
- Bellingham's goal in the 47th minute shortly after the break tied the game.
This victory is significant for Real Madrid as they continue their push for the La Liga title, while Leganés remains in a difficult position, fighting against relegation.
User: Which players played the match?
Assistant: I'm sorry, but I need more information to answer your question. Could you please specify which match you're referring to, including the sport, the teams, or any other relevant details? This will help me provide you with the correct information.
User: q
Assistant: Goodbye!

We see again that the problem is that it does not remember the context of the conversation.

Add memory to the chatbot - short-term memory, memory within the threadlink image 90

Our chatbot can now use tools to answer users' questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent and multi-turn conversations.

LangGraph solves this problem through persistent checkpoints or checkpoints. If we provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each iteration in the conversation. When we invoke the graph again using the same thread_id, the graph will load its saved state, allowing the chatbot to continue where it left off.

We will see later that this checkpointing is much more powerful than simple chat memory: it allows saving and resuming complex states at any time for error recovery, workflows with human in the loop, interactions over time, and more. But before we get to all of that, let's add checkpoints to enable multi-iteration conversations.

	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

To start, we create a checkpointer MemorySaver.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

Notice>> We are using an in-memory checkpointer, meaning it is stored in RAM and when the graph execution finishes, it is deleted. This works for our case, as it is an example to learn how to use LangGraph. In a production application, it is likely that this would need to be changed to use SqliteSaver or PostgresSaver and connect to our own database.

Below, we define the graph.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Copy

We define the tool

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]
Copy

Next, the LLM with the bind_tools and we add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
return {opening_brace}"messages": [llm_with_tools.invoke(state["messages"])]}
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

Previously, we built our own BasicToolNode to learn how it works. Now, we will replace it with the LangGraph ToolNode and tools_condition methods, as these do some good things like parallel API execution. Apart from that, the rest is the same as before.

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the tools_condition node to the graph

	
graph_builder.add_conditional_edges(
"chatbot_node",
tools_condition,
)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the tools node to the graph

	
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the START node to the graph

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We compile the graph by adding the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We represent it graphically

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 25

We create a configuration with a thread_id of a user

	
USER1_THREAD_ID = "1"
config_USER1 = {opening_brace}"configurable": {opening_brace}"thread_id": USER1_THREAD_ID{closing_brace}{closing_brace}
Copy
	
user_input = "Hi there! My name is Maximo."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER1,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi there! My name is Maximo.
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: does not reside in any location,{closing_brace}{closing_brace},
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Determining an individual's tax residency status - IRS", "url": "https://www.irs.gov/individuals/international-taxpayers/determining-an-individuals-tax-residency-status", "content": "If you are not a U.S. citizen, you are considered a nonresident of the United States for U.S. tax purposes unless you meet one of two tests.", "score": 0.1508904}, {opening_brace}"title": "Fix "Location Is Not Available", C:\WINDOWS\system32 ... - YouTube", "url": "https://www.youtube.com/watch?v=QFD-Ptp0SJw", "content": "Fix Error "Location is not available" C:\WINDOWS\system32\config\systemprofile\Desktop is unavailable. If the location is on this PC,", "score": 0.07777658}]
================================== Ai Message ==================================
Invalid Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
{"query": "Arguments["image={"}
	
user_input = "Do you remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER1,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Of course! You mentioned your name is Maximo.

As can be seen, we haven't passed a list with the messages; everything is being managed by the checkpointer.

If we now try with another user, that is, with another thread_id, we will see that the graph does not remember the previous conversation.

	
USER2_THREAD_ID = "2"
config_USER2 = {opening_brace}"configurable": {opening_brace}"thread_id": USER2_THREAD_ID{closing_brace}{closing_brace}
user_input = "Do you remember my name?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER2,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Do you Remember My Name
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name ---", "score": 0.6609831}, {opening_brace}"title": "Do You Remember My Name? - Novel Updates", "url": "https://www.novelupdates.com/series/do-you-remember-my-name/", "content": "This is a Cute, Tender, and Heartwarming High School Romance. It's not Heavy. It's not so Emotional too, but it does have Emotional moments. It's story Full of", "score": 0.608897}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: do you remember my name
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name --- [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way", "score": 0.7123327}, {opening_brace}"title": "Do you remember my name? - song and lyrics by Alea, Mama Marjas", "url": "https://open.spotify.com/track/3GVBn3rEQLxZl4zJ4dG8UJ", "content": "Listen to Do you remember my name? on Spotify. Song · Alea, Mama Marjas · 2023.", "score": 0.6506676}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: do you remember my name
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name --- [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way", "score": 0.7123327}, {opening_brace}"title": "Do you remember my name? - song and lyrics by Alea, Mama Marjas", "url": "https://open.spotify.com/track/3GVBn3rEQLxZl4zJ4dG8UJ", "content": "Listen to Do you remember my name? on Spotify. Song · Alea, Mama Marjas · 2023.", "score": 0.6506676}]
================================== Ai Message ==================================
I'm here to assist you, but I don't actually have the ability to remember names or personal information from previous conversations. How can I assist you today?

Now that our chatbot has search and memory tools, we are going to repeat the previous example, where I ask it about the result of Real Madrid's last match in the League and then which players played.

	
USER3_THREAD_ID = "3"
config_USER3 = {opening_brace}"configurable": {opening_brace}"thread_id": USER3_THREAD_ID{closing_brace}{closing_brace}
user_input = "How did Real Madrid fare this weekend against Leganes in La Liga?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER3,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
How did Real Madrid fare this weekend against Leganes in La Liga?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes La Liga this weekend
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", "score": 0.8548001}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information", "score": 0.82220376}]
================================== Ai Message ==================================
Real Madrid secured a 3-2 victory against Leganes this weekend in their La Liga match. Kylian Mbappé scored twice, including a penalty, to help his team come from behind and claim the win, keeping Real Madrid's title hopes alive. Leganes, now sitting 18th in the table, continues to face challenges in their fight against relegation.

Now we ask for the players who played in the match.

	
user_input = "Which players played the match?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER3,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Which players played the match?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes match report players lineup
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Real Madrid vs. Leganes final score: La Liga result, updates, stats ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d", "content": "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", "score": 0.88372874}, {opening_brace}"title": "CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream", "content": "Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting", "score": 0.83452857}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes players 2025
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga", "content": "Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively. Fran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses. Eduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online: Site search Managing Madrid main menu Filed under: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Kylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory. Share this story Share All sharing options for: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Full match player ratings below: Andriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”. Antonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.", "score": 0.8832463}, {opening_brace}"title": "Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946", "content": "Match Formations · 13. Lunin · 20. García · 22. Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.", "score": 0.86413884}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes starting lineup
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Starting lineups of Real Madrid and Leganés", "url": "https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025", "content": "Starting lineups of Real Madrid and Leganés The Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé. Real Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up: 13. Lunin 17. Lucas V. 35. Asencio 22. Rüdiger 20. Fran García 15. Arda Güler 10. Modrić 6. Camavinga 5. Bellingham 21. Brahim 9. Mbappé. Substitutes: 26. Fran González 34. Sergio Mestre 4. Alaba 7. Vini Jr. 8. Valverde 11. Rodrygo 14. Tchouameni 16. Endrick 18. Vallejo 43. Diego Aguado. Leganés starting line-up: 13. Dmitrovic 5. Tapia 6. Sergio G. 7. Óscar 10. Raba 11. Cruz 12. V. Rosier 17. Neyou 19. Diego G. 20. Javi Hernández 22. Nastasic. [...] Suplentes: 1. Juan Soriano 36. Abajas 2. A. Alti 3. Jorge Sáenz 8. Cisse 9. Miguel 14. Darko 18. Duk 21. R. López 23. Munir 24. Chicco 30. I. Diomande. Download Now Official App Fan Real Madrid © 2025 All rights reserved", "score": 0.9465623}, {opening_brace}"title": "Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d", "content": "Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,", "score": 0.9224337}]
================================== Ai Message ==================================
The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.

After much searching, he finally finds it. So now we have a chatbot with tools and memory.

So far, we have created some checkpoints in three different threads. But, what goes into each checkpoint? To inspect the state of a graph for a given configuration, we can use the method get_state(config).

	
snapshot = graph.get_state(config_USER3)
snapshot
Copy
	
StateSnapshot(values={opening_brace}'messages': [HumanMessage(content='How did Real Madrid fare this weekend against Leganes in La Liga?', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}{closing_brace}, id='a33f5825-1ae4-4717-ad17-8e306f35b027'), AIMessage(content='', additional_kwargs={opening_brace}'tool_calls': [{opening_brace}'function': {'arguments': {opening_brace}'query': 'Real Madrid vs Leganes La Liga this weekend'{closing_brace}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'{closing_brace}]}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 25, 'prompt_tokens': 296, 'total_tokens': 321}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-7905b5ae-5dee-4641-b012-396affde984c-0', tool_calls=[{opening_brace}'name': 'tavily_search_results_json', 'args': {opening_brace}'query': 'Real Madrid vs Leganes La Liga this weekend'{closing_brace}, 'id': '0', 'type': 'tool_call'{closing_brace}]), ToolMessage(content='[{opening_brace}"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches.\n\nAs for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2.\n\nReal Madrid vs Leganes LIVE - Latest Updates\n\nMatch ends, Real Madrid 3, Leganes 2.\n\nSecond Half ends, Real Madrid 3, Leganes 2.\n\nFoul by Vinícius Júnior (Real Madrid).\n\nSeydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range.\n\nAttempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box.\n\nGoal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot.\n\nPenalty Real Madrid. Arda Güler draws a foul in the penalty area.\n\nPenalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area.\n\nDelay over. They are ready to continue.", "score": 0.8548001}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid\n\nLeganés\n\nMbappé nets twice to keep Real Madrid's title hopes alive\n\nReal Madrid vs. Leganés - Game Highlights\n\nWatch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025\n\nReal Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference.\n\n\"We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here,\" Leganes striker Garcia said.\n\n\"Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week.\"\n\nGame Information", "score": 0.82220376}]', name='tavily_search_results_json', id='0e02fce3-a6f0-4cce-9217-04c8c3219265', tool_call_id='0', artifact={opening_brace}'query': 'Real Madrid vs Leganes La Liga this weekend', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html', 'title': 'Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca', 'content': "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", 'score': 0.8548001, 'raw_content': None}, {'url': 'https://www.espn.com/soccer/report/_/gameId/704946', 'title': 'Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN', 'content': 'Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information', 'score': 0.82220376, 'raw_content': None}], 'response_time': 1.47}), AIMessage(content="Real Madrid secured a 3-2 victory against Leganes this weekend in their La Liga match. Kylian Mbappé scored twice, including a penalty, to help his team come from behind and claim the win, keeping Real Madrid's title hopes alive. Leganes, now sitting 18th in the table, continues to face challenges in their fight against relegation.", additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 92, 'prompt_tokens': 1086, 'total_tokens': 1178}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-22226dda-0475-49b7-882f-fe7bd63ef025-0'), HumanMessage(content='Which players played the match?', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}{closing_brace}, id='3e6d9f84-06a2-4148-8f2b-d8ef42c3bea1'), AIMessage(content='', additional_kwargs={opening_brace}'tool_calls': [{opening_brace}'function': {'arguments': {opening_brace}'query': 'Real Madrid vs Leganes match report players lineup'{closing_brace}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'{closing_brace}]}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 29, 'prompt_tokens': 1178, 'total_tokens': 1207}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-025d3235-61b9-4add-8e1b-5b1bc795a9d3-0', tool_calls=[{opening_brace}'name': 'tavily_search_results_json', 'args': {opening_brace}'query': 'Real Madrid vs Leganes match report players lineup'{closing_brace}, 'id': '0', 'type': 'tool_call'{closing_brace}]), ToolMessage(content='[{opening_brace}"title": "Real Madrid vs. Leganes final score: La Liga result, updates, stats ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d", "content": "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", "score": 0.88372874}, {opening_brace}"title": "CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream", "content": "Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting", "score": 0.83452857}]', name='tavily_search_results_json', id='2dbc1324-2c20-406a-b2d7-a3d6fc609537', tool_call_id='0', artifact={opening_brace}'query': 'Real Madrid vs Leganes match report players lineup', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d', 'title': 'Real Madrid vs. Leganes final score: La Liga result, updates, stats ...', 'content': "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", 'score': 0.88372874, 'raw_content': None}, {'url': 'https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream', 'title': 'CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga', 'content': 'Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting', 'score': 0.83452857, 'raw_content': None}], 'response_time': 3.36}), AIMessage(content='', additional_kwargs={opening_brace}'tool_calls': [{opening_brace}'function': {'arguments': {opening_brace}'query': 'Real Madrid vs Leganes players 2025'{closing_brace}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'{closing_brace}]}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 31, 'prompt_tokens': 1630, 'total_tokens': 1661}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-d6b4c4ff-0923-4082-9dea-7c51b2a4fc60-0', tool_calls=[{opening_brace}'name': 'tavily_search_results_json', 'args': {opening_brace}'query': 'Real Madrid vs Leganes players 2025'{closing_brace}, 'id': '0', 'type': 'tool_call'{closing_brace}]), ToolMessage(content='[{opening_brace}"title": "Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga", "content": "Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively.\n\nFran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses.\n\nEduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online:\n\nSite search\n\nManaging Madrid main menu\n\nFiled under:\n\nPlayer Ratings: Real Madrid 3-2 Leganes; 2025 La Liga\n\nKylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory.\n\nShare this story\n\nShare\nAll sharing options for:\nPlayer Ratings: Real Madrid 3-2 Leganes; 2025 La Liga\n\nFull match player ratings below:\n\nAndriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”.\n\nAntonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.", "score": 0.8832463}, {opening_brace}"title": "Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946", "content": "Match Formations · 13. Lunin · 20. García · 22. Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.", "score": 0.86413884}]', name='tavily_search_results_json', id='ac15dd6e-09b1-4075-834e-d869f4079285', tool_call_id='0', artifact={opening_brace}'query': 'Real Madrid vs Leganes players 2025', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga', 'title': 'Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga', 'content': 'Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively. Fran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses. Eduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online: Site search Managing Madrid main menu Filed under: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Kylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory. Share this story Share All sharing options for: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Full match player ratings below: Andriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”. Antonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.', 'score': 0.8832463, 'raw_content': None}, {'url': 'https://www.espn.com/soccer/match/_/gameId/704946', 'title': 'Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN', 'content': 'Match Formations · 13. Lunin · 20. García · 22. Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.', 'score': 0.86413884, 'raw_content': None}], 'response_time': 0.89}), AIMessage(content='', additional_kwargs={opening_brace}'tool_calls': [{opening_brace}'function': {'arguments': {opening_brace}'query': 'Real Madrid vs Leganes starting lineup'{closing_brace}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'{closing_brace}]}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 27, 'prompt_tokens': 2212, 'total_tokens': 2239}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-68867df1-2012-47ac-9f01-42b071ef3a1f-0', tool_calls=[{opening_brace}'name': 'tavily_search_results_json', 'args': {opening_brace}'query': 'Real Madrid vs Leganes starting lineup'{closing_brace}, 'id': '0', 'type': 'tool_call'{closing_brace}]), ToolMessage(content='[{opening_brace}"title": "Starting lineups of Real Madrid and Leganés", "url": "https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025", "content": "Starting lineups of Real Madrid and Leganés\n\n\n\nThe Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé.\n\n\n\n\n\nReal Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up:\n13. Lunin\n17. Lucas V.\n35. Asencio\n22. Rüdiger\n20. Fran García\n15. Arda Güler\n10. Modrić\n6. Camavinga\n5. Bellingham\n21. Brahim\n9. Mbappé.\n\nSubstitutes:\n26. Fran González\n34. Sergio Mestre\n4. Alaba\n7. Vini Jr.\n8. Valverde\n11. Rodrygo\n14. Tchouameni\n16. Endrick\n18. Vallejo\n43. Diego Aguado.\n\nLeganés starting line-up:\n13. Dmitrovic\n5. Tapia\n6. Sergio G.\n7. Óscar\n10. Raba\n11. Cruz\n12. V. Rosier\n17. Neyou\n19. Diego G.\n20. Javi Hernández\n22. Nastasic. [...] Suplentes:\n1. Juan Soriano\n36. Abajas\n2. A. Alti\n3. Jorge Sáenz\n8. Cisse\n9. Miguel\n14. Darko\n18. Duk\n21. R. López\n23. Munir\n24. Chicco\n30. I. Diomande.\n\n\n\nDownload Now\n\nOfficial App Fan\n\nReal Madrid © 2025 All rights reserved", "score": 0.9465623}, {opening_brace}"title": "Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d", "content": "Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,", "score": 0.9224337}]', name='tavily_search_results_json', id='46721f2b-2df2-4da2-831a-ce94f6b4ff8f', tool_call_id='0', artifact={opening_brace}'query': 'Real Madrid vs Leganes starting lineup', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025', 'title': 'Starting lineups of Real Madrid and Leganés', 'content': 'Starting lineups of Real Madrid and Leganés The Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé. Real Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up: 13. Lunin 17. Lucas V. 35. Asencio 22. Rüdiger 20. Fran García 15. Arda Güler 10. Modrić 6. Camavinga 5. Bellingham 21. Brahim 9. Mbappé. Substitutes: 26. Fran González 34. Sergio Mestre 4. Alaba 7. Vini Jr. 8. Valverde 11. Rodrygo 14. Tchouameni 16. Endrick 18. Vallejo 43. Diego Aguado. Leganés starting line-up: 13. Dmitrovic 5. Tapia 6. Sergio G. 7. Óscar 10. Raba 11. Cruz 12. V. Rosier 17. Neyou 19. Diego G. 20. Javi Hernández 22. Nastasic. [...] Suplentes: 1. Juan Soriano 36. Abajas 2. A. Alti 3. Jorge Sáenz 8. Cisse 9. Miguel 14. Darko 18. Duk 21. R. López 23. Munir 24. Chicco 30. I. Diomande. Download Now Official App Fan Real Madrid © 2025 All rights reserved', 'score': 0.9465623, 'raw_content': None}, {'url': 'https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d', 'title': 'Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...', 'content': 'Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,', 'score': 0.9224337, 'raw_content': None}], 'response_time': 2.3}), AIMessage(content='The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 98, 'prompt_tokens': 2954, 'total_tokens': 3052}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-0bd921c6-1d94-4a4c-9d9c-d255d301e2d5-0')]}, next=(), config={'configurable': {'thread_id': '3', 'checkpoint_ns': '', 'checkpoint_id': '1f010a50-49f2-6904-800c-ec8d67fe5b92'{closing_brace}{closing_brace}, metadata={'source': 'loop', 'writes': {opening_brace}'chatbot_node': {opening_brace}'messages': [AIMessage(content='The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}'token_usage': {'completion_tokens': 98, 'prompt_tokens': 2954, 'total_tokens': 3052}, 'model': '', 'finish_reason': 'stop'{closing_brace}, id='run-0bd921c6-1d94-4a4c-9d9c-d255d301e2d5-0')]{closing_brace}{closing_brace}, 'thread_id': '3', 'step': 12, 'parents': {opening_brace}{closing_brace{closing_brace}{closing_brace}, created_at='2025-04-03T16:02:18.167222+00:00', parent_config={'configurable': {'thread_id': '3', 'checkpoint_ns': '', 'checkpoint_id': '1f010a50-1feb-6534-800b-079c102aaa71'{closing_brace}{closing_brace}, tasks=())

If we want to see the next node to be processed, we can use the next attribute.

	
snapshot.next
Copy
	
()

Since the graph has finished, next is empty. If you get a state from within a graph invocation, next indicates which node will run next.

The previous snapshot (snapshot) contains the current state values, the corresponding configuration, and the next node (next) to process. In our case, the graph has reached the END state, which is why next is empty.

We are going to rewrite all the code to make it more readable.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
import json
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
# State
class State(TypedDict):
messages: Annotated[list, add_messages]
# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]
# Create the LLM model
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)
# Tool node
tool_node = ToolNode(tools=tools_list)
# Functions
def chatbot_function(state: State):
return {opening_brace}"messages": [llm_with_tools.invoke(state["messages"])]}
# Start to build the graph
graph_builder = StateGraph(State)
# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges( "chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
# Compile the graph
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
# Display the graph
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
print(f"Error al visualizar el grafo: {e}")
Copy
	
Error al visualizar el grafo: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`
	
USER1_THREAD_ID = "1"
config_USER1 = {opening_brace}"configurable": {opening_brace}"thread_id": USER1_THREAD_ID{closing_brace}{closing_brace}
user_input = "Hi there! My name is Maximo."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER1,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi there! My name is Maximo.
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
	
user_input = "Do you remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config_USER1,
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Yes, I remember your name! You mentioned it's Maximo. It's nice to chat with you, Maximo. How can I assist you today?

Congratulations! Our chatbot can now maintain conversation state across all sessions thanks to the checkpoint system of LangGraph. This opens up possibilities for more natural and contextual interactions. The LangGraph controller even handles complex graph states.

Morelink image 91

Chatbot with summary messagelink image 92

If we are going to manage the conversation context to avoid using too many tokens, one thing we can do to improve the conversation is to add a message with a summary of the conversation. This can be useful for the previous example, where we have filtered out so much state that the LLM does not have enough context.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import RemoveMessage, trim_messages, SystemMessage, HumanMessage, AIMessage, RemoveMessage
      from langgraph.checkpoint.memory import MemorySaver
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      memory_saver = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
          summary: str
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Print functions
      def print_message(m):
          if isinstance(m, HumanMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[HumanMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, SystemMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[SystemMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, AIMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[AIMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, RemoveMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[RemoveMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[{type(m)}]: {line}")
                  else:
                      print(f"\t\t{line}")
      
      def print_state_summary(state: State):
          if state.get("summary"):
              summary_lines = state["summary"].split("\n")
              for i, line in enumerate(summary_lines):
                  if i == 0:
                      print(f"\t\tSummary of the conversation: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              print("\t\tNo summary of the conversation")
      
      def print_summary(summary: str):
          if summary:
              summary_lines = summary.split("\n")
              for i, line in enumerate(summary_lines):
                  if i == 0:
                      print(f"\t\tSummary of the conversation: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              print("\t\tNo summary of the conversation")
      
      # Nodes
      def filter_messages(state: State):
          print("\t--- 1 messages (input to filter_messages) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # Delete all but the 2 most recent messages if there are more than 2
          if len(state["messages"]) > 2:
              delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
          else:
              delete_messages = []
      
          print("\t--- 1 messages (output of filter_messages) ---")
          for m in delete_messages:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          return {"messages": delete_messages}
      
      def trim_messages_node(state: State):
          # print the messages received from filter_messages_node
          print("\n\n\t--- 2 messages (input to trim_messages) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # Trim the messages based on the specified parameters
          trimmed_messages = trim_messages(
              state["messages"],
              max_tokens=100,       # Maximum tokens allowed in the trimmed list
              strategy="last",     # Keep the latest messages
              token_counter=llm,   # Use the LLM's tokenizer to count tokens
              allow_partial=True,  # Allow cutting messages mid-way if needed
          )
      
          # Identify the messages that must be removed
          # This is crucial: determine which messages are in 'state["messages"]' but not in 'trimmed_messages'
          original_ids = {m.id for m in state["messages"]}
          trimmed_ids = {m.id for m in trimmed_messages}
          ids_to_remove = original_ids - trimmed_ids
          
          # Create a RemoveMessage for each message that must be removed
          messages_to_remove = [RemoveMessage(id=msg_id) for msg_id in ids_to_remove]
      
          # Print the result of the trimming
          print("\t--- 2 messages (output of trim_messages - after trimming) ---")
          if trimmed_messages:
              for m in trimmed_messages:
                  print_message(m)
          else:
              print("[Empty list - No messages after trimming]")
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          return {"messages": messages_to_remove}
      
      def chat_model_node(state: State):
          # Get summary of the conversation if it exists
          summary = state.get("summary", "")
      
          print("\n\n\t--- 3 messages (input to chat_model_node) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # If there is a summary, add it to the system message
          if summary:
              # Add the summary to the system message
              system_message = f"Summary of the conversation earlier: {summary}"
      
              # Add the system message to the messages at the beginning
              messages = [SystemMessage(content=system_message)] + state["messages"]
          
          # If there is no summary, just return the messages
          else:
              messages = state["messages"]
          print(f"\t--- 3 messages (input to chat_model_node) ---")
          for m in messages:
              print_message(m)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Invoke the LLM with the messages
          response = llm.invoke(messages)
      
          print("\t--- 3 messages (output of chat_model_node) ---")
          print_message(response)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      def summarize_conversation(state: State):
          # Get summary of the conversation if it exists
          summary = state.get("summary", "")
      
          print("\n\n\t--- 4 messages (input to summarize_conversation) ---")
          for m in state["messages"]:
              print_message(m)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # If there is a summary, add it to the system message
          if summary:
              summary_message = (
                  f"This is a summary of the conversation to date: {summary}\n\n"
                  "Extend the summary by taking into account the new messages above."
              )
          
          # If there is no summary, create a new one
          else:
              summary_message = "Create a summary of the conversation above."
          print(f"\t--- 4 summary message ---")
          summary_lines = summary_message.split("\n")
          for i, line in enumerate(summary_lines):
              if i == 0:
                  print(f"\t\t{line}")
              else:
                  print(f"\t\t{line}")
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Add prompt to the messages
          messages = state["messages"] + [HumanMessage(summary_message)]
      
          print("\t--- 4 messages (input to summarize_conversation with summary) ---")
          for m in messages:
              print_message(m)
          print("\t------------------------------------------------")
      
          # Invoke the LLM with the messages
          response = llm.invoke(messages)
      
          print("\t--- 4 messages (output of summarize_conversation) ---")
          print_message(response)
          print("\t------------------------------------------------")
      
          # Return the summary message in the correct state format
          return {"summary": response.content}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("filter_messages_node", filter_messages)
      graph_builder.add_node("trim_messages_node", trim_messages_node)
      graph_builder.add_node("chatbot_node", chat_model_node)
      graph_builder.add_node("summarize_conversation_node", summarize_conversation)
      
      # Connecto nodes
      graph_builder.add_edge(START, "filter_messages_node")
      graph_builder.add_edge("filter_messages_node", "trim_messages_node")
      graph_builder.add_edge("trim_messages_node", "chatbot_node")
      graph_builder.add_edge("chatbot_node", "summarize_conversation_node")
      graph_builder.add_edge("summarize_conversation_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory_saver)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 26

As we can see, we have:

  • Message filtering function: If there are more than 2 messages in the state, all messages except the last 2 are removed.* Message trimming function: Messages exceeding 100 tokens are removed.* Chatbot function: The model is run with the filtered and trimmed messages. Additionally, if there is a summary, it is added to the system message.* Summary function: A summary of the conversation is created.

We create a function to print the graph messages.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str, config: dict):
# Initialize a flag to track if an assistant response has been printed
assistant_response_printed = False
# Print the user's input immediately
print(f"\n\n{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
# Create the user's message with the HumanMessage class
user_message = HumanMessage(content=user_input)
# Stream events from the graph execution
for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
# event is a dictionary mapping node names to their output
# Example: {opening_brace}'chatbot_node': {opening_brace}'messages': [...]{closing_brace}{closing_brace} or {opening_brace}'summarize_conversation_node': {opening_brace}'summary': '...'{closing_brace}{closing_brace}
# Iterate through node name and its output
for node_name, value in event.items():
# Check if this event is from the chatbot node which should contain the assistant's reply
if node_name == 'messages':
# Ensure the output format is as expected (list of messages)
if isinstance(value, list):
# Get the messages from the event
messages = value
# Ensure 'messages' is a non-empty list
if isinstance(messages, list) and messages:
# Get the last message (presumably the assistant's reply)
last_message = messages[-1]
# Ensure the message is an instance of AIMessage
if isinstance(last_message, AIMessage):
# Ensure the message has content to display
if hasattr(last_message, 'content'):
# Print the assistant's message content
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{last_message.content}")
assistant_response_printed = True # Mark that we've printed the response
# Fallback if no assistant response was printed (e.g., graph error before chatbot_node)
if not assistant_response_printed:
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}[No response generated or error occurred]")
Copy

Now we run the graph

	
USER1_THREAD_ID = "1"
config_USER1 = {opening_brace}"configurable": {opening_brace}"thread_id": USER1_THREAD_ID{closing_brace}{closing_brace}
while True:
user_input = input(f"\n\nUser: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}Exiting...")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
events = stream_graph_updates(user_input, config_USER1)
Copy
	
User: Hello
--- 1 messages (input to filter_messages) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 1 messages (output of filter_messages) ---
No summary of the conversation
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
No summary of the conversation
------------------------------------------------
Assistant: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
--- 4 messages (input to summarize_conversation) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
No summary of the conversation
------------------------------------------------
--- 4 summary message ---
Create a summary of the conversation above.
No summary of the conversation
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: Create a summary of the conversation above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: I am studying about langgraph, do you know it?
--- 1 messages (input to filter_messages) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 1 messages (output of filter_messages) ---
[RemoveMessage]:
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[SystemMessage]: Summary of the conversation earlier: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
--- 4 messages (input to summarize_conversation) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 4 summary message ---
This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
User: I would like to know about using langsmith with huggingface llms, the integration of huggingface
--- 1 messages (input to filter_messages) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 1 messages (output of filter_messages) ---
[RemoveMessage]:
[RemoveMessage]:
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[SystemMessage]: Summary of the conversation earlier: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
--- 4 messages (input to summarize_conversation) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
[AIMessage]: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 4 summary message ---
This is a summary of the conversation to date: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
[AIMessage]: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
[HumanMessage]: This is a summary of the conversation to date: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's the extended summary of the conversation to date:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
**User:** Is there anything else you need assistance with?
**Qwen:** If you have any more questions about LangGraph or any other topic, feel free to ask! I'm here to help with any information or assistance you need.
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
User: Exiting...
Assistant: Goodbye!

If we go to the end of the conversation, we can see

--- 2 messages (input to trim_messages) ---
    	[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
    	
    	LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
    	
    	1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
    	2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
    	3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
    	4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
    	
    	If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
    	[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
    	Summary of the conversation: Sure! Here's an extended summary of the conversation:
    	
    	---
    	
    	**User:** Hello
    	
    	**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
    	
    	**User:** I am studying about langgraph, do you know it?
    	
    	**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
    	1. Visualizing Model Architecture
    	2. Comparing Models
    	3. Understanding Training Processes
    	4. Identifying Bottlenecks
    	
    	If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
    	
    	**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
    	User: Hello
    	Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
    	User: Create a summary of the conversation above.
    	Qwen: [Provided the summary you are now reading.]
    	
    	Is there anything else you need assistance with?
    	
    	**Qwen:** [Extended the summary you are now reading.]
    	
    	---
    	
    	Is there anything else you need assistance with?
    ------------------------------------------------

Vemos que en los mensajes del estado solo se conservan

[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
    	
    	LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
    	
    	1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
    	2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
    	3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
    	4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
    	
    	If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
    	[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface

Es decir, la función de filtrado solo mantiene los 2 últimos mensajes.

But later we can see

--- 2 messages (output of trim_messages - after trimming) ---I would like to know about using LangSmith with Hugging Face LLMs, the integration of Hugging FaceResumen de la conversación: ¡Claro! Aquí tienes un resumen extendido de la conversación:		
---		
**User:** Hello		
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.		
**User:** I am studying about langgraph, do you know it?		
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:1. Visualizing Model Architecture2. Comparing Models3. Understanding Training Processes4. Identifying Bottlenecks		
Si tienes preguntas específicas o aspectos de LangGraph que te interesen, no dudes en hacérmelo saber!		
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:HelloQwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.The instruction provided is not part of a conversation but rather a standalone request to create a summary. Since there is no preceding conversation to summarize, I will translate the given text as requested:

User: Cree un resumen de la conversación anterior.


      However, for your requirement, here is the translation of the provided instruction:
      

User: Create a summary of the conversation above.

Qwen:
      ¿Necesitas ayuda con algo más?		
      **Qwen:** [Extended the summary you are now reading.]		
      ---		
      ¿Necesitas ayuda con algo más?------------------------------------------------```
      
      That is, the trimming function removes the assistant's message because it exceeds 100 tokens.

Even by deleting messages, so the LLM doesn't have them as context, we can still have a conversation thanks to the summary of the conversation that we are generating.

Save state in SQLitelink image 93

We have seen how to save the state of the graph in memory, but once we finish the process, that memory is lost, so we are going to see how to save it in SQLite.

First we need to install the sqlite package for LangGraph.

pip install langgraph-checkpoint-sqlite```

We import the sqlite and langgraph-checkpoint-sqlite libraries. Previously, when we saved the state in memory, we used memory_saver. Now, we will use SqliteSaver to save the state in an SQLite database.

	
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver
import os
# Create the directory if it doesn't exist
os.makedirs("state_db", exist_ok=True)
db_path = "state_db/langgraph_sqlite.db"
conn = sqlite3.connect(db_path, check_same_thread=False)
memory = SqliteSaver(conn)
Copy

Let's create a basic chatbot to avoid adding complexity beyond the functionality we want to test.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chat_model_node(state: State):
          # Return the LLM's response in the correct state format
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 27

We define the function to print the graph messages.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str, config: dict):
# Initialize a flag to track if an assistant response has been printed
assistant_response_printed = False
# Print the user's input immediately
print(f"\n\n{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
# Create the user's message with the HumanMessage class
user_message = HumanMessage(content=user_input)
# Stream events from the graph execution
for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
# event is a dictionary mapping node names to their output
# Example: {opening_brace}'chatbot_node': {opening_brace}'messages': [...]{closing_brace}{closing_brace} or {opening_brace}'summarize_conversation_node': {opening_brace}'summary': '...'{closing_brace}{closing_brace}
# Iterate through node name and its output
for node_name, value in event.items():
# Check if this event is from the chatbot node which should contain the assistant's reply
if node_name == 'messages':
# Ensure the output format is as expected (list of messages)
if isinstance(value, list):
# Get the messages from the event
messages = value
# Ensure 'messages' is a non-empty list
if isinstance(messages, list) and messages:
# Get the last message (presumably the assistant's reply)
last_message = messages[-1]
# Ensure the message is an instance of AIMessage
if isinstance(last_message, AIMessage):
# Ensure the message has content to display
if hasattr(last_message, 'content'):
# Print the assistant's message content
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{last_message.content}")
assistant_response_printed = True # Mark that we've printed the response
# Fallback if no assistant response was printed (e.g., graph error before chatbot_node)
if not assistant_response_printed:
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}[No response generated or error occurred]")
Copy

We run the graph

	
USER1_THREAD_ID = "USER1"
config_USER1 = {opening_brace}"configurable": {opening_brace}"thread_id": USER1_THREAD_ID{closing_brace}{closing_brace}
while True:
user_input = input(f"\n\nUser: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}Exiting...")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
events = stream_graph_updates(user_input, config_USER1)
Copy
	
User: Hello, my name is Máximo
Assistant: Hello Máximo! It's a pleasure to meet you. How can I assist you today?
User: Exiting...
Assistant: Goodbye!

As you can see, I have only told you my name.

Now we restart the notebook to remove all data stored in the notebook's RAM and re-run the previous code.

We recreate the sqlite memory with SqliteSaver

	
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver
import os
# Create the directory if it doesn't exist
os.makedirs("state_db", exist_ok=True)
db_path = "state_db/langgraph_sqlite.db"
conn = sqlite3.connect(db_path, check_same_thread=False)
memory = SqliteSaver(conn)
Copy

We recreate the graph

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chat_model_node(state: State):
          # Return the LLM's response in the correct state format
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 28

We redefine the function to print the graph messages.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"
def stream_graph_updates(user_input: str, config: dict):
# Initialize a flag to track if an assistant response has been printed
assistant_response_printed = False
# Print the user's input immediately
print(f"\n\n{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}{opening_brace}user_input{closing_brace}")
# Create the user's message with the HumanMessage class
user_message = HumanMessage(content=user_input)
# Stream events from the graph execution
for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
# event is a dictionary mapping node names to their output
# Example: {opening_brace}'chatbot_node': {opening_brace}'messages': [...]{closing_brace}{closing_brace} or {opening_brace}'summarize_conversation_node': {opening_brace}'summary': '...'{closing_brace}{closing_brace}
# Iterate through node name and its output
for node_name, value in event.items():
# Check if this event is from the chatbot node which should contain the assistant's reply
if node_name == 'messages':
# Ensure the output format is as expected (list of messages)
if isinstance(value, list):
# Get the messages from the event
messages = value
# Ensure 'messages' is a non-empty list
if isinstance(messages, list) and messages:
# Get the last message (presumably the assistant's reply)
last_message = messages[-1]
# Ensure the message is an instance of AIMessage
if isinstance(last_message, AIMessage):
# Ensure the message has content to display
if hasattr(last_message, 'content'):
# Print the assistant's message content
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}{last_message.content}")
assistant_response_printed = True # Mark that we've printed the response
# Fallback if no assistant response was printed (e.g., graph error before chatbot_node)
if not assistant_response_printed:
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}[No response generated or error occurred]")
Copy

And we run it again

	
USER1_THREAD_ID = "USER1"
config_USER1 = {opening_brace}"configurable": {opening_brace}"thread_id": USER1_THREAD_ID{closing_brace}{closing_brace}
while True:
user_input = input(f"\n\nUser: ")
if user_input.lower() in ["quit", "exit", "q"]:
print(f"{opening_brace}COLOR_GREEN{closing_brace}User: {opening_brace}COLOR_RESET{closing_brace}Exiting...")
print(f"{opening_brace}COLOR_YELLOW{closing_brace}Assistant: {opening_brace}COLOR_RESET{closing_brace}Goodbye!")
break
events = stream_graph_updates(user_input, config_USER1)
Copy
	
User: What's my name?
Assistant: Your name is Máximo. It's nice to know and use your name as we chat. How can I assist you today, Máximo?
User: Exiting...
Assistant: Goodbye!

As can be seen, we have been able to recover the state of the graph from the SQLite database.

Long-term memory, memory between threadslink image 94

Memory is a cognitive function that allows people to store, retrieve, and use information to understand their present and future based on their past. There are several types of memory that can be used in AI applications.

Introduction to LangGraph Memory Storelink image 95

LangGraph provides the LangGraph Memory Store, which is a way to save and retrieve long-term memory across different threads. This way, in a conversation, a user can indicate that they like something, and in another conversation, the chatbot can retrieve this information to generate a more personalized response.

This is a class for persistent key-value stores (key-value).

When objects are stored in memory, three things are needed:* A namespace for the object, done through a tuple* A unique key* The value of the object Let's see an example

	
import uuid
from langgraph.store.memory import InMemoryStore
in_memory_store = InMemoryStore()
# Namespace for the memory to save
user_id = "1"
namespace_for_memory = (user_id, "memories")
# Save a memory to namespace as key and value
key = str(uuid.uuid4())
# The value needs to be a dictionary
value = {opening_brace}"food_preference" : "I like pizza"{closing_brace}
# Save the memory
in_memory_store.put(namespace_for_memory, key, value)
Copy

The object in_memory_store that we have created has several methods and one of them is search, which allows us to search by namespace

	
# Search
memories = in_memory_store.search(namespace_for_memory)
type(memories), len(memories)
Copy
	
(list, 1)

It's a list with a single value, which makes sense because we only stored one value, so let's take a look at it.

	
value = memories[0]
value.dict()
Copy
	
{opening_brace}'namespace': ['1', 'memories'],
'key': '70006131-948a-4d7a-bdce-78351c44fc4d',
'value': {'food_preference': 'I like pizza'{closing_brace},
'created_at': '2025-05-11T07:24:31.462465+00:00',
'updated_at': '2025-05-11T07:24:31.462468+00:00',
'score': None}

We can see its key and its value

	
# The key, value
memories[0].key, memories[0].value
Copy
	
('70006131-948a-4d7a-bdce-78351c44fc4d', {'food_preference': 'I like pizza'})

We can also use the get method to obtain an object from memory based on its namespace and key.

	
# Get the memory by namespace and key
memory = in_memory_store.get(namespace_for_memory, key)
memory.dict()
Copy
	
{opening_brace}'namespace': ['1', 'memories'],
'key': '70006131-948a-4d7a-bdce-78351c44fc4d',
'value': {'food_preference': 'I like pizza'{closing_brace},
'created_at': '2025-05-11T07:24:31.462465+00:00',
'updated_at': '2025-05-11T07:24:31.462468+00:00'}

Just like we used checkpoints for short-term memory, for long-term memory we are going to use LangGraph Store

Chatbot with long-term memorylink image 96

We created a basic chatbot, with long-term memory and short-term memory.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant that can answer questions and help with tasks.
      You have access to a long-term memory that you can use to answer questions and help with tasks.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      CREATE_MEMORY_INSTRUCTION = """You are a helpful assistant that gets information from the user to personalize your responses.
      
      # INFORMATION FROM THE USER:
      {memory}
      
      # INSTRUCTIONS:
      1. Carefully review the chat history
      2. Identify new information from the user, such as:
         - Personal details (name, location)
         - Preferences (likes, dislikes)
         - Interests and hobbies
         - Past experiences
         - Goals or future plans
      3. Combine any new information with the existing memory
      4. Format the memory as a clear, bulleted list
      5. If new information conflicts with existing memory, keep the most recent version
      
      Remember: Only include factual information directly stated by the user. Do not make assumptions or inferences.
      
      Based on the chat history below, please update the user information:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          key = "user_memory"
          existing_memory = store.get(namespace, key)
      
          # Extract the actual memory content if it exists and add a prefix
          if existing_memory:
              # Value is a dictionary with a memory key
              existing_memory_content = existing_memory.value.get('memory')
          else:
              existing_memory_content = "No existing memory found."
          if isinstance(existing_memory_content, str):
              print(f"\t[Call model debug] Existing memory: {existing_memory_content}")
          else:
              print(f"\t[Call model debug] Existing memory: {existing_memory_content.content}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=existing_memory_content)
          
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
              
          # Extract the memory
          if existing_memory:
              existing_memory_content = existing_memory.value.get('memory')
          else:
              existing_memory_content = "No existing memory found."
          if isinstance(existing_memory_content, str):
              print(f"\t[Write memory debug] Existing memory: {existing_memory_content}")
          else:
              print(f"\t[Write memory debug] Existing memory: {existing_memory_content.content}")
      
          # Format the memory in the system prompt
          system_msg = CREATE_MEMORY_INSTRUCTION.format(memory=existing_memory_content)
          new_memory = llm.invoke([SystemMessage(content=system_msg)]+state['messages'])
          if isinstance(new_memory, str):
              print(f"\n\t[Write memory debug] New memory: {new_memory}")
          else:
              print(f"\n\t[Write memory debug] New memory: {new_memory.content}")
      
          # Overwrite the existing memory in the store 
          key = "user_memory"
      
          # Write value as a dictionary with a memory key
          store.put(namespace, key, {"memory": new_memory.content})
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 29

Let's test it out.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Existing memory: No existing memory found.
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I assist you today?
[Write memory debug] Existing memory: No existing memory found.
[Write memory debug] New memory:
Here's the updated information I have about you:
- Name: Maximo
	
# User input
input_messages = [HumanMessage(content="I like to bike around San Francisco")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I like to bike around San Francisco
[Call model debug] Existing memory:
Here's the updated information I have about you:
- Name: Maximo
================================== Ai Message ==================================
That sounds like a great way to explore the city! San Francisco has some fantastic biking routes. Are there any specific areas or routes you enjoy biking the most, or are you looking for some new recommendations?
[Write memory debug] Existing memory:
Here's the updated information I have about you:
- Name: Maximo
[Write memory debug] New memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco

If we recover long-term memory

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.dict()
Copy
	
{opening_brace}'namespace': ['memory', '1'],
'key': 'user_memory',
'value': {'memory': " Here's the updated information about you: - Name: Maximo - Location: San Francisco - Interest: Biking around San Francisco"},
'created_at': '2025-05-11T09:41:26.739207+00:00',
'updated_at': '2025-05-11T09:41:26.739211+00:00'}

We get its value

	
print(existing_memory.value.get('memory'))
Copy
	
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco

Now we can start a new conversation thread, but with the same long-term memory. We will see that the chatbot remembers the user's information.

	
# We supply a user ID for across-thread memory as well as a new thread ID
config = {opening_brace}"configurable": {opening_brace}"thread_id": "2", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="Hi! Where would you recommend that I go biking?")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi! Where would you recommend that I go biking?
[Call model debug] Existing memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco
================================== Ai Message ==================================
Hi there! Given my interest in biking around San Francisco, I'd recommend a few great routes:
1. **Golden Gate Park**: This is a fantastic place to bike, with wide paths that are separated from vehicle traffic. You can start at the eastern end near Stow Lake and bike all the way to the western end at Ocean Beach. There are plenty of scenic spots to stop and enjoy along the way.
2. **The Embarcadero**: This route follows the waterfront from Fisherman’s Wharf to the Bay Bridge. It’s relatively flat and offers beautiful views of the San Francisco Bay and the city skyline. You can also stop by the Ferry Building for some delicious food and drinks.
3. **Presidio**: The Presidio is a large park with numerous trails that offer diverse landscapes, from forests to coastal bluffs. The Crissy Field area is especially popular for its views of the Golden Gate Bridge.
4. **Golden Gate Bridge**: Riding across the Golden Gate Bridge is a must-do experience. You can start from the San Francisco side, bike across the bridge, and then continue into Marin County for a longer ride with stunning views.
5. **Lombard Street**: While not a long ride, biking down the famous crooked section of Lombard Street can be a fun and memorable experience. Just be prepared for the steep hill on the way back up!
Each of these routes offers a unique experience, so you can choose based on your interests and the type of scenery you enjoy. Happy biking!
[Write memory debug] Existing memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco
[Write memory debug] New memory: 😊
Let me know if you have any other questions or if you need more recommendations!

I started a new conversation thread, I asked where I could go cycling, they remembered that I had told them I like to go cycling in San Francisco and responded with places in San Francisco that I could visit.

Chatbot with user profilelink image 97

Note: We will be doing this section using Sonnet 3.7, as the HuggingFace integration does not have the with_structured_output functionality that provides a structured output with a defined structure.

We can create types so that the LLM generates an output with a structure defined by us.

Let's create a type definition for the user profile.

	
from typing import TypedDict, List
class UserProfile(TypedDict):
"""User profile schema with typed fields"""
user_name: str # The user's preferred name
interests: List[str] # A list of the user's interests
Copy

Now we recreate the graph, but this time with the UserProfile type.

We are going to use with_structured_output so that the LLM generates an output with a structure defined by us, and we will define that structure using the Subjects class, which is a BaseModel class from Pydantic.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      from pydantic import BaseModel, Field
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_structured_output = llm.with_structured_output(UserProfile)
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant with memory that provides information about the user. 
      If you have memory for this user, use it to personalize your responses.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      CREATE_MEMORY_INSTRUCTION = """Create or update a user profile memory based on the user's chat history. 
      This will be saved for long-term memory. If there is an existing memory, simply update it. 
      Here is the existing memory (it may be empty): {memory}"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"
              )
          else:
              formatted_memory = None
          # if isinstance(existing_memory_content, str):
          print(f"\t[Call model debug] Existing memory: {formatted_memory}")
          # else:
          #     print(f"\t[Call model debug] Existing memory: {existing_memory_content.content}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=formatted_memory)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"
              )
          else:
              formatted_memory = None
          print(f"\t[Write memory debug] Existing memory: {formatted_memory}")
              
          # Format the existing memory in the instruction
          system_msg = CREATE_MEMORY_INSTRUCTION.format(memory=formatted_memory)
      
          # Invoke the model to produce structured output that matches the schema
          new_memory = llm_with_structured_output.invoke([SystemMessage(content=system_msg)]+state['messages'])
          print(f"\t[Write memory debug] New memory: {new_memory}")
      
          # Overwrite the existing use profile memory
          key = "user_memory"
          store.put(namespace, key, new_memory)
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 30

We run the graph

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo and I like to bike around Madrid and eat salads.")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo and I like to bike around Madrid and eat salads.
[Call model debug] Existing memory: None
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. I see you enjoy biking around Madrid and eating salads - those are great healthy habits! Madrid has some beautiful areas to explore by bike, and the city has been improving its cycling infrastructure in recent years.
Is there anything specific about Madrid's cycling routes or perhaps some good places to find delicious salads in the city that you'd like to know more about? I'd be happy to help with any questions you might have.
[Write memory debug] Existing memory: None
[Write memory debug] New memory: {'user_name': 'Maximo', 'interests': ['biking', 'Madrid', 'salads']{closing_brace}

As we can see, the LLM has generated an output with the structure defined by us.

Let's see how long-term memory has been stored.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.value
Copy
	
{'user_name': 'Maximo', 'interests': ['biking', 'Madrid', 'salads']}

Morelink image 98

Update Structured Schemas with Trustcalllink image 99

In the previous example, we created user profiles with structured data In fact, what happens under the hood is that the user profile is regenerated with each interaction. This generates an unnecessary expense of tokens and can lead to the loss of important user profile information.

So to solve it, we are going to use the TrustCall library, which is an open source library for updating JSON schemas. When it needs to update a JSON schema, it does so incrementally, that is, it doesn't delete the previous schema, but rather adds the new fields.

Let's create a conversation example to see how it works.

	
from langchain_core.messages import HumanMessage, AIMessage
# Conversation
conversation = [HumanMessage(content="Hi, I'm Maximo."),
AIMessage(content="Nice to meet you, Maximo."),
HumanMessage(content="I really like playing soccer.")]
Copy

We create a structured schema and an LLM model

	
from pydantic import BaseModel, Field
from typing import List
# Schema
class UserProfile(BaseModel):
"""User profile schema with typed fields"""
user_name: str = Field(description="The user's preferred name")
interests: List[str] = Field(description="A list of the user's interests")
from langchain_anthropic import ChatAnthropic
import os
import dotenv
dotenv.load_dotenv()
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM model
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
Copy

We use the create_extractor function from trustcall to create a structured data extractor

	
from trustcall import create_extractor
# Create the extractor
trustcall_extractor = create_extractor(
llm,
tools=[UserProfile],
tool_choice="UserProfile"
)
Copy

As can be seen, the trustcall_extractor method is given an llm, which will be used as the search engine.

We extracted the structured data

	
from langchain_core.messages import SystemMessage
# Instruction
system_msg = "Extract the user profile from the following conversation"
# Invoke the extractor
result = trustcall_extractor.invoke({"messages": [SystemMessage(content=system_msg)]+conversation})
result
Copy
	
{opening_brace}'messages': [AIMessage(content=[{opening_brace}'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'input': {'user_name': 'Maximo', 'interests': ['soccer']{closing_brace}, 'name': 'UserProfile', 'type': 'tool_use'{closing_brace}], additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}'id': 'msg_01TEB3FeDKLAeHJtbKo5noyW', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 497, 'output_tokens': 56}, 'model_name': 'claude-3-7-sonnet-20250219'{closing_brace}, id='run-8a15289b-fd39-4a2d-878a-fa6feaa805c5-0', tool_calls=[{opening_brace}'name': 'UserProfile', 'args': {'user_name': 'Maximo', 'interests': ['soccer']{closing_brace}, 'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'type': 'tool_call'{closing_brace}], usage_metadata={'input_tokens': 497, 'output_tokens': 56, 'total_tokens': 553, 'input_token_details': {'cache_read': 0, 'cache_creation': 0{closing_brace}{closing_brace})],
'responses': [UserProfile(user_name='Maximo', interests=['soccer'])],
'response_metadata': [{opening_brace}'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY'{closing_brace}],
'attempts': 1}

Let's take a look at the messages that have been generated to extract the structured data

	
for m in result["messages"]:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
[{'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'input': {'user_name': 'Maximo', 'interests': ['soccer']}, 'name': 'UserProfile', 'type': 'tool_use'}]
Tool Calls:
UserProfile (toolu_01WfgbD1fG3rJYAXGrjqjfVY)
Call ID: toolu_01WfgbD1fG3rJYAXGrjqjfVY
Args:
user_name: Maximo
interests: ['soccer']

The UserProfile schema has been updated with the new data.

	
schema = result["responses"]
schema
Copy
	
[UserProfile(user_name='Maximo', interests=['soccer'])]

As we can see, the schema is a list, let's check the data type of its only element

	
type(schema[0])
Copy
	
__main__.UserProfile

We can convert it to a dictionary with model_dump

	
schema[0].model_dump()
Copy
	
{opening_brace}'user_name': 'Maximo', 'interests': ['soccer']{closing_brace}

Thanks to giving trustcall_extractor an LLM, we can ask it what we want it to extract

Let's simulate that the conversation continues to see how the schema updates.

	
# Update the conversation
updated_conversation = [HumanMessage(content="Hi, I'm Maximo."),
AIMessage(content="Nice to meet you, Maximo."),
HumanMessage(content="I really like playing soccer."),
AIMessage(content="It is great to play soccer! Where do you go after playing soccer?"),
HumanMessage(content="I really like to go to a bakery after playing soccer."),]
Copy

We ask the model to update the schema (a JSON) using the trustcall library.

	
# Update the instruction
system_msg = f"""Update the memory (JSON doc) to incorporate new information from the following conversation"""
# Invoke the extractor with the updated instruction and existing profile with the corresponding tool name (UserProfile)
result = trustcall_extractor.invoke({"messages": [SystemMessage(content=system_msg)]+updated_conversation},
{opening_brace}"existing": {opening_brace}"UserProfile": schema[0].model_dump(){closing_brace}{closing_brace})
result
Copy
	
{'messages': [AIMessage(content=[{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'input': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'name': 'UserProfile', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01RYUJvCdzL4b8kBYKo4BtQf', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 538, 'output_tokens': 60}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run-06994472-5ba0-46cc-a512-5fcacce283fc-0', tool_calls=[{'name': 'UserProfile', 'args': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'type': 'tool_call'}], usage_metadata={'input_tokens': 538, 'output_tokens': 60, 'total_tokens': 598, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})],
'responses': [UserProfile(user_name='Maximo', interests=['soccer', 'bakeries'])],
'response_metadata': [{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb'}],
'attempts': 1}

Let's take a look at the messages that have been generated to update the schema

	
for m in result["messages"]:
m.pretty_print()
Copy
	
================================== Ai Message ==================================
[{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'input': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'name': 'UserProfile', 'type': 'tool_use'}]
Tool Calls:
UserProfile (toolu_01K1zTh33kXDAw1h18Yh2HBb)
Call ID: toolu_01K1zTh33kXDAw1h18Yh2HBb
Args:
user_name: Maximo
interests: ['soccer', 'bakeries']

We see the updated schema

	
updated_schema = result["responses"][0]
updated_schema.model_dump()
Copy
	
{'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}

Chatbot with updated user profile using Trustcalllink image 100

We recreate the graph that updates the user profile, but now with the trustcall library.

from pydantic import BaseModel, Field
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      from pydantic import BaseModel, Field
      import os
      import dotenv
      from trustcall import create_extractor
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Schema 
      class UserProfile(BaseModel):
          """ Profile of a user """
          user_name: str = Field(description="The user's preferred name")
          user_location: str = Field(description="The user's location")
          interests: list = Field(description="A list of the user's interests")
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      
      # Create the extractor
      trustcall_extractor = create_extractor(
          llm,
          tools=[UserProfile],
          tool_choice="UserProfile", # Enforces use of the UserProfile tool
      )
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant with memory that provides information about the user. 
      If you have memory for this user, use it to personalize your responses.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      TRUSTCALL_INSTRUCTION = """Create or update the memory (JSON doc) to incorporate information from the following conversation:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Location: {memory_dict.get('user_location', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"      
              )
          else:
              formatted_memory = None
          print(f"\t[Call model debug] Existing memory: {formatted_memory}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=formatted_memory)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
              
          # Get the profile as the value from the list, and convert it to a JSON doc
          existing_profile = {"UserProfile": existing_memory.value} if existing_memory else None
          print(f"\t[Write memory debug] Existing profile: {existing_profile}")
          
          # Invoke the extractor
          result = trustcall_extractor.invoke({"messages": [SystemMessage(content=TRUSTCALL_INSTRUCTION)]+state["messages"], "existing": existing_profile})
          
          # Get the updated profile as a JSON object
          updated_profile = result["responses"][0].model_dump()
          print(f"\t[Write memory debug] Updated profile: {updated_profile}")
      
          # Save the updated profile
          key = "user_memory"
          store.put(namespace, key, updated_profile)
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 31

We start the conversation

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Existing memory: None
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I help you today? Whether you have questions, need information, or just want to chat, I'm here to assist you. Is there something specific you'd like to talk about?
[Write memory debug] Existing profile: None
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': '<UNKNOWN>', 'interests': []}

As we can see, it doesn't know the user's location or interests. Let's update the user's profile.

	
# User input
input_messages = [HumanMessage(content="I like to play soccer and I live in Madrid")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I like to play soccer and I live in Madrid
[Call model debug] Existing memory: Name: Maximo
Location: <UNKNOWN>
Interests:
================================== Ai Message ==================================
Hello Maximo! It's great to learn that you live in Madrid and enjoy playing soccer. Madrid is a fantastic city with a rich soccer culture, being home to world-famous clubs like Real Madrid and Atlético Madrid.
Soccer is truly a way of life in Spain, so you're in a perfect location for your interest. Do you support any particular team in Madrid? Or perhaps you enjoy playing soccer recreationally in the city's parks and facilities?
Is there anything specific about Madrid or soccer you'd like to discuss further?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': '<UNKNOWN>', 'interests': []{closing_brace}{closing_brace}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']{closing_brace}

You have updated the profile with the user's location and interests.

Let's check the updated memory

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.dict()
Copy
	
{'namespace': ['memory', '1'],
'key': 'user_memory',
'value': {'user_name': 'Maximo',
'user_location': 'Madrid',
'interests': ['soccer']},
'created_at': '2025-05-12T17:35:03.583258+00:00',
'updated_at': '2025-05-12T17:35:03.583259+00:00'}

We see the schema with the user profile updated

	
# The user profile saved as a JSON object
existing_memory.value
Copy
	
{'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']}

Let's add a new user interest

	
# User input
input_messages = [HumanMessage(content="I also like to play basketball")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I also like to play basketball
[Call model debug] Existing memory: Name: Maximo
Location: Madrid
Interests: soccer
================================== Ai Message ==================================
That's great to know, Maximo! It's nice that you enjoy both soccer and basketball. Basketball is also quite popular in Spain, with Liga ACB being one of the strongest basketball leagues in Europe.
In Madrid, you have the opportunity to follow Real Madrid's basketball section, which is one of the most successful basketball teams in Europe. The city offers plenty of courts and facilities where you can play basketball too.
Do you play basketball casually with friends, or are you part of any local leagues in Madrid? And how do you balance your time between soccer and basketball?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']{closing_brace}{closing_brace}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']{closing_brace}

We review the updated memory.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.value
Copy
	
{'user_name': 'Maximo',
'user_location': 'Madrid',
'interests': ['soccer', 'basketball']}

It has correctly added the new user interest.

With this long-term memory stored, we can start a new thread and the chatbot will have access to our updated profile.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "2", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="What soccer players do you recommend for me?")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What soccer players do you recommend for me?
[Call model debug] Existing memory: Name: Maximo
Location: Madrid
Interests: soccer, basketball
================================== Ai Message ==================================
Based on your interest in soccer, I can recommend some players who might appeal to you. Since you're from Madrid, you might already follow Real Madrid or Atlético Madrid players, but here are some recommendations:
From La Liga:
- Vinícius Júnior and Jude Bellingham (Real Madrid)
- Antoine Griezmann (Atlético Madrid)
- Robert Lewandowski (Barcelona)
- Lamine Yamal (Barcelona's young talent)
International stars:
- Kylian Mbappé
- Erling Haaland
- Mohamed Salah
- Kevin De Bruyne
You might also enjoy watching players with creative playing styles since you're interested in basketball as well, which is a sport that values creativity and flair - players like Rodrigo De Paul or João Félix.
Is there a particular league or playing style you prefer in soccer?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']{closing_brace}{closing_brace}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']{closing_brace}

Since it knows I live in Madrid, it first suggested Spanish La Liga football players. And then it suggested players from other leagues.

Chatbot with user document collections updated with Trustcalllink image 101

Another approach is to store a collection of documents instead of saving the user profile in a single document. This way, we are not tied to a single closed schema. Let's see how to do it

from langgraph.graph import StateGraph, MessagesState, START, END
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langchain_core.messages import merge_message_runs
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      from trustcall import create_extractor
      from pydantic import BaseModel, Field
      import uuid
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Memory schema
      class Memory(BaseModel):
          """A memory item representing a piece of information learned about the user."""
          content: str = Field(description="The main content of the memory. For example: User expressed interest in learning about French.")
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      
      # Create the extractor
      trustcall_extractor = create_extractor(
          llm,
          tools=[Memory],
          tool_choice="Memory",
          # This allows the extractor to insert new memories
          enable_inserts=True,
      )
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful chatbot. You are designed to be a companion to a user. 
      You have a long term memory which keeps track of information you learn about the user over time.
      Current Memory (may include updated memories from this conversation): 
      {memory}"""
      
      # Create new memory from the chat history and any existing memory
      TRUSTCALL_INSTRUCTION = """Reflect on following interaction. 
      Use the provided tools to retain any necessary memories about the user. 
      Use parallel tool calling to handle updates and insertions simultaneously:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memories", user_id)
          memories = store.search(namespace)
          print(f"\t[Call model debug] Memories: {memories}")
      
          # Format the memories for the system prompt
          info = "\n".join(f"- {mem.value['content']}" for mem in memories)
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=info)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Define the namespace for the memories
          namespace = ("memories", user_id)
      
          # Retrieve the most recent memories for context
          existing_items = store.search(namespace)
      
          # Format the existing memories for the Trustcall extractor
          tool_name = "Memory"
          existing_memories = ([(existing_item.key, tool_name, existing_item.value)
                                for existing_item in existing_items]
                                if existing_items
                                else None
                              )
          print(f"\t[Write memory debug] Existing memories: {existing_memories}")
      
          # Merge the chat history and the instruction
          updated_messages=list(merge_message_runs(messages=[SystemMessage(content=TRUSTCALL_INSTRUCTION)] + state["messages"]))
      
          # Invoke the extractor
          result = trustcall_extractor.invoke({"messages": updated_messages, 
                                              "existing": existing_memories})
      
          # Save the memories from Trustcall to the store
          for r, rmeta in zip(result["responses"], result["response_metadata"]):
              store.put(namespace,
                        rmeta.get("json_doc_id", str(uuid.uuid4())),
                        r.model_dump(mode="json"),
                  )
          print(f"\t[Write memory debug] Saved memories: {result['responses']}")
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 32

We start a new conversation

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Memories: []
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. I'm your companion chatbot, here to chat, help answer questions, or just be someone to talk to.
I'll remember your name is Maximo for our future conversations. What would you like to talk about today? How are you doing?
[Write memory debug] Existing memories: None
[Write memory debug] Saved memories: [Memory(content="User's name is Maximo.")]

We add a new user interest

	
# User input
input_messages = [HumanMessage(content="I like to play soccer")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I like to play soccer
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None)]
================================== Ai Message ==================================
That's great to know, Maximo! Soccer is such a wonderful sport. Do you play on a team, or more casually with friends? I'd also be curious to know what position you typically play, or if you have a favorite professional team you follow. I'll remember that you enjoy soccer for our future conversations.
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."})]
[Write memory debug] Saved memories: [Memory(content='User enjoys playing soccer.')]

As we can see, the user's new interest has been added to the memory.

Let's check the updated memory

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memories", user_id)
memories = long_term_memory.search(namespace)
for m in memories:
print(m.dict())
Copy
	
{'namespace': ['memories', '1'], 'key': '6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'value': {'content': "User's name is Maximo."}, 'created_at': '2025-05-12T18:32:38.070902+00:00', 'updated_at': '2025-05-12T18:32:38.070903+00:00', 'score': None}
{'namespace': ['memories', '1'], 'key': '25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'value': {'content': 'User enjoys playing soccer.'}, 'created_at': '2025-05-12T18:32:42.558787+00:00', 'updated_at': '2025-05-12T18:32:42.558789+00:00', 'score': None}
	
for m in memories:
print(m.value)
Copy
	
{'content': "User's name is Maximo."}
{'content': 'User enjoys playing soccer.'}

We see that memory documents are saved, not a user profile.

Let's add a new user interest

	
# User input
input_messages = [HumanMessage(content="I also like to play basketball")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I also like to play basketball
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None), Item(namespace=['memories', '1'], key='25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', value={'content': 'User enjoys playing soccer.'{closing_brace}, created_at='2025-05-12T18:32:42.558787+00:00', updated_at='2025-05-12T18:32:42.558789+00:00', score=None)]
================================== Ai Message ==================================
That's awesome, Maximo! Both soccer and basketball are fantastic sports. I'll remember that you enjoy basketball as well. Do you find yourself playing one more than the other? And similar to soccer, do you play basketball with a team or more casually? Many people enjoy the different skills and dynamics each sport offers - soccer with its continuous flow and footwork, and basketball with its fast pace and shooting precision. Any favorite basketball teams you follow?
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."}), ('25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'Memory', {'content': 'User enjoys playing soccer.'})]
[Write memory debug] Saved memories: [Memory(content='User enjoys playing basketball.')]

We revisit the updated memory.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memories", user_id)
memories = long_term_memory.search(namespace)
for m in memories:
print(m.value)
Copy
	
{'content': "User's name is Maximo."}
{'content': 'User enjoys playing soccer.'}
{'content': 'User enjoys playing basketball.'}

We start a new conversation with a new thread

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {opening_brace}"configurable": {opening_brace}"thread_id": "2", "user_id": "1"{closing_brace}{closing_brace}
# User input
input_messages = [HumanMessage(content="What soccer players do you recommend for me?")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What soccer players do you recommend for me?
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None), Item(namespace=['memories', '1'], key='25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', value={'content': 'User enjoys playing soccer.'{closing_brace}, created_at='2025-05-12T18:32:42.558787+00:00', updated_at='2025-05-12T18:32:42.558789+00:00', score=None), Item(namespace=['memories', '1'], key='965f2e52-bea0-44d4-8534-4fce2bbc1c4b', value={'content': 'User enjoys playing basketball.'{closing_brace}, created_at='2025-05-12T18:33:38.613626+00:00', updated_at='2025-05-12T18:33:38.613629+00:00', score=None)]
================================== Ai Message ==================================
Hi Maximo! Since you enjoy soccer, I'd be happy to recommend some players you might find interesting to follow or learn from.
Based on your interests in both soccer and basketball, I might suggest players who are known for their athleticism and skill:
1. Lionel Messi - Widely considered one of the greatest players of all time
2. Cristiano Ronaldo - Known for incredible athleticism and dedication
3. Kylian Mbappé - Young talent with amazing speed and technical ability
4. Kevin De Bruyne - Master of passing and vision
5. Erling Haaland - Goal-scoring phenomenon
Is there a particular position or playing style you're most interested in? That would help me refine my recommendations further. I could also suggest players from specific leagues or teams if you have preferences!
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."}), ('25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'Memory', {'content': 'User enjoys playing soccer.'}), ('965f2e52-bea0-44d4-8534-4fce2bbc1c4b', 'Memory', {'content': 'User enjoys playing basketball.'})]
[Write memory debug] Saved memories: [Memory(content='User asked for soccer player recommendations, suggesting an active interest in following professional soccer beyond just playing it.')]

We see that it was remembered that we liked football and basketball.

Human in the looplink image 102

Although an agent can perform tasks, for certain tasks, human supervision is necessary. This is called human in the loop. So let's see how this can be done with LangGraph.

The persistence layer of LangGraph supports workflows with humans in the loop, allowing execution to pause and resume based on user feedback. The main interface for this functionality is the interrupt function. Calling interrupt within a node will stop the execution. Execution can be resumed, along with the new human contribution, passed in a Command primitive. interrupt is similar to the Python input() command, but with some additional considerations.

Let's add to the chatbot short-term memory and access to tools, but we'll make a change by adding a simple tool human_assistance. This tool uses interrupt to receive information from a human.

First we load the values of the API keys

	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

We create the graph

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
Copy

We define the tool for searching

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

Now we create the tool for human assistance

	
from langgraph.types import Command, interrupt
from langchain_core.tools import tool
@tool
def human_assistance(query: str) -> str:
"""
Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
After receiving the expert's response, you should provide an elaborated response to the user based on the information received
based on the information received, without calling this tool again.
Args:
query: The query to ask the human expert.
Returns:
The response from the human expert.
"""
human_response = interrupt({"query": query})
return human_response["data"]
Copy

LangGraph obtains information about the tools from the tool's documentation, that is, the function's docstring. Therefore, it is very important to generate a good docstring for the tool.

We create a list of tools

	
tools_list = [search_tool, human_assistance]
Copy

Next, the LLM with the bind_tools and we add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
assert len(message.tool_calls) <= 1
return {opening_brace}"messages": [message]}
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

If you notice, we have changed the way we define the function chatbot_function, as it now has to handle the interruption.

We add the tool_node to the graph

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

We add the START node to the graph

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

We create a checkpointer MemorySaver.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

We compile the graph with the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We represent it graphically

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 33

Now let's ask the chatbot a question that will involve the new human_assistance tool:

	
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you provide me with some advice?

As can be seen, the chatbot generated a call to the human assistance tool.

Tool Calls:human_assistance (0)Call ID: 0Args:I need some expert guidance for building an AI agent. Could you provide advice on key considerations, best practices, and potential pitfalls to avoid?```
But then the execution was interrupted. Let's check the status of the graph.
	
snapshot = graph.get_state(config)
snapshot.next
Copy
	
('tools',)

We see that it stopped at the tools node. We analyze how the human_assistance tool has been defined.

from langgraph.types import Command, interruptfrom langchain_core.tools import tool
@tooldef human_assistance(query: str) -> str:"""Solicite asistencia de un experto humano. Use esta herramienta SOLAMENTE UNA VEZ por conversación.Sure, I understand your instructions. Please provide me with the Markdown text you would like translated to English, and I will handle it according to your guidelines.basado en la información recibida, sin llamar a esta herramienta nuevamente.
Args:La consulta para preguntar al experto humano.
Devuelve:La respuesta del experto humano. 
(Note: This is not a translation but an example of how you should format your response. Please provide the actual translation for the given Markdown text.)"""```markdown
human_response = interrupt({"query": query})
```return human_response["data"]```

Calling the interrupt tool will stop the execution, similar to the Python input() function. Progress is maintained based on our choice of checkpointer. That is, the choice of where the graph state is saved. So if we are persisting (saving the graph state) with a database like SQLite, Postgres, etc., we can resume execution at any time as long as the database is alive. Here we are persisting (saving the state of the graph) with the checkpoint pointer in memory RAM, so we can resume at any time while our Python kernel is running. In my case, as long as I don't reset the kernel of my Jupyter Notebook. To resume execution, we pass a Command object that contains the data expected by the tool. The format of this data can be customized based on our needs. Here, we only need a dictionary with a data key.

	
human_response = (
"We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
"It's much more reliable and extensible than simple autonomous agents."
)
human_command = Command(resume={opening_brace}"data": human_response})
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you provide me with some advice?
================================= Tool Message =================================
Name: human_assistance
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent.It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
The experts recommend checking out LangGraph for building your AI agent. It's known for being more reliable and extensible compared to simple autonomous agents.

As we can see, the chatbot has waited for a human to provide the answer and then generated a response based on the received information. We asked for help from an expert on how to create agents, the human told it that the best option is to use LangGraph, and the chatbot generated a response based on that information.

But it still has the ability to perform web searches. So now we're going to ask for the latest news about LangGraph.

	
user_input = "What's the latest news about LangGraph?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What's the latest news about LangGraph?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain - Changelog", "url": "https://changelog.langchain.com/", "content": "LangGraph `interrupt`: Simplifying human-in-the-loop agents --------------------------------------------------- Our latest feature in LangGraph, interrupt , makes building human-in-the-loop workflows easier. Agents aren’t perfect, so keeping humans “in the loop”... December 16, 2024 [...] LangGraph 🔁 Modify graph state from tools in LangGraph --------------------------------------------- LangGraph's latest update gives you greater control over your agents by enabling tools to directly update the graph state. This is a game-changer for use... December 18, 2024 [...] LangGraph Platform Custom authentication & access control for LangGraph Platform ------------------------------------------------------------- Today, we're thrilled to announce Custom Authentication and Resource-Level Access Control for Python deployments in LangGraph Cloud and self-hosted... December 20, 2024", "score": 0.78650844}, {opening_brace}"title": "LangGraph 0.3 Release: Prebuilt Agents - LangChain Blog", "url": "https://blog.langchain.dev/langgraph-0-3-release-prebuilt-agents/", "content": "LangGraph 0.3 Release: Prebuilt Agents 2 min read Feb 27, 2025 By Nuno Campos and Vadym Barda Over the past year, we’ve invested heavily in making LangGraph the go-to framework for building AI agents. With companies like Replit, Klarna, LinkedIn and Uber choosing to build on top of LangGraph, we have more conviction than ever that we are on the right path. [...] Up to this point, we’ve had one higher level abstraction and it’s lived in the main langgraph package. It was create_react_agent, a wrapper for creating a simple tool calling agent. Today, we are splitting that out of langgraph as part of a 0.3 release, and moving it into langgraph-prebuilt. We are also introducing a new set of prebuilt agents built on top of LangGraph, in both Python and JavaScript. Over the past three weeks, we’ve already released a few of these: [...] Published Time: 2025-02-27T15:09:15.000Z LangGraph 0.3 Release: Prebuilt Agents Skip to content Case Studies In the Loop LangChain Docs Changelog Sign in Subscribe", "score": 0.72348577}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain - Changelog", "url": "https://changelog.langchain.com/", "content": "LangGraph 🔁 Modify graph state from tools in LangGraph --------------------------------------------- LangGraph's latest update gives you greater control over your agents by enabling tools to directly update the graph state. This is a game-changer for use... December 18, 2024 [...] LangGraph `interrupt`: Simplifying human-in-the-loop agents --------------------------------------------------- Our latest feature in LangGraph, interrupt , makes building human-in-the-loop workflows easier. Agents aren’t perfect, so keeping humans “in the loop”... December 16, 2024 [...] LangGraph Platform Custom authentication & access control for LangGraph Platform ------------------------------------------------------------- Today, we're thrilled to announce Custom Authentication and Resource-Level Access Control for Python deployments in LangGraph Cloud and self-hosted... December 20, 2024", "score": 0.79732054}, {opening_brace}"title": "LangGraph 0.3 Release: Prebuilt Agents - LangChain Blog", "url": "https://blog.langchain.dev/langgraph-0-3-release-prebuilt-agents/", "content": "LangGraph 0.3 Release: Prebuilt Agents 2 min read Feb 27, 2025 By Nuno Campos and Vadym Barda Over the past year, we’ve invested heavily in making LangGraph the go-to framework for building AI agents. With companies like Replit, Klarna, LinkedIn and Uber choosing to build on top of LangGraph, we have more conviction than ever that we are on the right path. [...] Up to this point, we’ve had one higher level abstraction and it’s lived in the main langgraph package. It was create_react_agent, a wrapper for creating a simple tool calling agent. Today, we are splitting that out of langgraph as part of a 0.3 release, and moving it into langgraph-prebuilt. We are also introducing a new set of prebuilt agents built on top of LangGraph, in both Python and JavaScript. Over the past three weeks, we’ve already released a few of these: [...] Published Time: 2025-02-27T15:09:15.000Z LangGraph 0.3 Release: Prebuilt Agents Skip to content Case Studies In the Loop LangChain Docs Changelog Sign in Subscribe", "score": 0.7552947}]
================================== Ai Message ==================================
The latest news about LangGraph includes several updates and releases. Firstly, the 'interrupt' feature has been added, which simplifies creating human-in-the-loop workflows, essential for maintaining oversight of AI agents. Secondly, an update allows tools to modify the graph state directly, providing more control over the agents. Lastly, custom authentication and resource-level access control have been implemented for Python deployments in LangGraph Cloud and self-hosted environments. In addition, LangGraph released version 0.3, which introduces prebuilt agents in both Python and JavaScript, aimed at making it even easier to develop AI agents.

He has looked for the latest news about LangGraph and has generated a response based on the information received.

Let's write everything together so it is more understandable

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
import json
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
# State
class State(TypedDict):
messages: Annotated[list, add_messages]
# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool_search = TavilySearchResults(api_wrapper=wrapper, max_results=2)
@tool
def human_assistance(query: str) -> str:
"""
Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
After receiving the expert's response, you should provide an elaborated response to the user based on the information received
based on the information received, without calling this tool again.
Args:
query: The query to ask the human expert.
Returns:
The response from the human expert.
"""
human_response = interrupt({"query": query})
return human_response["data"]
tools_list = [tool_search, human_assistance]
# Create the LLM model
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)
# Tool node
tool_node = ToolNode(tools=tools_list)
# Functions
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
assert len(message.tool_calls) <= 1
return {opening_brace}"messages": [message]}
# Start to build the graph
graph_builder = StateGraph(State)
# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges( "chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
# Compile the graph
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
# Display the graph
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
print(f"Error al visualizar el grafo: {e}")
Copy
	
Error al visualizar el grafo: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

We ask the chatbot for help again to create agents. We request that it seek assistance.

	
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need expert guidance for building an AI agent.

We see in what state the graph has been left

	
snapshot = graph.get_state(config)
snapshot.next
Copy
	
('tools',)

We provide the assistance you are requesting.

	
human_response = (
"We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
"It's much more reliable and extensible than simple autonomous agents."
)
human_command = Command(resume={opening_brace}"data": human_response})
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need expert guidance for building an AI agent.
================================= Tool Message =================================
Name: human_assistance
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent.It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you recommend a platform and any tips for getting started?

And lastly, we ask you to search the internet for the latest news about LangGraph

	
user_input = "What's the latest news about LangGraph?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What's the latest news about LangGraph?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.67758125}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6522641}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph - LangChain", "url": "https://www.langchain.com/langgraph", "content": "“As Ally advances its exploration of Generative AI, our tech labs is excited by LangGraph, the new library from LangChain, which is central to our experiments", "score": 0.98559}, {opening_brace}"title": "Evaluating LangGraph Framework : Series 1 | by Jalaj Agrawal", "url": "https://medium.com/@jalajagr/evaluating-langgraph-as-a-multiagent-framework-a-10-dimensional-framework-series-1-c7203b7f4659", "content": ": LangGraph excels with its intuitive graph-based abstraction that allows new developers to build working multi-agent systems within hours.", "score": 0.98196}]
================================== Ai Message ==================================
It looks like LangGraph has been generating some significant buzz in the AI community, especially for its capabilities in building multi-agent systems. Here are a few highlights from the latest news:
1. **LangGraph in Production**: Companies like Klarna and Definely are already using LangGraph to build and optimize their AI systems. Klarna has leveraged LangGraph to enhance their customer support, and Definely has used it to design a multi-agent system to speed up legal workflows.
2. **Integration with UiPath**: LangChain and UiPath have collaborated to advance agentic automation. This partnership includes native support for LangSmith observability in UiPath’s LLM Gateway via OpenTelemetry, which will allow developers to monitor, debug, and evaluate LLM-powered features more effectively.
3. **Intuitive Design**: LangGraph is praised for its intuitive graph-based abstraction, which enables developers to build working multi-agent systems quickly, even if they are new to the field.
4. **Community and Conferences**: LangChain is also hosting an AI Agent Conference called "Interrupt," which could be a great opportunity to learn more about the latest developments and best practices in building AI agents.
If you're considering using LangGraph for your project, these resources and updates might provide valuable insights and support. Would you like more detailed information on any specific aspect of LangGraph?

Morelink image 103

Approval of tool usagelink image 104

Note: We are going to use Sonnet 3.7 for this section, as at the time of writing the post, it is the best model for use with agents, and it is the only one that understands when to call the tools and when not to for this example

We can add a human in the loop to approve the use of tools. We are going to create a chatbot with several tools for performing mathematical operations, so when building the graph we specify where we want to insert the breakpoint (graph_builder.compile(interrupt_before=["tools"], checkpointer=memory))

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.prebuilt import ToolNode, tools_condition
      from langgraph.checkpoint.memory import MemorySaver
      from langchain_core.tools import tool
      from langchain_anthropic import ChatAnthropic
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      memory = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Tools
      @tool
      def multiply(a: int, b: int) -> int:
          """Multiply a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The product of a and b.
          """
          return a * b
      
      @tool
      def add(a: int, b: int) -> int:
          """Adds a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The sum of a and b.
          """
          return a + b
      
      @tool
      def subtract(a: int, b: int) -> int:
          """Subtract b from a.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The difference between a and b.
          """
          return a - b
      
      @tool
      def divide(a: int, b: int) -> float:
          """Divide a by b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The quotient of a and b.
          """
          return a / b
      
      tools_list = [multiply, add, subtract, divide]
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_tools = llm.bind_tools(tools_list)
      
      # Nodes
      def chat_model_node(state: State):
          system_message = "You are a helpful assistant that can use tools to answer questions. Once you have the result of a tool, provide a final answer without calling more tools."
          messages = [SystemMessage(content=system_message)] + state["messages"]
          return {"messages": [llm_with_tools.invoke(messages)]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      tool_node = ToolNode(tools=tools_list)
      graph_builder.add_node("tools", tool_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_conditional_edges("chatbot_node", tools_condition)
      graph_builder.add_edge("tools", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(interrupt_before=["tools"], checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 34

As we can see in the graph, there is an interrupt before using the tools. This means it will stop before using them to ask for our permission.

	
# Input
initial_input = {opening_brace}"messages": HumanMessage(content="Multiply 2 and 3")}
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
================================== Ai Message ==================================
[{'text': "I'll multiply 2 and 3 for you.", 'type': 'text'}, {'id': 'toolu_01QDuind1VBHWtvifELN9SPf', 'input': {'a': 2, 'b': 3}, 'name': 'multiply', 'type': 'tool_use'}]
Tool Calls:
multiply (toolu_01QDuind1VBHWtvifELN9SPf)
Call ID: toolu_01QDuind1VBHWtvifELN9SPf
Args:
a: 2
b: 3
None
{'__interrupt__': ()}

As we can see, the LLM knows it has to use the multiply tool, but the execution is interrupted because it has to wait for a human to authorize the use of the tool.

We can see the state in which the graph has been left.

	
state = graph.get_state(config)
state.next
Copy
	
('tools',)

As we can see, it has remained on the tools node.

We can create a function (not in the graph, but outside the graph, to improve the user experience and help them understand why execution pauses) that asks the user to approve the use of the tool.

We create a new thread_id so that a new state is created.

	
# Input
initial_input = {opening_brace}"messages": HumanMessage(content="Multiply 2 and 3")}
config = {opening_brace}"configurable": {opening_brace}"thread_id": "2"{closing_brace}{closing_brace}
# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
function_name = None
function_args = None
if 'chatbot_node' in event:
for element in event['chatbot_node']['messages'][-1].content:
if element['type'] == 'text':
print(element['text'])
elif element['type'] == 'tool_use':
function_name = element['name']
function_args = element['input']
print(f"The LLM wants to use the tool {function_name} with the arguments {function_args}")
elif '__interrupt__' in event:
pass
else:
print(event)
question = f"Do you approve the use of the tool {function_name} with the arguments {function_args}? (y/n)"
user_approval = input(question)
print(f"{question}: {user_approval}")
if user_approval.lower() == 'y':
print("User approved the use of the tool")
for event in graph.stream(None, config, stream_mode="updates"):
if 'chatbot_node' in event:
for element in event['chatbot_node']['messages'][-1].content:
if isinstance(element, str):
print(element, end="")
elif 'tools' in event:
result = event['tools']['messages'][-1].content
tool_used = event['tools']['messages'][-1].name
print(f"The result of the tool {tool_used} is {result}")
else:
print(event)
Copy
	
I'll multiply 2 and 3 for you.
The LLM wants to use the tool multiply with the arguments {'a': 2, 'b': 3}
Do you approve the use of the tool None with the arguments None? (y/n): y
User approved the use of the tool
The result of the tool multiply is 6
The result of multiplying 2 and 3 is 6.

We can see that it has asked us if we approve the use of the tool for multiplication, we have approved it and the graph has finished execution. Looking at the state of the graph.

	
state = graph.get_state(config)
state.next
Copy
	
()

We see that the next state of the graph is empty, which indicates that the graph execution has finished.

State Modificationlink image 105

Note: We are going to use Sonnet 3.7 for this section, as at the time of writing the post, it is the best model for use with agents, and it is the only one that understands when it needs to call the tools and when it does not for this example

Let's repeat the previous example, but instead of interrupting the graph before the use of a tool, we will interrupt it at the LLM. To do this, when building the graph, we specify that we want to stop at the agent (graph_builder.compile(interrupt_before=["chatbot_node"], checkpointer=memory))

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.prebuilt import ToolNode, tools_condition
      from langgraph.checkpoint.memory import MemorySaver
      from langchain_core.tools import tool
      from langchain_anthropic import ChatAnthropic
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      memory = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Tools
      @tool
      def multiply(a: int, b: int) -> int:
          """Multiply a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The product of a and b.
          """
          return a * b
      
      @tool
      def add(a: int, b: int) -> int:
          """Adds a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The sum of a and b.
          """
          return a + b
      
      @tool
      def subtract(a: int, b: int) -> int:
          """Subtract b from a.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The difference between a and b.
          """
          return a - b
      
      @tool
      def divide(a: int, b: int) -> float:
          """Divide a by b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The quotient of a and b.
          """
          return a / b
      
      tools_list = [multiply, add, subtract, divide]
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_tools = llm.bind_tools(tools_list)
      
      # Nodes
      def chat_model_node(state: State):
          system_message = "You are a helpful assistant that can use tools to answer questions. Once you have the result of a tool, provide a final answer without calling more tools."
          messages = [SystemMessage(content=system_message)] + state["messages"]
          return {"messages": [llm_with_tools.invoke(messages)]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      tool_node = ToolNode(tools=tools_list)
      graph_builder.add_node("tools", tool_node)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_conditional_edges("chatbot_node", tools_condition)
      graph_builder.add_edge("tools", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(interrupt_before=["chatbot_node"], checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 35

We see in the graph representation that there is an interrupt before the execution of chatbot_node, so before the chatbot executes, the execution will be interrupted and we will have to make it continue.

Now we ask for a multiplication again

	
# Input
initial_input = {opening_brace}"messages": HumanMessage(content="Multiply 2 and 3")}
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
{'__interrupt__': ()}

We can see that it has done nothing. If we check the status

	
state = graph.get_state(config)
state.next
Copy
	
('chatbot_node',)

We see that the next node is the chatbot node. Additionally, if we look at its values, we see the message that we have sent.

	
state.values
Copy
	
{'messages': [HumanMessage(content='Multiply 2 and 3', additional_kwargs={}, response_metadata={}, id='08fd6084-ecd2-4156-ab24-00d2d5c26f00')]}

Now we proceed to modify the state by adding a new message

	
graph.update_state(
config,
{opening_brace}"messages": [HumanMessage(content="No, actually multiply 3 and 3!")]}
)
Copy
	
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1f027eb6-6c8b-6b6a-8001-bc0f8942566c'}}

We get the new state

	
new_state = graph.get_state(config)
new_state.next
Copy
	
('chatbot_node',)

The following node is still the chatbot's, but if we now look at the messages

	
new_state.values
Copy
	
{opening_brace}'messages': [HumanMessage(content='Multiply 2 and 3', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}{closing_brace}, id='08fd6084-ecd2-4156-ab24-00d2d5c26f00'),
HumanMessage(content='No, actually multiply 3 and 3!', additional_kwargs={opening_brace}{closing_brace}, response_metadata={opening_brace}{closing_brace}, id='e95394c2-e62e-47d2-b9b2-51eba40f3e22')]}

We see that the new one has been added. So we make it continue the execution.

	
for event in graph.stream(None, config, stream_mode="values"):
event['messages'][-1].pretty_print()
Copy
	
================================ Human Message =================================
No, actually multiply 3 and 3!
================================== Ai Message ==================================
[{'text': "I'll multiply 3 and 3 for you.", 'type': 'text'{closing_brace}, {opening_brace}'id': 'toolu_01UABhLnEdg5ZqxVQTE5pGUx', 'input': {'a': 3, 'b': 3}, 'name': 'multiply', 'type': 'tool_use'{closing_brace}]
Tool Calls:
multiply (toolu_01UABhLnEdg5ZqxVQTE5pGUx)
Call ID: toolu_01UABhLnEdg5ZqxVQTE5pGUx
Args:
a: 3
b: 3
================================= Tool Message =================================
Name: multiply
9

The multiplication of 3 by 3 has been done, which is the state modification we made, and not 2 by 3, which is what we asked for the first time.

This can be useful when we have an agent and want to review that what it does is correct, so we can enter the execution and modify the state

Dynamic breakpointslink image 106

So far we have created static breakpoints through the graph compilation, but we can create dynamic breakpoints using NodeInterrupt. This is useful because execution can be interrupted by logical rules introduced through programming.

These NodeInterrupt allow customizing how the user will be notified of the interruption.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.checkpoint.memory import MemorySaver
      from langgraph.errors import NodeInterrupt
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      memory_saver = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chatbot_function(state: State):
          max_len = 15
          input_message = state["messages"][-1]
      
          # Check len message
          if len(input_message.content) > max_len:
              raise NodeInterrupt(f"Received input is longer than {max_len} characters --> {input_message}")
      
          # Invoke the LLM with the messages
          response = llm.invoke(state["messages"])
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chatbot_function)
      
      # Connecto nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory_saver)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 36

As you can see, we have created an interruption in case the message is long. Let's test it.

	
initial_input = {opening_brace}"messages": HumanMessage(content="Hello, how are you? My name is Máximo")}
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
{'__interrupt__': (Interrupt(value="Received input is longer than 15 characters --> content='Hello, how are you? My name is Máximo' additional_kwargs={} response_metadata={} id='2bdc6d41-0cfe-4d3c-8748-ca7d46fd5a60'", resumable=False, ns=None),)}

Indeed, the interruption has stopped and it has given us the error message that we created.

If we look at the node where it has stopped

	
state = graph.get_state(config)
state.next
Copy
	
('chatbot_node',)

We see that it is stuck at the chatbot node. We can make it continue with the execution again, but it will give us the same error.

	
for event in graph.stream(None, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
{'__interrupt__': (Interrupt(value="Received input is longer than 15 characters --> content='Hello, how are you? My name is Máximo' additional_kwargs={} response_metadata={} id='2bdc6d41-0cfe-4d3c-8748-ca7d46fd5a60'", resumable=False, ns=None),)}

So we have to modify the state

	
graph.update_state(
config,
{opening_brace}"messages": [HumanMessage(content="How are you?")]}
)
Copy
	
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1f027f13-5827-6a18-8001-4209d5a866f0'}}

We revisit the state and its values

	
new_state = graph.get_state(config)
print(f"Siguiente nodo: {new_state.next}")
print("Valores:")
for value in new_state.values["messages"]:
print(f"\t{value.content}")
Copy
	
Siguiente nodo: ('chatbot_node',)
Valores:
Hello, how are you? My name is Máximo
How are you?

The last message is shorter, so we tried to resume the execution of the graph

	
for event in graph.stream(None, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
================================== Ai Message ==================================
Hello Máximo! I'm doing well, thank you for asking. How about you? How can I assist you today?
None

Customization of the statelink image 107

Note: We will be using Sonnet 3.7 for this section, as at the time of writing the post, it is the best model for use with agents, and it is the only one that understands when to call the tools and when not to.

So far, we have relied on a simple state with an input and a list of messages. You can get quite far with this simple state, but if you want to define more complex behavior without relying on the message list, you can add additional fields to the state. Here we are going to see a new scenario, in which the chatbot is using the search tool to find specific information, and forwarding it to a human for review. We will make the chatbot investigate the birthday of an entity. We will add name and birthday as state keys.

First we load the values of the API keys

	
import os
import dotenv
dotenv.load_dotenv()
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
Copy

We create the new state

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
class State(TypedDict):
messages: Annotated[list, add_messages]
name: str
birthday: str
Copy

Adding this information to the state makes it easily accessible by other nodes of the graph (for example, a node that stores or processes the information), as well as the graph's persistence layer.

Now we create the graph

	
from langgraph.graph import StateGraph, START, END
graph_builder = StateGraph(State)
Copy

We define the tool for search

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

Now we create the human assistance tool. In this tool, we will fill in the state keys within our human_assistance tool. This allows a human to review the information before it is stored in the state. We will use Command again, this time to emit a state update from inside our tool.

	
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command, interrupt
@tool
# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
def human_assistance(
name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
"""
Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
After receiving the expert's response, you should provide an elaborated response to the user based on the information received
based on the information received, without calling this tool again.
Args:
query: The query to ask the human expert.
Returns:
The response from the human expert.
"""
human_response = interrupt(
{
"question": "Is this correct?",
"name": name,
"birthday": birthday,
},
)
# If the information is correct, update the state as-is.
if human_response.get("correct", "").lower().startswith("y"):
verified_name = name
verified_birthday = birthday
response = "Correct"
# Otherwise, receive information from the human reviewer.
else:
verified_name = human_response.get("name", name)
verified_birthday = human_response.get("birthday", birthday)
response = f"Made a correction: {human_response}"
# This time we explicitly update the state with a ToolMessage inside
# the tool.
state_update = {
"name": verified_name,
"birthday": verified_birthday,
"messages": [ToolMessage(response, tool_call_id=tool_call_id)],
}
# We return a Command object in the tool to update our state.
return Command(update=state_update)
Copy

We have used ToolMessage which is used to pass the result of executing a tool back to a model and InjectedToolCallId

We create a list of tools

	
tools_list = [search_tool, human_assistance]
Copy

Next, the LLM with the bind_tools and we add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_anthropic import ChatAnthropic
# Create the LLM
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
# Because we will be interrupting during tool execution,
# we disable parallel tool calling to avoid repeating any
# tool invocations when we resume.
assert len(message.tool_calls) <= 1
return {opening_brace}"messages": [message]}
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We add the tool to the graph

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We add the START node to the graph

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We create a checkpointer MemorySaver.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

We compile the graph with the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We represent it graphically

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 37

Let's ask our chatbot to find the "birthday" of the LangGraph library. We will direct the chatbot to the human_assistance tool once it has the required information. The arguments name and birthday are mandatory for the human_assistance tool, so they prompt the chatbot to generate proposals for these fields.

user_input = (
          "Can you look up when LangGraph was released? "
          "When you have the answer, use the human_assistance tool for review."
      )
      config = {"configurable": {"thread_id": "1"}}
      
      events = graph.stream(
          {"messages": [{"role": "user", "content": user_input}]},
          config,
          stream_mode="values",
      )
      for event in events:
          if "messages" in event:
              event["messages"][-1].pretty_print()
      
================================ Human Message =================================
      
      Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
      
Failed to multipart ingest runs: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================== Ai Message ==================================
      
      [{'text': "I'll help you look up when LangGraph was released, and then I'll use the human_assistance tool for review as requested.\n\nFirst, let me search for information about LangGraph\'s release date:", 'type': 'text'}, {'id': 'toolu_011KHWFxYbFnUvGEF6MPt3dE', 'input': {'query': 'LangGraph release date when was LangGraph released'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
      Tool Calls:
        tavily_search_results_json (toolu_011KHWFxYbFnUvGEF6MPt3dE)
       Call ID: toolu_011KHWFxYbFnUvGEF6MPt3dE
        Args:
          query: LangGraph release date when was LangGraph released
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================= Tool Message =================================
      Name: tavily_search_results_json
      
      [{"title": "LangGraph Studio: The first agent IDE | by Bhavik Jikadara - Medium", "url": "https://bhavikjikadara.medium.com/langgraph-studio-the-first-agent-ide-468132628274", "content": "LangGraph, launched in January 2023, is a low-level orchestration framework designed for building controllable and complex agentic applications.", "score": 0.80405265}, {"title": "langgraph - PyPI", "url": "https://pypi.org/project/langgraph/", "content": "langgraph · PyPI\nSkip to main content Switch to mobile version\n\nSearch PyPI  Search\n\nHelp\nSponsors\nLog in\nRegister\n\nMenu\n\nHelp\nSponsors\nLog in\nRegister\n\nSearch PyPI  Search\nlanggraph 0.2.70\npip install langgraph Copy PIP instructions\nLatest versionReleased: Feb 6, 2025\nBuilding stateful, multi-actor applications with LLMs\nNavigation\n\nProject description\nRelease history\nDownload files [...] 0.2.20 Sep 13, 2024\n\n0.2.19 Sep 6, 2024\n\n0.2.18 Sep 6, 2024\n\n0.2.17 Sep 5, 2024\n\n0.2.16 Sep 1, 2024\n\n0.2.15 Aug 30, 2024\n\n0.2.14 Aug 24, 2024\n\n0.2.13 Aug 23, 2024\n\n0.2.12 Aug 22, 2024\n\n0.2.11 Aug 22, 2024\n\n0.2.10 Aug 21, 2024\n\n0.2.9 Aug 21, 2024\n\n0.2.8 Aug 21, 2024\n\n0.2.7 Aug 21, 2024\n\n0.2.7a0 pre-release Aug 21, 2024\n\n0.2.6 Aug 21, 2024\n\n0.2.5 Aug 21, 2024\n\n0.2.5a0 pre-release Aug 20, 2024\n\n0.2.4 Aug 15, 2024\n\n0.2.3 Aug 8, 2024\n\n0.2.2 Aug 7, 2024\n\n0.2.1 Aug 7, 2024\n\n0.2.0 Aug 7, 2024 [...] Download URL: langgraph-0.2.70.tar.gz\nUpload date: Feb 6, 2025\nSize: 129.7 kB\nTags: Source\nUploaded using Trusted Publishing? Yes\nUploaded via: twine/6.1.0 CPython/3.12.8", "score": 0.75659186}]
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================== Ai Message ==================================
      
      [{'text': 'Based on my search, I found that LangGraph was launched in January 2023. However, I noticed some inconsistencies in the information, as one source mentions it was launched in January 2023, while the PyPI page shows a version history starting from 2024.\n\nLet me request human assistance to verify this information:', 'type': 'text'}, {'id': 'toolu_019EopKn8bLi3ksvUVY2Mt5p', 'input': {'name': 'LangGraph', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
      Tool Calls:
        human_assistance (toolu_019EopKn8bLi3ksvUVY2Mt5p)
       Call ID: toolu_019EopKn8bLi3ksvUVY2Mt5p
        Args:
          name: LangGraph
          birthday: January 2023
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      

It has stopped due to the interrupt in the human_assistance tool. In this case, the chatbot, using the search tool, determined that the date of LangGraph is January 2023, but it is not the exact date; it is January 17, 2024, so we can enter it ourselves.

	
human_command = Command(
resume={opening_brace}
"name": "LangGraph",
"birthday": "Jan 17, 2024",
},
)
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
[{'text': 'Based on my search, I found that LangGraph was launched in January 2023. However, I noticed some inconsistencies in the information, as one source mentions it was launched in January 2023, while the PyPI page shows a version history starting from 2024. Let me request human assistance to verify this information:', 'type': 'text'}, {'id': 'toolu_019EopKn8bLi3ksvUVY2Mt5p', 'input': {'name': 'LangGraph', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_019EopKn8bLi3ksvUVY2Mt5p)
Call ID: toolu_019EopKn8bLi3ksvUVY2Mt5p
Args:
name: LangGraph
birthday: January 2023
================================= Tool Message =================================
Name: human_assistance
Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================
Thank you for the expert review and correction! Based on the human expert's feedback, I can now provide you with the accurate information:
LangGraph was released on January 17, 2024, not January 2023 as one of the search results incorrectly stated.
This is an important correction, as it means LangGraph is a relatively recent framework in the LLM orchestration space, having been available for less than a year at this point. LangGraph is developed by LangChain and is designed for building stateful, multi-actor applications with LLMs.
	
snapshot = graph.get_state(config)
{opening_brace}k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
Copy
	
{'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}

Now the date is correct thanks to human intervention to modify the state values

I rewrite all the code to make it easier to understand

	
import os
import dotenv
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langchain_anthropic import ChatAnthropic
dotenv.load_dotenv()
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
# State
class State(TypedDict):
messages: Annotated[list, add_messages]
name: str
birthday: str
# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
@tool
# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
def human_assistance(
name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
"""
Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
After receiving the expert's response, you should provide an elaborated response to the user based on the information received
based on the information received, without calling this tool again.
Args:
query: The query to ask the human expert.
Returns:
The response from the human expert.
"""
human_response = interrupt(
{opening_brace}
"question": "Is this correct?",
"name": name,
"birthday": birthday,
},
)
# If the information is correct, update the state as-is.
if human_response.get("correct", "").lower().startswith("y"):
verified_name = name
verified_birthday = birthday
response = "Correct"
# Otherwise, receive information from the human reviewer.
else:
verified_name = human_response.get("name", name)
verified_birthday = human_response.get("birthday", birthday)
response = f"Made a correction: {human_response}"
# This time we explicitly update the state with a ToolMessage inside
# the tool.
state_update = {opening_brace}
"name": verified_name,
"birthday": verified_birthday,
"messages": [ToolMessage(response, tool_call_id=tool_call_id)],
{closing_brace}
# We return a Command object in the tool to update our state.
return Command(update=state_update)
tools_list = [search_tool, human_assistance]
tool_node = ToolNode(tools=tools_list)
# Create the LLM
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
# Because we will be interrupting during tool execution,
# we disable parallel tool calling to avoid repeating any
# tool invocations when we resume.
assert len(message.tool_calls) <= 1
return {opening_brace}"messages": [message]}
# Graph
graph_builder = StateGraph(State)
# Nodes
graph_builder.add_node("tools", tool_node)
graph_builder.add_node("chatbot_node", chatbot_function)
# Edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
# Checkpointer
memory = MemorySaver()
# Compile
graph = graph_builder.compile(checkpointer=memory)
# Visualize
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
print(f"Error al visualizar el grafo: {e}")
Copy
	
Error al visualizar el grafo: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

Let's ask our chatbot to find the "birthday" of the LangGraph library.

user_input = (
          "Can you look up when LangGraph was released? "
          "When you have the answer, use the human_assistance tool for review."
      )
      config = {"configurable": {"thread_id": "1"}}
      
      events = graph.stream(
          {"messages": [{"role": "user", "content": user_input}]},
          config,
          stream_mode="values",
      )
      for event in events:
          if "messages" in event:
              event["messages"][-1].pretty_print()
      
================================ Human Message =================================
      
      Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
      
Failed to multipart ingest runs: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================== Ai Message ==================================
      
      [{'text': "I'll look up when LangGraph was released and then get human verification of the information.", 'type': 'text'}, {'id': 'toolu_017SLLSEnFQZVdBpj85BKHyy', 'input': {'query': 'when was LangGraph released launch date'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
      Tool Calls:
        tavily_search_results_json (toolu_017SLLSEnFQZVdBpj85BKHyy)
       Call ID: toolu_017SLLSEnFQZVdBpj85BKHyy
        Args:
          query: when was LangGraph released launch date
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================= Tool Message =================================
      Name: tavily_search_results_json
      
      [{"title": "LangChain Introduces LangGraph Studio: The First Agent IDE for ...", "url": "https://www.marktechpost.com/2024/08/03/langchain-introduces-langgraph-studio-the-first-agent-ide-for-visualizing-interacting-with-and-debugging-complex-agentic-applications/", "content": "LangGraph, launched in January 2023, is a highly controllable, low-level orchestration framework for building agentic applications. Since its inception, it has undergone significant improvements, leading to a stable 0.1 release in June. LangGraph features a persistence layer enabling human-in-the-loop interactions and excels at building complex applications requiring domain-specific cognitive architecture.", "score": 0.83742094}, {"title": "LangGraph Studio: The first agent IDE | by Bhavik Jikadara - Medium", "url": "https://bhavikjikadara.medium.com/langgraph-studio-the-first-agent-ide-468132628274", "content": "LangGraph, launched in January 2023, is a low-level orchestration framework designed for building controllable and complex agentic applications. It’s beneficial for creating applications requiring highly domain-specific cognitive architecture and human-in-the-loop interactions. LangGraph is open source, available in Python and JavaScript, and integrates seamlessly with LangSmith, whether or not you use LangChain.\n\nLangGraph: A Comprehensive Guide for Beginners", "score": 0.79369855}]
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
================================== Ai Message ==================================
      
      [{'text': "Based on my search, I found that LangGraph was launched in January 2023. It's described as a low-level orchestration framework for building agentic applications. Since its release, it has seen significant improvements, including a stable 0.1 release in June (presumably 2024).\n\nLet me now get human verification of this information:", 'type': 'text'}, {'id': 'toolu_016h3391yFhtPDhQvwjNgs7W', 'input': {'name': 'Information Verification', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
      Tool Calls:
        human_assistance (toolu_016h3391yFhtPDhQvwjNgs7W)
       Call ID: toolu_016h3391yFhtPDhQvwjNgs7W
        Args:
          name: Information Verification
          birthday: January 2023
      
Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      Failed to send compressed multipart ingest: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      

It has stopped due to the interrupt in the human_assistance tool. In this case, the chatbot, using the search tool, determined that the date of LangGraph is January 2023, but it is not the exact date; it is January 17, 2024, so we can enter it ourselves.

	
human_command = Command(
resume={opening_brace}
"name": "LangGraph",
"birthday": "Jan 17, 2024",
},
)
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
[{'text': "Based on my search, I found that LangGraph was launched in January 2023. It's described as a low-level orchestration framework for building agentic applications. Since its release, it has seen significant improvements, including a stable 0.1 release in June (presumably 2024). Let me now get human verification of this information:", 'type': 'text'}, {'id': 'toolu_016h3391yFhtPDhQvwjNgs7W', 'input': {'name': 'Information Verification', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_016h3391yFhtPDhQvwjNgs7W)
Call ID: toolu_016h3391yFhtPDhQvwjNgs7W
Args:
name: Information Verification
birthday: January 2023
================================= Tool Message =================================
Name: human_assistance
Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================
Thank you for the expert correction! I need to update my response with the accurate information.
LangGraph was actually released on January 17, 2024 - not January 2023 as I initially found in my search results. This is a significant correction, as it means LangGraph is a much more recent framework than the search results indicated.
The expert has provided the specific date (January 17, 2024) for LangGraph's release, making it a fairly new tool in the AI orchestration ecosystem. This timing aligns better with the mention of its stable 0.1 release in June 2024, as this would be about 5 months after its initial launch.
	
snapshot = graph.get_state(config)
{opening_brace}k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
Copy
	
{'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}

Now the date is correct thanks to human intervention to modify the state values

Manual State Updatelink image 108

LangGraph provides a high degree of control over the application state. For example, at any point (even when interrupted), we can manually overwrite a state key using graph.update_state:

Let's update the name of the state to LangGraph (library).

	
graph.update_state(config, {opening_brace}"name": "LangGraph (library)"})
Copy
	
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1f010a5a-8a70-618e-8006-89107653db68'}}

If we now check the state with graph.get_state(config), we will see that the name has been updated.

	
snapshot = graph.get_state(config)
{opening_brace}k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
Copy
	
{'name': 'LangGraph (library)', 'birthday': 'Jan 17, 2024'}

Manual status updates will generate a trace in LangSmith. They can be used to control human in the loop workflows, as can be seen in this guide.

Checkpointslink image 109

In a typical chatbot workflow, the user interacts with the chatbot one or more times to accomplish a task. In the previous sections, we saw how to add memory and a human in the loop to be able to verify our graph state and control future responses.

But, maybe a user wants to start from a previous response and wants to branch to explore a separate outcome. This is useful for agent applications, when a flow fails they can revert to a previous checkpoint and try another strategy.

LangGraph provides this possibility through the checkpoints

First we load the values of the API keys

	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

We create the new state

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
class State(TypedDict):
messages: Annotated[list, add_messages]
Copy

Now we create the graph

	
from langgraph.graph import StateGraph, START, END
graph_builder = StateGraph(State)
Copy

We define the tool for search

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

We create a list of tools

	
tools_list = [search_tool]
Copy

Next, the LLM with the bind_tools and we add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
return {opening_brace}"messages": [message]}
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We add the tool to the graph

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We add the START node to the graph

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We create a checkpointer MemorySaver.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

We compile the graph with the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We represent it graphically

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 38

Let's make our graph take a couple of steps. Each step will be saved in the state history.

We make the first call to the model

	
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
user_input = (
"I'm learning LangGraph. "
"Could you do some research on it for me?"
)
events = graph.stream(
{opening_brace}"messages": [{"role": "user","content": user_input},],},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}, {opening_brace}"title": "langchain-ai/langgraph: Build resilient language agents as graphs.", "url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph — used by Replit, Uber, LinkedIn, GitLab and more — is a low-level orchestration framework for building controllable agents. While langchain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop to reliably handle complex tasks. ``` pip install -U langgraph ```", "score": 0.8884594}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}, {opening_brace}"title": "langchain-ai/langgraph: Build resilient language agents as graphs.", "url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph — used by Replit, Uber, LinkedIn, GitLab and more — is a low-level orchestration framework for building controllable agents. While langchain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop to reliably handle complex tasks. ``` pip install -U langgraph ```", "score": 0.8884594}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial and documentation
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-66-36)5. Documentation: The LangGraph documentation has been revamped, which should make it easier for learners like yourself to understand and use the tool. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-66-37) [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-28) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-29)1. Search for the official LangGraph documentation or website to learn more about its features and how to use it. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-30)2. Look for tutorials or guides specifically focused on building AI agents with LangGraph. [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex,", "score": 0.8775715}, {opening_brace}"title": "Tutorial 1-Getting Started With LangGraph- Building Stateful Multi AI ...", "url": "https://www.youtube.com/watch?v=gqvFmK7LpDo", "content": "and we will also use Lang Smith so let's go ahead and execute this lsmith I hope everybody knows what exactly is so till then I will also go ahead and show you the documentation page of Lang graph so this is what langra is all about right it has python it has it supports JavaScript and all but uh uh if I just go ahead and show you Lang graph tutorials right so here you can see this is the tutorial right and uh not this sorry uh let's see yes yes here you go right in the Lang graph page it", "score": 0.80405265}]
================================== Ai Message ==================================
LangGraph is a powerful library designed for building stateful, multi-agent applications using Large Language Models (LLMs). Here are some key points about LangGraph:
### Overview
- **Purpose**: LangGraph is specifically designed to create complex workflows and state machines to coordinate multiple AI agents or language model interactions. It is particularly useful for developing sophisticated multi-agent systems.
- **Framework**: It is built on top of LangChain, leveraging its components and adding graph-based coordination capabilities.
- **Features**:
- **Customizable Architectures**: Allows you to design and implement custom workflows and state machines.
- **Long-Term Memory**: Supports long-term memory for agents, enabling them to maintain context over time.
- **Human-in-the-Loop**: Facilitates human interaction in the workflow, making it easier to handle complex tasks that require human oversight.
### Getting Started
- **Installation**:
```bash
pip install -U langgraph
```
- **Documentation**: The LangGraph documentation has been revamped to make it easier for learners to understand and use the tool. You can find the official documentation [here](https://langchain-ai.github.io/langgraph/tutorials/introduction/).
### Use Cases
- **Multi-Agent Systems**: Ideal for building systems where multiple AI agents need to interact and coordinate their actions.
- **Complex Task Handling**: Suitable for tasks that require multiple steps and decision-making processes.
- **Custom Workflows**: Enables the creation of custom workflows tailored to specific use cases.
### Tutorials and Resources
- **Official Documentation**: The official LangGraph documentation is a comprehensive resource for learning about its features and usage.
- **Tutorials**: Look for tutorials and guides specifically focused on building AI agents with LangGraph. You can find a tutorial video [here](https://www.youtube.com/watch?v=gqvFmK7LpDo).
### Companies Using LangGraph
- **Replit, Uber, LinkedIn, GitLab, and more**: These companies are using LangGraph to build resilient and controllable language agents.
### Next Steps
1. **Review the Documentation**: Start by going through the official LangGraph documentation to get a deeper understanding of its features and capabilities.
2. **Follow Tutorials**: Watch tutorials and follow step-by-step guides to build your first multi-agent application.
3. **Experiment with Examples**: Try out the examples provided in the documentation to get hands-on experience with LangGraph.
If you have any specific questions or need further assistance, feel free to ask!

And now the second call

	
user_input = (
"Ya that's helpful. Maybe I'll "
"build an autonomous agent with it!"
)
events = graph.stream(
{opening_brace}"messages": [{"role": "user","content": user_input},],},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for building autonomous AI agents
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "How to Build AI Agents with LangGraph: A Step-by-Step Guide", "url": "https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832", "content": "By following these steps, you have successfully created an AI assistant using LangGraph that can calculate solar panel energy savings based on user inputs. This tutorial demonstrates the power of LangGraph in managing complex, multi-step processes and highlights how to leverage advanced AI tools to solve real-world challenges efficiently. Whether you’re developing AI agents for customer support, energy management, or other applications, LangGraph provides the flexibility, scalability, and [...] Step 7: Build the Graph Structure In this step, we construct the graph structure for the AI assistant using LangGraph, which controls how the assistant processes user input, triggers tools, and moves between stages. The graph defines nodes for the core actions (like invoking the assistant and tool) and edges that dictate the flow between these nodes. [...] Now that we have a solid understanding of what LangGraph is and how it enhances AI development, let’s dive into a practical example. In this scenario, we’ll build an AI agent designed to calculate potential energy savings for solar panels based on user input. This agent can be implemented as a lead generation tool on a solar panel seller’s website, where it interacts with potential customers, offering personalized savings estimates. By gathering key data such as monthly electricity costs, this", "score": 0.8576849}, {opening_brace}"title": "Building AI Agents with LangGraph: A Beginner's Guide - YouTube", "url": "https://www.youtube.com/watch?v=assrhPxNdSk", "content": "In this tutorial, we'll break down the fundamentals of building AI agents using LangGraph! Whether you're new to AI development or looking", "score": 0.834852}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial step-by-step
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "How to Build AI Agents with LangGraph: A Step-by-Step Guide", "url": "https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832", "content": "By following these steps, you have successfully created an AI assistant using LangGraph that can calculate solar panel energy savings based on user inputs. This tutorial demonstrates the power of LangGraph in managing complex, multi-step processes and highlights how to leverage advanced AI tools to solve real-world challenges efficiently. Whether you’re developing AI agents for customer support, energy management, or other applications, LangGraph provides the flexibility, scalability, and [...] Step 7: Build the Graph Structure In this step, we construct the graph structure for the AI assistant using LangGraph, which controls how the assistant processes user input, triggers tools, and moves between stages. The graph defines nodes for the core actions (like invoking the assistant and tool) and edges that dictate the flow between these nodes. [...] In this article, we’ll explore how LangGraph transforms AI development and provide a step-by-step guide on how to build your own AI agent using an example that computes energy savings for solar panels. This example will showcase how LangGraph’s unique features can create intelligent, adaptable, and real-world-ready AI systems. What is LangGraph?", "score": 0.86441374}, {opening_brace}"title": "What Is LangGraph and How to Use It? - DataCamp", "url": "https://www.datacamp.com/tutorial/langgraph-tutorial", "content": "Building a Simple LangGraph Application Here’s a step-by-step example of creating a basic chatbot application using LangGraph. Step 1: Define the StateGraph Define a StateGraph object to structure the chatbot as a state machine. The State is a class object defined with a single key messages of type List and uses the add_messages() function to append new messages rather than overwrite them. from typing import Annotated from typing_extensions import TypedDict [...] Getting Started With LangGraph Installation Basic Concepts Building a Simple LangGraph Application Step 1: Define the StateGraph Step 2: Initialize an LLM and add it as a Chatbot node Step 3: Set edges Step 5: Run the chatbot Advanced LangGraph Features Custom node types Edge types Error handling Real-World Applications of LangGraph Chatbots Autonomous agents Multi-Agent systems Workflow automation tools Recommendation systems Personalized learning environments Conclusion", "score": 0.82492816}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for beginners
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Introduction to LangGraph: A Beginner's Guide - Medium", "url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141", "content": "Conclusion LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph. Happy coding! [...] LangGraph is a powerful tool for building stateful, multi-actor applications with Large Language Models (LLMs). It extends the LangChain library, allowing you to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. In this article, we’ll introduce LangGraph, walk you through its basic concepts, and share some insights and common points of confusion for beginners. What is LangGraph?", "score": 0.8793233}, {opening_brace}"title": "LangGraph Tutorial: A Comprehensive Guide for Beginners", "url": "https://blog.futuresmart.ai/langgraph-tutorial-for-beginners", "content": "These examples highlight how LangGraph helps bridge the gap between AI capabilities and the complexities of real-world situations. Conclusion This concludes our LangGraph tutorial! As you've learned, LangGraph enables the creation of AI applications that go beyond simple input-output loops by offering a framework for building stateful, agent-driven systems. You've gained hands-on experience defining graphs, managing state, and incorporating tools. [...] LangGraph, a powerful library within the LangChain ecosystem, provides an elegant solution for building and managing multi-agent LLM applications. By representing workflows as cyclical graphs, LangGraph allows developers to orchestrate the interactions of multiple LLM agents, ensuring smooth communication and efficient execution of complex tasks. [...] LangGraph Tutorial: A Comprehensive Guide for Beginners FutureSmart AI Blog Follow FutureSmart AI Blog Follow LangGraph Tutorial: A Comprehensive Guide for Beginners +1 Rounak Show with 1 co-author ·Oct 1, 2024·12 min read Table of contents Introduction Understanding LangGraph Key Concepts Graph Structures State Management Getting Started with LangGraph Installation Creating a Basic Chatbot in LangGraph", "score": 0.8684817}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for beginners
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Introduction to LangGraph: A Beginner's Guide - Medium", "url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141", "content": "Conclusion LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph. Happy coding! [...] LangGraph is a powerful tool for building stateful, multi-actor applications with Large Language Models (LLMs). It extends the LangChain library, allowing you to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. In this article, we’ll introduce LangGraph, walk you through its basic concepts, and share some insights and common points of confusion for beginners. What is LangGraph?", "score": 0.8793233}, {opening_brace}"title": "LangGraph Tutorial: A Comprehensive Guide for Beginners", "url": "https://blog.futuresmart.ai/langgraph-tutorial-for-beginners", "content": "These examples highlight how LangGraph helps bridge the gap between AI capabilities and the complexities of real-world situations. Conclusion This concludes our LangGraph tutorial! As you've learned, LangGraph enables the creation of AI applications that go beyond simple input-output loops by offering a framework for building stateful, agent-driven systems. You've gained hands-on experience defining graphs, managing state, and incorporating tools. [...] LangGraph, a powerful library within the LangChain ecosystem, provides an elegant solution for building and managing multi-agent LLM applications. By representing workflows as cyclical graphs, LangGraph allows developers to orchestrate the interactions of multiple LLM agents, ensuring smooth communication and efficient execution of complex tasks. [...] LangGraph Tutorial: A Comprehensive Guide for Beginners FutureSmart AI Blog Follow FutureSmart AI Blog Follow LangGraph Tutorial: A Comprehensive Guide for Beginners +1 Rounak Show with 1 co-author ·Oct 1, 2024·12 min read Table of contents Introduction Understanding LangGraph Key Concepts Graph Structures State Management Getting Started with LangGraph Installation Creating a Basic Chatbot in LangGraph", "score": 0.8684817}]
================================== Ai Message ==================================
Building an autonomous AI agent with LangGraph can be a rewarding and exciting project! Here's a step-by-step guide to get you started, based on the tutorials and resources available:
### Step 1: Understand the Basics of LangGraph
LangGraph is a library that extends the capabilities of LangChain, focusing on building stateful, multi-actor applications using Large Language Models (LLMs). It allows you to coordinate multiple LLM agents in complex workflows, using a graph-based approach.
### Step 2: Install LangGraph
First, you need to install the LangGraph library. You can do this using pip:
```bash
pip install -U langgraph
```
### Step 3: Define the Graph Structure
The core of LangGraph is the graph structure, which defines the nodes (actions or states) and edges (transitions between nodes).
#### Example: Solar Panel Energy Savings Calculator
Let's build a simple AI agent that calculates potential energy savings for solar panels based on user input.
1. **Define the StateGraph**:
- **Nodes**: These represent actions or states in your application.
- **Edges**: These define the transitions between nodes.
```python
from langgraph import StateGraph, State, Edge
# Define the nodes
start_node = State(key="start", action="greet_user")
input_node = State(key="input", action="get_user_input")
calculate_node = State(key="calculate", action="calculate_savings")
result_node = State(key="result", action="show_results")
# Define the edges
start_to_input = Edge(from_node=start_node, to_node=input_node)
input_to_calculate = Edge(from_node=input_node, to_node=calculate_node)
calculate_to_result = Edge(from_node=calculate_node, to_node=result_node)
# Create the graph
graph = StateGraph()
graph.add_state(start_node)
graph.add_state(input_node)
graph.add_state(calculate_node)
graph.add_state(result_node)
graph.add_edge(start_to_input)
graph.add_edge(input_to_calculate)
graph.add_edge(calculate_to_result)
```
### Step 4: Define the Actions
Each node in the graph has an associated action. These actions are Python functions that perform specific tasks.
```python
def greet_user(state, context):
return {"message": "Hello! I can help you calculate energy savings for solar panels."}
def get_user_input(state, context):
return {"message": "Please provide your monthly electricity cost in dollars."}
def calculate_savings(state, context):
# Example calculation (you can use real data and more complex logic here)
monthly_cost = float(input("Enter your monthly electricity cost: "))
savings_per_kWh = 0.10 # Example savings rate
annual_savings = monthly_cost * 12 * savings_per_kWh
return {"savings": annual_savings}
def show_results(state, context):
annual_savings = context.get("savings")
return {"message": f"Your annual savings with solar panels could be ${annual_savings:.2f}."}
```
### Step 5: Run the Graph
Finally, you can run the graph to see how the agent processes user input and performs the calculations.
```python
# Initialize the graph and run it
context = {opening_brace}{closing_brace}
current_node = start_node
while current_node:
action_result = current_node.action(current_node, context)
print(action_result["message"])
if "savings" in action_result:
context["savings"] = action_result["savings"]
current_node = graph.get_next_node(current_node, action_result)
```
### Step 6: Enhance with Advanced Features
Once you have the basic structure in place, you can enhance your agent with advanced features such as:
- **Long-term Memory**: Use external storage (e.g., Zep) to remember user conversations and preferences.
- **Conditional Edges**: Define conditions for transitions between nodes to handle different scenarios.
- **Human-in-the-Loop**: Allow human intervention for complex tasks or error handling.
### Additional Resources
- **Official Documentation**: [LangGraph Documentation](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
- **Comprehensive Guide**: [LangGraph Tutorial for Beginners](https://blog.futuresmart.ai/langgraph-tutorial-for-beginners)
- **Example Project**: [Building AI Agents with LangGraph](https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832)
### Conclusion
By following these steps, you can build a robust and flexible AI agent using LangGraph. Start with simple examples and gradually add more complex features to create powerful, stateful, and multi-actor applications. Happy coding!

Now that we have made two calls to the model, let's check the status history.

	
to_replay = None
for state in graph.get_state_history(config):
print(f"Num Messages: {len(state.values["messages"])}, Next: {state.next}, checkpoint id = {state.config["configurable"]['checkpoint_id']}")
print("-" * 80)
# Get state when first iteracction us done
if len(state.next) == 0:
to_replay = state
Copy
	
Num Messages: 24, Next: (), checkpoint id = 1f027f2f-e5b4-6c84-8018-9fcb33b5f397
--------------------------------------------------------------------------------
Num Messages: 23, Next: ('chatbot_node',), checkpoint id = 1f027f2f-e414-6b0e-8017-3ad465b70767
--------------------------------------------------------------------------------
Num Messages: 22, Next: ('tools',), checkpoint id = 1f027f2f-d382-6692-8016-fcfaf9c9a9f7
--------------------------------------------------------------------------------
Num Messages: 21, Next: ('chatbot_node',), checkpoint id = 1f027f2f-d1cf-6930-8015-f64aa0e6f750
--------------------------------------------------------------------------------
Num Messages: 20, Next: ('tools',), checkpoint id = 1f027f2f-bca9-6164-8014-86452cb10d83
--------------------------------------------------------------------------------
Num Messages: 19, Next: ('chatbot_node',), checkpoint id = 1f027f2f-bac1-6d24-8013-b539f3e4cedb
--------------------------------------------------------------------------------
Num Messages: 18, Next: ('tools',), checkpoint id = 1f027f2f-aa0e-69fa-8012-4ca2d9109f4e
--------------------------------------------------------------------------------
Num Messages: 17, Next: ('chatbot_node',), checkpoint id = 1f027f2f-a861-62c4-8011-5707badab130
--------------------------------------------------------------------------------
Num Messages: 16, Next: ('tools',), checkpoint id = 1f027f2f-93cf-6112-8010-ee536e76cdf7
--------------------------------------------------------------------------------
Num Messages: 15, Next: ('chatbot_node',), checkpoint id = 1f027f2f-91f5-63fa-800f-6ff45b0ebf86
--------------------------------------------------------------------------------
Num Messages: 14, Next: ('tools',), checkpoint id = 1f027f2f-7e07-6190-800e-e0269b0cb0f4
--------------------------------------------------------------------------------
Num Messages: 13, Next: ('chatbot_node',), checkpoint id = 1f027f2f-7bf9-62a4-800d-bd2bf25381ac
--------------------------------------------------------------------------------
Num Messages: 12, Next: ('tools',), checkpoint id = 1f027f2f-639f-6172-800c-e54c8b1b1f4a
--------------------------------------------------------------------------------
Num Messages: 11, Next: ('chatbot_node',), checkpoint id = 1f027f2f-621b-6972-800b-184a824ce9cb
--------------------------------------------------------------------------------
Num Messages: 10, Next: ('tools',), checkpoint id = 1f027f2f-56df-66a8-800a-d56ee9317382
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f2f-5546-60d0-8009-41ee7c932b49
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f2f-5542-6ff2-8008-e2f4e8278c23
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f027f2c-8873-61d6-8007-8a1c60438002
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f027f2c-8504-663a-8006-517227b123b6
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('tools',), checkpoint id = 1f027f2c-75dc-6248-8005-e198dd299848
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f027f2c-7448-69d6-8004-e3c6d5c4c5a4
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f027f2c-627b-6f6e-8003-22208fac7c89
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f027f2c-6122-6190-8002-b745c42a724e
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f027f2c-4c4c-6720-8001-8a1c73b894c1
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f027f2c-4a91-6278-8000-56b65f6d77cd
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f027f2c-4a8d-6a1a-bfff-2f7cbde97290
--------------------------------------------------------------------------------

We have saved in to_replay the state of the graph when it gave us the first response, just before introducing the second message. We can revert to a past state and continue the flow from there.

The checkpoint configuration contains the checkpoint_id, which is a timestamp of the flow. We can check it to verify that we are in the state we want to be in.

	
print(to_replay.config)
Copy
	
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f027f2c-8873-61d6-8007-8a1c60438002'}}

If we look at the list of states from before, we see that the ID matches the moment of introducing the second message

Giving this checkpoint_id to LangGraph loads the state at that moment in the flow. So we create a new message and pass it to the graph

	
user_input = (
"Thanks"
)
# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
events = graph.stream({"messages": [{"role": "user","content": user_input},],},
to_replay.config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Thanks
================================== Ai Message ==================================
You're welcome! If you have any more questions about LangGraph or any other topics, feel free to ask. Happy learning! 🚀
	
for state in graph.get_state_history(config):
print(f"Num Messages: {len(state.values["messages"])}, Next: {state.next}, checkpoint id = {state.config["configurable"]['checkpoint_id']}")
print("-" * 80)
Copy
	
Num Messages: 10, Next: (), checkpoint id = 1f027f43-71ae-67e0-800a-d84a557441fc
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f43-5b1f-6ad8-8009-34f409789bc4
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f43-5b1b-68a2-8008-fbbcbd1c175e
--------------------------------------------------------------------------------
Num Messages: 24, Next: (), checkpoint id = 1f027f2f-e5b4-6c84-8018-9fcb33b5f397
--------------------------------------------------------------------------------
Num Messages: 23, Next: ('chatbot_node',), checkpoint id = 1f027f2f-e414-6b0e-8017-3ad465b70767
--------------------------------------------------------------------------------
Num Messages: 22, Next: ('tools',), checkpoint id = 1f027f2f-d382-6692-8016-fcfaf9c9a9f7
--------------------------------------------------------------------------------
Num Messages: 21, Next: ('chatbot_node',), checkpoint id = 1f027f2f-d1cf-6930-8015-f64aa0e6f750
--------------------------------------------------------------------------------
Num Messages: 20, Next: ('tools',), checkpoint id = 1f027f2f-bca9-6164-8014-86452cb10d83
--------------------------------------------------------------------------------
Num Messages: 19, Next: ('chatbot_node',), checkpoint id = 1f027f2f-bac1-6d24-8013-b539f3e4cedb
--------------------------------------------------------------------------------
Num Messages: 18, Next: ('tools',), checkpoint id = 1f027f2f-aa0e-69fa-8012-4ca2d9109f4e
--------------------------------------------------------------------------------
Num Messages: 17, Next: ('chatbot_node',), checkpoint id = 1f027f2f-a861-62c4-8011-5707badab130
--------------------------------------------------------------------------------
Num Messages: 16, Next: ('tools',), checkpoint id = 1f027f2f-93cf-6112-8010-ee536e76cdf7
--------------------------------------------------------------------------------
Num Messages: 15, Next: ('chatbot_node',), checkpoint id = 1f027f2f-91f5-63fa-800f-6ff45b0ebf86
--------------------------------------------------------------------------------
Num Messages: 14, Next: ('tools',), checkpoint id = 1f027f2f-7e07-6190-800e-e0269b0cb0f4
--------------------------------------------------------------------------------
Num Messages: 13, Next: ('chatbot_node',), checkpoint id = 1f027f2f-7bf9-62a4-800d-bd2bf25381ac
--------------------------------------------------------------------------------
Num Messages: 12, Next: ('tools',), checkpoint id = 1f027f2f-639f-6172-800c-e54c8b1b1f4a
--------------------------------------------------------------------------------
Num Messages: 11, Next: ('chatbot_node',), checkpoint id = 1f027f2f-621b-6972-800b-184a824ce9cb
--------------------------------------------------------------------------------
Num Messages: 10, Next: ('tools',), checkpoint id = 1f027f2f-56df-66a8-800a-d56ee9317382
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f2f-5546-60d0-8009-41ee7c932b49
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f2f-5542-6ff2-8008-e2f4e8278c23
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f027f2c-8873-61d6-8007-8a1c60438002
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f027f2c-8504-663a-8006-517227b123b6
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('tools',), checkpoint id = 1f027f2c-75dc-6248-8005-e198dd299848
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f027f2c-7448-69d6-8004-e3c6d5c4c5a4
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f027f2c-627b-6f6e-8003-22208fac7c89
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f027f2c-6122-6190-8002-b745c42a724e
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f027f2c-4c4c-6720-8001-8a1c73b894c1
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f027f2c-4a91-6278-8000-56b65f6d77cd
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f027f2c-4a8d-6a1a-bfff-2f7cbde97290
--------------------------------------------------------------------------------

We can see in the history that the graph executed everything we did first, but then it overwrote the history and ran from an earlier point.

I rewrite the entire graph together

	
import os
import dotenv
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
from IPython.display import Image, display
class State(TypedDict):
messages: Annotated[list, add_messages]
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [search_tool]
tool_node = ToolNode(tools=tools_list)
# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
repo_id=MODEL,
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
message = llm_with_tools.invoke(state["messages"])
return {opening_brace}"messages": [message]}
# Create the graph
graph_builder = StateGraph(State)
# Add nodes
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)
graph_builder.add_edge("tools", "chatbot_node")
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
# Add checkpointer
memory = MemorySaver()
# Compile
graph = graph_builder.compile(checkpointer=memory)
# Visualize
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
print(f"Error al visualizar el grafo: {e}")
Copy
	
Error al visualizar el grafo: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

We make the first call to the model

	
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
user_input = (
"I'm learning LangGraph. "
"Could you do some research on it for me?"
)
events = graph.stream(
{opening_brace}"messages": [{"role": "user","content": user_input},],},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "What is LangGraph? - IBM", "url": "https://www.ibm.com/think/topics/langgraph", "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. It provides a set of tools and libraries that enable users to create, run and optimize large language models (LLMs) in a scalable and efficient manner. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. [...] Agent systems: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles or video games. LLM applications: By using LangGraph’s capabilities, developers can build more sophisticated AI models that learn and improve over time. Norwegian Cruise Line uses LangGraph to compile, construct and refine guest-facing AI solutions. This capability allows for improved and personalized guest experiences. [...] By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without slowing down or sacrificing efficiency. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback. In the world of LLMs, this process is referred to as reflection.", "score": 0.9353998}, {opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "What is LangGraph? - IBM", "url": "https://www.ibm.com/think/topics/langgraph", "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. It provides a set of tools and libraries that enable users to create, run and optimize large language models (LLMs) in a scalable and efficient manner. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. [...] Agent systems: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles or video games. LLM applications: By using LangGraph’s capabilities, developers can build more sophisticated AI models that learn and improve over time. Norwegian Cruise Line uses LangGraph to compile, construct and refine guest-facing AI solutions. This capability allows for improved and personalized guest experiences. [...] By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without slowing down or sacrificing efficiency. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback. In the world of LLMs, this process is referred to as reflection.", "score": 0.9353998}, {opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}]
================================== Ai Message ==================================
LangGraph is an open-source AI agent framework developed by LangChain, designed to build, deploy, and manage complex generative AI agent workflows. Here are some key points about LangGraph:
### Overview
- **Purpose**: LangGraph is aimed at creating, running, and optimizing large language models (LLMs) in a scalable and efficient manner.
- **Graph-Based Architecture**: It uses graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow.
### Features
- **Agent Systems**: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles, or video games.
- **LLM Applications**: Developers can build more sophisticated AI models that learn and improve over time. For example, Norwegian Cruise Line uses LangGraph to compile, construct, and refine guest-facing AI solutions, enhancing personalized guest experiences.
- **Scalability**: By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without sacrificing efficiency.
- **Enhanced Decision-Making**: LangGraph uses AI agents to analyze their past actions and feedback, a process referred to as "reflection" in the context of LLMs.
### Developer Resources
- **Quickstart Guide**: The LangGraph Quickstart guide on GitHub provides a detailed introduction to building stateful multi-agent applications using language models. It covers tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions.
- **Built on LangChain**: LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. This makes it particularly useful for developing more complex, stateful, multi-actor applications with LLMs.
### Further Reading
- **What is LangGraph? - IBM**: [Link](https://www.ibm.com/think/topics/langgraph)
- **LangGraph Quickstart - GitHub Pages**: [Link](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
These resources should provide a solid foundation for understanding and getting started with LangGraph. If you have any specific questions or need further details, feel free to ask!

And now the second call

	
user_input = (
"Ya that's helpful. Maybe I'll "
"build an autonomous agent with it!"
)
events = graph.stream(
{opening_brace}"messages": [{"role": "user","content": user_input},],},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================
That sounds like an exciting project! Building an autonomous agent using LangGraph can be a rewarding experience. Here are some steps and tips to help you get started:
### 1. **Understand the Basics of LangGraph**
- **Read the Documentation**: Start with the official LangGraph documentation and quickstart guide. This will give you a solid understanding of the framework's capabilities and how to use its tools.
- **Quickstart Guide**: [LangGraph Quickstart - GitHub Pages](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
### 2. **Set Up Your Development Environment**
- **Install LangChain and LangGraph**: Ensure you have the necessary dependencies installed. LangGraph is built on top of LangChain, so you'll need to set up both.
```bash
pip install langchain langgraph
```
### 3. **Define Your Agent's Objectives**
- **Identify the Use Case**: What specific tasks do you want your autonomous agent to perform? This could be anything from navigating a virtual environment, responding to user queries, or managing a robotic system.
- **Define the State and Actions**: Determine the states your agent can be in and the actions it can take. This will help you design the state machine and workflows.
### 4. **Design the Graph-Based Workflow**
- **Create Nodes and Edges**: In LangGraph, you'll define nodes (agents or components) and edges (interactions or transitions). Each node can represent a different part of your agent's functionality.
- **Define State Transitions**: Use the graph-based architecture to define how the agent transitions between different states based on actions and events.
### 5. **Implement the Agent**
- **Write the Code**: Start coding your agent using the LangGraph library. You can use the provided tools to create and manage the agent's workflows.
- **Example**: Here’s a simple example to get you started:
```python
from langgraph import AgentGraph, Node, Edge
# Define nodes
node1 = Node("Sensor", process=sensor_process)
node2 = Node("Decision", process=decision_process)
node3 = Node("Actuator", process=actuator_process)
# Define edges
edge1 = Edge(node1, node2)
edge2 = Edge(node2, node3)
# Create the agent graph
agent_graph = AgentGraph()
agent_graph.add_node(node1)
agent_graph.add_node(node2)
agent_graph.add_node(node3)
agent_graph.add_edge(edge1)
agent_graph.add_edge(edge2)
# Run the graph
agent_graph.run()
```
### 6. **Test and Iterate**
- **Run Simulations**: Test your agent in a simulated environment to see how it performs. Use this to identify and fix any issues.
- **Refine the Model**: Based on the test results, refine your agent's model and workflows. You can add more nodes, edges, or improve the decision-making processes.
### 7. **Deploy and Monitor**
- **Deploy the Agent**: Once you are satisfied with the performance, you can deploy your agent in the real world or a production environment.
- **Monitor and Maintain**: Continuously monitor the agent's performance and make adjustments as needed. Use feedback loops to improve the agent over time.
### 8. **Community and Support**
- **Join the Community**: Engage with the LangChain and LangGraph community. You can find support, share ideas, and get feedback from other developers.
- **GitHub**: [LangGraph GitHub](https://github.com/langchain-ai/langgraph)
- **Forums and Discussion Boards**: Check out forums and discussion boards related to LangGraph and LangChain.
### Additional Resources
- **Tutorials and Examples**: Look for tutorials and example projects to get more hands-on experience.
- **Research Papers and Articles**: Read research papers and articles to deepen your understanding of AI agent design and graph-based architectures.
Good luck with your project! If you have any specific questions or need further guidance, feel free to ask.

We see the status history

	
to_replay = None
for state in graph.get_state_history(config):
print(f"Num Messages: {len(state.values["messages"])}, Next: {state.next}, checkpoint id = {state.config["configurable"]['checkpoint_id']}")
print("-" * 80)
# Get state when first iteracction us done
if len(state.next) == 0:
to_replay = state
Copy
	
Num Messages: 8, Next: (), checkpoint id = 1f03263e-a96c-6446-8008-d2c11df0b6cb
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263d-7a35-6660-8007-a37d4b584c88
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263d-7a32-624e-8006-6509bbf32ebe
--------------------------------------------------------------------------------
Num Messages: 6, Next: (), checkpoint id = 1f03263d-7a1a-6f36-8005-f10b5d83f22c
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f03263c-c53f-6666-8004-c6d35868dd73
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f03263c-b14b-68f8-8003-28558fa38dbc
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f03263c-a66b-6276-8002-2dc89fca4d99
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f03263c-8c7c-68ec-8001-fb8a9aa300b0
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f03263c-6d06-68d2-8000-ced2e7b8538f
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f03263c-6cdb-63e4-bfff-c644b57cee28
--------------------------------------------------------------------------------
	
print(to_replay.config)
Copy
	
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f03263d-7a1a-6f36-8005-f10b5d83f22c'}}

Giving this checkpoint_id to LangGraph loads the state at that point in the flow. So we create a new message and pass it to the graph.

	
user_input = (
"Thanks"
)
# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
events = graph.stream({"messages": [{"role": "user","content": user_input},],},
to_replay.config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Thanks
================================== Ai Message ==================================
You're welcome! If you have any more questions about LangGraph or any other topic, feel free to reach out. Happy learning! 😊
	
for state in graph.get_state_history(config):
print(f"Num Messages: {len(state.values["messages"])}, Next: {state.next}, checkpoint id = {state.config["configurable"]['checkpoint_id']}")
print("-" * 80)
Copy
	
Num Messages: 8, Next: (), checkpoint id = 1f03263f-fcb9-63a0-8008-e8c4a3fb44f9
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263f-eb3b-663c-8007-72da4d16bf64
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263f-eb36-6ac4-8006-a2333805d5d6
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f03263e-a96c-6446-8008-d2c11df0b6cb
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263d-7a35-6660-8007-a37d4b584c88
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263d-7a32-624e-8006-6509bbf32ebe
--------------------------------------------------------------------------------
Num Messages: 6, Next: (), checkpoint id = 1f03263d-7a1a-6f36-8005-f10b5d83f22c
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f03263c-c53f-6666-8004-c6d35868dd73
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f03263c-b14b-68f8-8003-28558fa38dbc
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f03263c-a66b-6276-8002-2dc89fca4d99
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f03263c-8c7c-68ec-8001-fb8a9aa300b0
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f03263c-6d06-68d2-8000-ced2e7b8538f
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f03263c-6cdb-63e4-bfff-c644b57cee28
--------------------------------------------------------------------------------

Continue reading

Last posts -->

Have you seen these projects?

Horeca chatbot

Horeca chatbot Horeca chatbot
Python
LangChain
PostgreSQL
PGVector
React
Kubernetes
Docker
GitHub Actions

Chatbot conversational for cooks of hotels and restaurants. A cook, kitchen manager or room service of a hotel or restaurant can talk to the chatbot to get information about recipes and menus. But it also implements agents, with which it can edit or create new recipes or menus

Subtify

Subtify Subtify
Python
Whisper
Spaces

Subtitle generator for videos in the language you want. Also, it puts a different color subtitle to each person

View all projects -->

Do you want to apply AI in your project? Contact me!

Do you want to improve with these tips?

Last tips -->

Use this locally

Hugging Face spaces allow us to run models with very simple demos, but what if the demo breaks? Or if the user deletes it? That's why I've created docker containers with some interesting spaces, to be able to use them locally, whatever happens. In fact, if you click on any project view button, it may take you to a space that doesn't work.

Flow edit

Flow edit Flow edit

FLUX.1-RealismLora

FLUX.1-RealismLora FLUX.1-RealismLora
View all containers -->

Do you want to apply AI in your project? Contact me!

Do you want to train your model with these datasets?

short-jokes-dataset

Dataset with jokes in English

opus100

Dataset with translations from English to Spanish

netflix_titles

Dataset with Netflix movies and series

View more datasets -->