LangGraph: A revolution in your AI agents


LangGraph

LangGraph is a low-level orchestration framework for building controllable agents.

While LangChain provides integrations and components to speed up the development of LLM applications, the LangGraph library enables agent orchestration, offering customizable architectures, long-term memory and human-in-the-loop to reliably handle complex tasks.

In this post we are going to disable LangSmith, a graph tracing and debugging tool. We disable it to avoid adding more complexity to the post and to focus solely on LangGraph.
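Disabling it comes down to a single environment variable, which we will also set in the code later on:

import os

# Turn off LangSmith tracing so only LangGraph is in play
os.environ["LANGCHAIN_TRACING_V2"] = "false"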

How does LangGraph work?

LangGraph is built on three components:

  • Nodes: They represent the application's units of processing, such as calling an LLM or a tool. They are Python functions that run when the node is called. They:
    • Take the state as input
    • Perform some operation
    • Return the updated state
  • Edges: They represent the transitions between nodes. They define the logic of how the graph executes, that is, which node runs after another. They can be:
    • Direct: they go from one node to another
    • Conditional: they depend on a condition
  • State: It represents the application's state, i.e. it contains all the information the application needs. It is maintained throughout the application's execution. It is defined by the user, so think carefully about what to store in it.

LangGraph concept

Every LangGraph graph starts at a START node and ends at an END node.
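Putting the three pieces together, here is a minimal sketch of a complete graph before we dive in (installation is covered next); the state key and node name are purely illustrative:

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

# State: the data carried through the graph
class MyState(TypedDict):
    counter: int

# Node: a plain Python function that takes the state and returns an update
def increment(state: MyState):
    return {"counter": state["counter"] + 1}

builder = StateGraph(MyState)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")  # edge: START -> increment
builder.add_edge("increment", END)    # edge: increment -> END

graph = builder.compile()
print(graph.invoke({"counter": 0}))   # {'counter': 1}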

Installing LangGraph

To install LangGraph you can use pip:

pip install -U langgraph

or install it from conda-forge:

conda install langgraph -c conda-forge

Installing the Hugging Face and Anthropic modules

We are going to use a language model from Hugging Face, so we need to install its LangChain integration package.

pip install langchain-huggingface

For one part we will use Sonnet 3.7 (we will explain why later), so we also install the Anthropic package.

pip install langchain_anthropic

Hugging Face API KEY

We are going to use Qwen/Qwen2.5-72B-Instruct through Hugging Face Inference Endpoints, so we need an API KEY.

To use Hugging Face Inference Endpoints, the first thing you need is a HuggingFace account. Once you have one, go to Access tokens in your profile settings and generate a new token.

You have to give it a name. In my case I will call it langgraph and enable the Make calls to inference providers permission. It will create a token that we need to copy.

To manage the token, we create a file called .env in the same path we are working in, and paste the copied token into it like this:

HUGGINGFACE_LANGGRAPH="hf_...."

Now, to read the token, we need to have dotenv installed, which we install with

pip install python-dotenv

We run the following

import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

Now that we have a token, we create a client. For that we need the huggingface_hub library installed; we install it with conda or pip.

pip install --upgrade huggingface_hub

or

conda install -c conda-forge huggingface_hub

Now we have to choose which model to use. You can see the available models on the Supported models page of the Hugging Face Inference Endpoints documentation.

We are going to use Qwen2.5-72B-Instruct, which is a very good model.

MODEL = "Qwen/Qwen2.5-72B-Instruct"

Now we can create the client

from huggingface_hub import InferenceClient

client = InferenceClient(api_key=HUGGINGFACE_TOKEN, model=MODEL)
client
<InferenceClient(model='Qwen/Qwen2.5-72B-Instruct', timeout=None)>

Let's run a quick test to check that it works

message = [
    {"role": "user", "content": "Hola, qué tal?"}
]
completion = client.chat.completions.create(
    messages=message,
    temperature=0.5,
    max_tokens=1024,
    top_p=0.7,
    stream=False,
)
response = completion.choices[0].message.content
print(response)
¡Hola! Estoy bien, gracias por preguntar. ¿Cómo estás tú? ¿En qué puedo ayudarte hoy?
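Although we passed stream=False, the same client can also stream tokens. A hypothetical variant (the chunk fields follow huggingface_hub's chat-completion streaming interface):

stream = client.chat.completions.create(
    messages=message,
    max_tokens=1024,
    stream=True,  # yield partial chunks instead of one final response
)
for chunk in stream:
    # Each chunk carries the newly generated piece of text
    print(chunk.choices[0].delta.content or "", end="")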

Anthropic API KEY

Creating a basic chatbot

We are going to build a simple chatbot using LangGraph. This chatbot will respond directly to the user's messages. Although simple, it will show the basic concepts of building graphs with LangGraph.

As its name suggests, LangGraph is a library for working with graphs, so we start by creating a StateGraph.

A StateGraph defines the structure of our chatbot as a state machine. We will add nodes to represent the LLMs, tools and functions (the LLMs will be able to use those tools and functions), and edges to specify how the bot should transition between those nodes.

So we start by creating a StateGraph, which needs a State class to manage the graph's state. Since we are building a simple chatbot for now, we only need to keep a list of messages in the state.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

The add_messages function merges two lists of messages.

New message lists will arrive and be merged into the existing one. Each message carries an ID, and merging happens by that ID. This ensures messages are only appended, never replaced, unless a new message has the same ID as an existing one, in which case it replaces it.

add_messages is a reducer function: a function in charge of updating the state.
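We can see this merge-by-ID behavior by calling add_messages directly, outside any graph (the IDs here are set by hand just for illustration):

from langgraph.graph.message import add_messages
from langchain_core.messages import AIMessage, HumanMessage

existing = [HumanMessage(content="Hello", id="1")]
new = [
    HumanMessage(content="Hello again", id="1"),       # same ID: replaces message "1"
    AIMessage(content="Hi! How can I help?", id="2"),  # new ID: appended
]

for m in add_messages(existing, new):
    print(m.id, "->", m.content)
# 1 -> Hello again
# 2 -> Hi! How can I help?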

The graph_builder we have created receives a State and returns a new State, updating the list of messages.

Concept

When defining a graph, the first step is to define its State. The State includes the graph's schema and the reducer functions that handle state updates.

In our example, State is of type TypedDict (a typed dictionary) with one key: messages.

add_messages is a reducer function used to append new messages to the list instead of overwriting it. If a state key has no reducer function, each incoming value for that key overwrites the previous one.

add_messages is a reducer function provided by langgraph, but we will also be able to create our own.

Now let's add the chatbot node to the graph. Nodes represent units of work, and they are usually regular Python functions.

We add a node with the add_node method, which receives the node's name and the function it will execute.

So we will create an LLM with HuggingFace, then create a LangChain chat model that wraps it. Once the chat model is defined, we define the function to be executed in our graph's node. That function will call the chat model and return the result.

Finally, we add a node with the chatbot function to the graph

from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Define the chatbot function
def chatbot_function(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot_node", chatbot_function)
<langgraph.graph.state.StateGraph at 0x130548440>

We used ChatHuggingFace, which is a chat of type BaseChatModel, LangChain's base chat type. Once we created the BaseChatModel, we wrote the chatbot_function that runs when the node executes. Finally, we created the chatbot_node node and told it to execute chatbot_function.

Note

The node function chatbot_function takes the State as input and returns a dictionary containing an update to the list under the messages key. This is the basic pattern for all LangGraph node functions.

Our graph's reducer function, add_messages, will append the llm's response messages to whatever messages are already in the state.

Next, we add an entry point. This tells our graph where to start its work each time we run it.

from langgraph.graph import START

graph_builder.add_edge(START, "chatbot_node")
<langgraph.graph.state.StateGraph at 0x130548440>

Similarly, we add a finish point. This tells the graph that whenever this node runs, it can end its work.

from langgraph.graph import END

graph_builder.add_edge("chatbot_node", END)
<langgraph.graph.state.StateGraph at 0x130548440>

We have imported START and END, which can be found in langgraph's constants and are the first and last nodes of our graph.

They are normally virtual nodes.

Finally, we have to compile our graph. To do so, we use the graph builder's compile() method. This creates a CompiledGraph that we can use to run our application.

graph = graph_builder.compile()

We can visualize the graph using the get_graph method and one of the "draw" methods, such as draw_ascii or draw_mermaid_png. Each drawing method requires additional dependencies.

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")
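If rendering the PNG is not possible in your environment, a text-only alternative is draw_ascii, which, as far as I know, needs the optional grandalf package:

# Hypothetical text rendering of the same graph (pip install grandalf)
print(graph.get_graph().draw_ascii())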

Now we can try the chatbot!

Tip

In the next code block you can exit the chat loop at any time by typing quit, exit or q.

# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{value['messages'][-1].content}")

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
            break
        stream_graph_updates(user_input)
    except Exception:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
User: Hello
Assistant: Hello! It's nice to meet you. How can I assist you today? Whether you have questions, need information, or just want to chat, I'm here to help!
User: How are you doing?
Assistant: I'm just a computer program, so I don't have feelings, but I'm here and ready to help you with any questions or tasks you have! How can I assist you today?
User: Me well, I'm making a post about LangGraph, what do you think?
Assistant: LangGraph is an intriguing topic, especially if you're delving into the realm of graph-based models and their applications in natural language processing (NLP). LangGraph, as I understand, is a framework or tool that leverages graph theory to improve or provide a new perspective on NLP tasks such as text classification, information extraction, and semantic analysis. By representing textual information as graphs (nodes for entities and edges for relationships), it can offer a more nuanced understanding of the context and semantics in language data.
If you're making a post about it, here are a few points you might consider:
1. **Introduction to LangGraph**: Start with a brief explanation of what LangGraph is and its core principles. How does it model language or text differently compared to traditional NLP approaches? What unique advantages does it offer by using graph-based methods?
2. **Applications of LangGraph**: Discuss some of the key applications where LangGraph has been or can be applied. This could include improving the accuracy of sentiment analysis, enhancing machine translation, or optimizing chatbot responses to be more contextually aware.
3. **Technical Innovations**: Highlight any technical innovations or advancements that LangGraph brings to the table. This could be about new algorithms, more efficient data structures, or novel ways of training models on graph data.
4. **Challenges and Limitations**: It's also important to address the challenges and limitations of using graph-based methods in NLP. Performance, scalability, and the current state of the technology can be discussed here.
5. **Future Prospects**: Wrap up with a look into the future of LangGraph and graph-based NLP in general. What are the upcoming trends, potential areas of growth, and how might these tools start impacting broader technology landscapes?
Each section can help frame your post in a way that's informative and engaging for your audience, whether they're technical experts or casual readers looking for an introduction to this intriguing area of NLP.
User: q
Assistant: Goodbye!

Congratulations! You have built your first chatbot using LangGraph. This bot can hold a basic conversation by taking user input and generating responses with the LLM we defined.

So far we wrote the code bit by bit, which may not have left things very clear. We did it that way to explain each part. Now that each part no longer needs explaining, we write it again, grouped differently so it is easier to read at a glance.

from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login

from IPython.display import Image, display

import os
os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

# State
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Function
def chatbot_function(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Start to build the graph
graph_builder = StateGraph(State)

# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)

# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

# Display the graph
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")

More

All the "More" blocks are there in case you want to dig deeper into LangGraph; if not, you can read everything without reading them.

State typing

We have seen how to create an agent whose state is typed with TypedDict, but we can type it in other ways.

Typing with TypedDict

This is the form we saw before: we type the state as a dictionary using Python's TypedDict, giving it a key and a value for each variable in our state.

from typing_extensions import TypedDict
from typing import Annotated
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph
class State(TypedDict):
    messages: Annotated[list, add_messages]

To access the messages we do it as with any dictionary, via state["messages"].

Typing with dataclass

Another option is to use Python's dataclass.

from dataclasses import dataclass
from typing import Annotated
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph
@dataclass
class State:
    messages: Annotated[list, add_messages]

As you can see, it is similar to dictionary typing, but now, since the state is a class, we access the messages via state.messages.
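For example, a node for this dataclass state would read with attribute access but still return its update as a plain dict (a sketch, assuming the llm chat model defined earlier):

def chatbot_function(state: State):
    # Attribute access to read; the update is still returned as a dict
    return {"messages": [llm.invoke(state.messages)]}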

Typing with Pydantic

Pydantic is a widely used library for typing data in Python. It lets us add runtime validation on top of the typing. Let's check that every message starts with 'User', 'Assistant' or 'System'.

from pydantic import BaseModel, field_validator, ValidationError
from typing import Annotated
from langgraph.graph.message import add_messages

class State(BaseModel):
    messages: Annotated[list, add_messages]  # Each message should start with 'User', 'Assistant' or 'System'

    @field_validator('messages')
    @classmethod
    def validate_messages(cls, value):
        # Ensure each message starts with 'User', 'Assistant' or 'System'
        for message in value:
            if not str(message).startswith(('User', 'Assistant', 'System')):
                raise ValueError("Each message must start with 'User', 'Assistant' or 'System'")
        return value

try:
    state = State(messages=["Hello"])
except ValidationError as e:
    print("Validation Error:", e)

Reducers

As we said, we need a reducer function to specify how to update the state; otherwise, state values are overwritten.

Let's look at an example of a graph in which we do not use a reducer function to specify how to update the state.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display

class State(TypedDict):
    foo: int

def node_1(state):
    print("---Node 1---")
    return {"foo": state['foo'] + 1}

def node_2(state):
    print("---Node 2---")
    return {"foo": state['foo'] + 1}

def node_3(state):
    print("---Node 3---")
    return {"foo": state['foo'] + 1}

# Build graph
builder = StateGraph(State)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)
builder.add_node("node_3", node_3)

# Logic
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
builder.add_edge("node_1", "node_3")
builder.add_edge("node_2", END)
builder.add_edge("node_3", END)

# Add
graph = builder.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))

As we can see, we defined a graph in which node 1 runs first and then nodes 2 and 3. Let's run it and see what happens.

from langgraph.errors import InvalidUpdateError

try:
    graph.invoke({"foo" : 1})
except InvalidUpdateError as e:
    print(f"InvalidUpdateError occurred: {e}")
---Node 1---
---Node 2---
---Node 3---
InvalidUpdateError occurred: At key 'foo': Can receive only one value per step. Use an Annotated key to handle multiple values.
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/INVALID_CONCURRENT_GRAPH_UPDATE

We get an error because node 1 first modifies the value of foo, and then nodes 2 and 3 try to modify foo in parallel, which fails.

So to avoid that, we use a reducer function to specify how to modify the state.

Predefined reducers

We use the Annotated type to specify the reducer function, and the add operator to append values to a list.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display
from operator import add
from typing import Annotated

class State(TypedDict):
    foo: Annotated[list[int], add]

def node_1(state):
    print("---Node 1---")
    return {"foo": [state['foo'][-1] + 1]}

def node_2(state):
    print("---Node 2---")
    return {"foo": [state['foo'][-1] + 1]}

def node_3(state):
    print("---Node 3---")
    return {"foo": [state['foo'][-1] + 1]}

# Build graph
builder = StateGraph(State)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)
builder.add_node("node_3", node_3)

# Logic
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
builder.add_edge("node_1", "node_3")
builder.add_edge("node_2", END)
builder.add_edge("node_3", END)

# Add
graph = builder.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))

We run it again to see what happens.

graph.invoke({"foo" : [1]})

---Node 1---
---Node 2---
---Node 3---

{'foo': [1, 2, 3, 3]}

As we can see, we initialize foo to 1, which goes into a list. Node 1 then adds 1 and appends the new value, a 2, to the list. Finally, nodes 2 and 3 each add one to the last value in the list: both compute a 3 and both append it, which is why the resulting list ends with two 3s.

Let's look at the case where one branch has more nodes than the other.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display
from operator import add
from typing import Annotated

class State(TypedDict):
    foo: Annotated[list[int], add]

def node_1(state):
    print("---Node 1---")
    return {"foo": [state['foo'][-1] + 1]}

def node_2_1(state):
    print("---Node 2_1---")
    return {"foo": [state['foo'][-1] + 1]}

def node_2_2(state):
    print("---Node 2_2---")
    return {"foo": [state['foo'][-1] + 1]}

def node_3(state):
    print("---Node 3---")
    return {"foo": [state['foo'][-1] + 1]}

# Build graph
builder = StateGraph(State)
builder.add_node("node_1", node_1)
builder.add_node("node_2_1", node_2_1)
builder.add_node("node_2_2", node_2_2)
builder.add_node("node_3", node_3)

# Logic
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2_1")
builder.add_edge("node_1", "node_3")
builder.add_edge("node_2_1", "node_2_2")
builder.add_edge("node_2_2", END)
builder.add_edge("node_3", END)

# Add
graph = builder.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))

If we now run the graph

graph.invoke({"foo" : [1]})

---Node 1---
---Node 2_1---
---Node 3---
---Node 2_2---

{'foo': [1, 2, 3, 3, 4]}

What happened is that node 1 ran first, then nodes 2_1 and 3 in parallel, then node 2_2, and finally the END node.

Since we defined foo as a typed list of integers, if we initialize the state with None we get an error.

try:
    graph.invoke({"foo" : None})
except TypeError as e:
    print(f"TypeError occurred: {e}")
TypeError occurred: can only concatenate list (not "NoneType") to list

Let's see how to fix it with custom reducers.

Custom reducers

Sometimes we cannot use a predefined reducer and have to create our own.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from IPython.display import Image, display
from typing import Annotated

def reducer_function(current_list, new_item: list | None):
    if current_list is None:
        current_list = []

    if new_item is not None:
        return current_list + new_item
    return current_list

class State(TypedDict):
    foo: Annotated[list[int], reducer_function]

def node_1(state):
    print("---Node 1---")
    if len(state['foo']) == 0:
        return {'foo': [0]}
    return {"foo": [state['foo'][-1] + 1]}

def node_2(state):
    print("---Node 2---")
    return {"foo": [state['foo'][-1] + 1]}

def node_3(state):
    print("---Node 3---")
    return {"foo": [state['foo'][-1] + 1]}

# Build graph
builder = StateGraph(State)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)
builder.add_node("node_3", node_3)

# Logic
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
builder.add_edge("node_1", "node_3")
builder.add_edge("node_2", END)
builder.add_edge("node_3", END)

# Add
graph = builder.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))
      

If we now start the graph with a None value, it no longer raises an error.

try:
    graph.invoke({"foo" : None})
except TypeError as e:
    print(f"TypeError occurred: {e}")
---Node 1---
---Node 2---
---Node 3---
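A quick sanity check of the reducer on its own shows why: it tolerates None on either side:

print(reducer_function(None, [1]))  # [1]
print(reducer_function([1], None))  # [1]
print(reducer_function([1], [2]))   # [1, 2]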

Multiple states

Private states

Suppose we want to hide state variables, for whatever reason: because some variables only add noise, or because we want to keep some variable private.

If we want a private state, we simply create it.

from typing_extensions import TypedDict
from IPython.display import Image, display
from langgraph.graph import StateGraph, START, END

class OverallState(TypedDict):
    public_var: int

class PrivateState(TypedDict):
    private_var: int

def node_1(state: OverallState) -> PrivateState:
    print("---Node 1---")
    return {"private_var": state['public_var'] + 1}

def node_2(state: PrivateState) -> OverallState:
    print("---Node 2---")
    return {"public_var": state['private_var'] + 1}

# Build graph
builder = StateGraph(OverallState)
builder.add_node("node_1", node_1)
builder.add_node("node_2", node_2)

# Logic
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
builder.add_edge("node_2", END)

# Add
graph = builder.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))

As we can see, we created the private state PrivateState and the public state OverallState, each with one variable. First node 1 runs, modifying the private variable and returning it. Then node 2 runs, modifying the public variable and returning it. Let's run the graph and see what happens.

graph.invoke({"public_var" : 1})

---Node 1---
---Node 2---

{'public_var': 3}

As we can see, when running the graph we pass in the public variable public_var and get back public_var with its value modified, but the private variable private_var never shows up at the input or the output.

Input and output states

We can define the graph's input and output variables. Although internally the state may hold more variables, we define which variables enter the graph and which ones come out.

from typing_extensions import TypedDict
from IPython.display import Image, display
from langgraph.graph import StateGraph, START, END

class InputState(TypedDict):
    question: str

class OutputState(TypedDict):
    answer: str

class OverallState(TypedDict):
    question: str
    answer: str
    notes: str

def thinking_node(state: InputState):
    return {"answer": "bye", "notes": "... his name is Lance"}

def answer_node(state: OverallState) -> OutputState:
    return {"answer": "bye Lance"}

graph = StateGraph(OverallState, input=InputState, output=OutputState)

graph.add_node("answer_node", answer_node)
graph.add_node("thinking_node", thinking_node)
graph.add_edge(START, "thinking_node")
graph.add_edge("thinking_node", "answer_node")
graph.add_edge("answer_node", END)

graph = graph.compile()

# View
display(Image(graph.get_graph().draw_mermaid_png()))

In this case the state has 3 variables: question, answer and notes. However, we define question as the graph's input and answer as its output. The internal state can therefore hold more variables, but they are not taken into account when invoking the graph. Let's run it and see what happens.

graph.invoke({"question":"hi"})
{'answer': 'bye Lance'}

As we can see, we fed question into the graph and got answer out.

Context management

Let's look again at the basic chatbot code.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Define the chatbot function
def chatbot_function(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("chatbot_node", chatbot_function)

# Connect nodes
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))

Let's create some context to pass to the model.

from langchain_core.messages import AIMessage, HumanMessage

messages = [AIMessage("So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage("Yes, I know about whales. But what others should I learn about?", name="Lance"))

for m in messages:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?

If we pass it to the graph, we get the output.

output = graph.invoke({'messages': messages})
for m in output['messages']:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
That's a great topic! Besides whales, there are several other fascinating ocean mammals you might want to learn about. Here are a few:
1. **Dolphins**: Highly intelligent and social, dolphins are found in all oceans of the world. They are known for their playful behavior and communication skills.
2. **Porpoises**: Similar to dolphins but generally smaller and stouter, porpoises are less social and more elusive. They are found in coastal waters around the world.
3. **Seals and Sea Lions**: These are semi-aquatic mammals that can be found in both Arctic and Antarctic regions, as well as in more temperate waters. They are known for their sleek bodies and flippers, and they differ in their ability to walk on land (sea lions can "walk" on their flippers, while seals can only wriggle or slide).
4. **Walruses**: Known for their large tusks and whiskers, walruses are found in the Arctic. They are well-adapted to cold waters and have a thick layer of blubber to keep them warm.
5. **Manatees and Dugongs**: These gentle, herbivorous mammals are often called "sea cows." They live in shallow, coastal areas and are found in tropical and subtropical regions. Manatees are found in the Americas, while dugongs are found in the Indo-Pacific region.
6. **Otters**: While not fully aquatic, sea otters spend most of their lives in the water and are excellent swimmers. They are known for their dense fur, which keeps them warm in cold waters.
7. **Polar Bears**: Although primarily considered land animals, polar bears are excellent swimmers and spend a significant amount of time in the water, especially when hunting for seals.
Each of these mammals has unique adaptations and behaviors that make them incredibly interesting to study. If you have any specific questions or topics you'd like to explore further, feel free to ask!

As we can see, the output now contains one more message.

If this keeps growing, at some point we will have a very long context, which means spending more tokens; that can mean higher costs and also higher latency.

Moreover, with very long contexts LLMs start to perform worse.

With recent models, at the time of writing this post, performance starts to degrade beyond roughly 8k tokens of context.
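Before choosing a strategy, it can help to measure how large the context actually is. A hypothetical check using LangChain's token-counting helper (how accurate it is depends on the tokenizer the chat model exposes):

# Count the tokens the current message list would consume
num_tokens = llm.get_num_tokens_from_messages(messages)
print(f"Context size: {num_tokens} tokens")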

So let's look at several ways to manage this.

Modifying the context with reducer functions

We have seen that with reducer functions we can modify the messages in the state.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_core.messages import RemoveMessage
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Nodes
def filter_messages(state: State):
    # Delete all but the 2 most recent messages
    delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
    return {"messages": delete_messages}

def chat_model_node(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("filter_messages_node", filter_messages)
graph_builder.add_node("chatbot_node", chat_model_node)

# Connect nodes
graph_builder.add_edge(START, "filter_messages_node")
graph_builder.add_edge("filter_messages_node", "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))

As we can see in the graph, first we filter the messages and then pass the result to the model.

We create context to pass to the model again, but now with more messages.

from langchain_core.messages import AIMessage, HumanMessage

messages = [AIMessage("So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage("Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage("I know about sharks too", name="Bot"))
messages.append(HumanMessage("What others should I learn about?", name="Lance"))
messages.append(AIMessage("I know about dolphins too", name="Bot"))
messages.append(HumanMessage("Tell me more about dolphins", name="Lance"))

for m in messages:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins

If we pass it to the graph, we get the output.

output = graph.invoke({'messages': messages})
for m in output['messages']:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
================================== Ai Message ==================================
Dolphins are highly intelligent marine mammals that are part of the family Delphinidae, which includes about 40 species. They are found in oceans worldwide, from tropical to temperate regions, and are known for their agility and playful behavior. Here are some interesting facts about dolphins:
1. **Social Behavior**: Dolphins are highly social animals and often live in groups called pods, which can range from a few individuals to several hundred. Social interactions are complex and include cooperative behaviors, such as hunting and defending against predators.
2. **Communication**: Dolphins communicate using a variety of sounds, including clicks, whistles, and body language. These sounds can be used for navigation (echolocation), communication, and social bonding. Each dolphin has a unique signature whistle that helps identify it to others in the pod.
3. **Intelligence**: Dolphins are considered one of the most intelligent animals on Earth. They have large brains and display behaviors such as problem-solving, mimicry, and even the use of tools. Some studies suggest that dolphins can recognize themselves in mirrors, indicating a level of self-awareness.
4. **Diet**: Dolphins are carnivores and primarily feed on fish and squid. They use echolocation to locate and catch their prey. Some species, like the bottlenose dolphin, have been observed using teamwork to herd fish into tight groups, making them easier to catch.
5. **Reproduction**: Dolphins typically give birth to a single calf after a gestation period of about 10 to 12 months. Calves are born tail-first and are immediately helped to the surface for their first breath by their mother or another dolphin. Calves nurse for up to two years and remain dependent on their mothers for a significant period.
6. **Conservation**: Many dolphin species are threatened by human activities such as pollution, overfishing, and habitat destruction. Some species, like the Indo-Pacific humpback dolphin and the Amazon river dolphin, are endangered. Conservation efforts are crucial to protect these animals and their habitats.
7. **Human Interaction**: Dolphins have a long history of interaction with humans, often appearing in mythology and literature. In some cultures, they are considered sacred or bring good luck. Today, dolphins are popular in marine parks and are often the focus of eco-tourism activities, such as dolphin-watching tours.
Dolphins continue to fascinate scientists and the general public alike, with ongoing research into their behavior, communication, and social structures providing new insights into these remarkable creatures.

As you can see, the filtering function removed all but the last two messages, and those two were passed as context to the LLM.

Trimming messages

Another solution is to trim the message list by token count: we set a token limit and remove (or partially cut) whatever exceeds that limit.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_core.messages import trim_messages
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Nodes
def trim_messages_node(state: State):
    # Trim the messages based on the specified parameters
    trimmed_messages = trim_messages(
        state["messages"],
        max_tokens=100,      # Maximum tokens allowed in the trimmed list
        strategy="last",     # Keep the latest messages
        token_counter=llm,   # Use the LLM's tokenizer to count tokens
        allow_partial=True,  # Allow cutting messages mid-way if needed
    )

    # Print the trimmed messages to see the effect of trim_messages
    print("--- trimmed messages (input to LLM) ---")
    for m in trimmed_messages:
        m.pretty_print()
    print("------------------------------------------------")

    # Invoke the LLM with the trimmed messages
    response = llm.invoke(trimmed_messages)

    # Return the LLM's response in the correct state format
    return {"messages": [response]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("trim_messages_node", trim_messages_node)

# Connect nodes
graph_builder.add_edge(START, "trim_messages_node")
graph_builder.add_edge("trim_messages_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))

As we can see in the graph, first we trim the messages and then pass the result to the model.

We create context to pass to the model again, but now with more messages, including a very long one.

from langchain_core.messages import AIMessage, HumanMessage

messages = [AIMessage("So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage("Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage("""I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
""", name="Bot"))
messages.append(HumanMessage("What others should I learn about?", name="Lance"))
messages.append(AIMessage("I know about dolphins too", name="Bot"))
messages.append(HumanMessage("Tell me more about dolphins", name="Lance"))

for m in messages:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins

If we pass it to the graph, we get the output.

output = graph.invoke({'messages': messages})
--- trimmed messages (input to LLM) ---
================================== Ai Message ==================================
Name: Bot
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
------------------------------------------------

As you can see, the context passed to the LLM has been trimmed: the message that was very long and had many tokens was cut. Let's look at the LLM's output.

for m in output['messages']:
    m.pretty_print()
================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
Tell me more about dolphins
================================== Ai Message ==================================
Certainly! Dolphins are intelligent marine mammals that are part of the family Delphinidae, which includes nearly 40 species. Here are some interesting facts about dolphins:
1. **Intelligence**: Dolphins are known for their high intelligence and have large brains relative to their body size. They exhibit behaviors that suggest social complexity, self-awareness, and problem-solving skills. For example, they can recognize themselves in mirrors, a trait shared by only a few other species.
2. **Communication**: Dolphins communicate using a variety of clicks, whistles, and body language. Each dolphin has a unique "signature whistle" that helps identify it to others, similar to a human name. They use echolocation to navigate and locate prey by emitting clicks and interpreting the echoes that bounce back.
3. **Social Structure**: Dolphins are highly social animals and often live in groups called pods. These pods can vary in size from a few individuals to several hundred. Within these groups, dolphins form complex social relationships and often cooperate to hunt and protect each other from predators.
4. **Habitat**: Dolphins are found in all the world's oceans and in some rivers. Different species have adapted to various environments, from tropical waters to the cooler regions of the open sea. Some species, like the Amazon river dolphin (also known as the boto), live in freshwater rivers.
5. **Diet**: Dolphins are carnivores and primarily eat fish, squid, and crustaceans. Their diet can vary depending on the species and their habitat. Some species, like the killer whale (which is actually a large dolphin), can even hunt larger marine mammals.
6. **Reproduction**: Dolphins have a long gestation period, typically around 10 to 12 months. Calves are born tail-first and are nursed by their mothers for up to two years. Dolphins often form strong bonds with their offspring and other members of their pod.
7. **Conservation**: Many species of dolphins face threats such as pollution, overfishing, and entanglement in fishing nets. Conservation efforts are ongoing to protect these animals and their habitats. Organizations like the International Union for Conservation of Nature (IUCN) and the World Wildlife Fund (WWF) work to raise awareness and implement conservation measures.
8. **Cultural Significance**: Dolphins have been a source of fascination and inspiration for humans for centuries. They appear in myths, legends, and art across many cultures and are often seen as symbols of intelligence, playfulness, and freedom.
Dolphins are truly remarkable creatures with a lot to teach us about social behavior, communication, and the complexities of marine ecosystems. If you have any specific questions or want to know more about a particular species, feel free to ask!

With a trimmed context, the LLM still answers.

Modifying the context and trimming messages

Let's combine the two previous techniques: we will modify the context and trim the messages.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_core.messages import RemoveMessage, trim_messages
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Nodes
def filter_messages(state: State):
    # Delete all but the 2 most recent messages
    delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
    return {"messages": delete_messages}

def trim_messages_node(state: State):
    # print the messages
    print("--- messages (input to trim_messages) ---")
    for m in state["messages"]:
        m.pretty_print()
    print("------------------------------------------------")

    # Trim the messages based on the specified parameters
    trimmed_messages = trim_messages(
        state["messages"],
        max_tokens=100,      # Maximum tokens allowed in the trimmed list
        strategy="last",     # Keep the latest messages
        token_counter=llm,   # Use the LLM's tokenizer to count tokens
        allow_partial=True,  # Allow cutting messages mid-way if needed
    )

    # Print the trimmed messages to see the effect of trim_messages
    print("--- trimmed messages (input to LLM) ---")
    for m in trimmed_messages:
        m.pretty_print()
    print("------------------------------------------------")

    # Invoke the LLM with the trimmed messages
    response = llm.invoke(trimmed_messages)

    # Return the LLM's response in the correct state format
    return {"messages": [response]}

def chat_model_node(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("filter_messages_node", filter_messages)
graph_builder.add_node("chatbot_node", chat_model_node)
graph_builder.add_node("trim_messages_node", trim_messages_node)

# Connect nodes
graph_builder.add_edge(START, "filter_messages_node")
graph_builder.add_edge("filter_messages_node", "trim_messages_node")
graph_builder.add_edge("trim_messages_node", "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))
image uv 12

Now we filter, keeping only the two most recent messages, then we trim the context so we don't spend too many tokens, and finally we pass the result to the model. The sketch below shows the deletion mechanism on its own.
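The add_messages reducer that we attached to the messages key treats a RemoveMessage as a deletion instead of an append, which is exactly what filter_messages relies on. A minimal standalone sketch (the message ids are made up for the example):

from langchain_core.messages import AIMessage, HumanMessage, RemoveMessage
from langgraph.graph.message import add_messages

# Made-up ids, just for the example
history = [
    HumanMessage("first message", id="1"),
    AIMessage("second message", id="2"),
    HumanMessage("third message", id="3"),
]

# add_messages applies the RemoveMessage as a deletion instead of an append
updated = add_messages(history, [RemoveMessage(id="1")])
print([m.content for m in updated])  # ['second message', 'third message']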

We create a context to pass to the graph.

from langchain_core.messages import AIMessage, HumanMessage

messages = [AIMessage("So you said you were researching ocean mammals?", name="Bot")]
messages.append(HumanMessage("Yes, I know about whales. But what others should I learn about?", name="Lance"))
messages.append(AIMessage("I know about dolphins too", name="Bot"))
messages.append(HumanMessage("What others should I learn about?", name="Lance"))
messages.append(AIMessage("""I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
""", name="Bot"))
messages.append(HumanMessage("What others should I learn about?", name="Lance"))

for m in messages:
    m.pretty_print()

================================== Ai Message ==================================
Name: Bot
So you said you were researching ocean mammals?
================================ Human Message =================================
Name: Lance
Yes, I know about whales. But what others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about dolphins too
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?

We pass it to the graph and get the output.

output = graph.invoke({'messages': messages})

--- messages (input to trim_messages) ---
================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
------------------------------------------------
--- trimmed messages (input to LLM) ---
================================ Human Message =================================
Name: Lance
What others should I learn about?
------------------------------------------------

As you can see, only the last message remains: the filter function kept the two most recent messages, but the trimming function removed the second-to-last one because it exceeded the 100-token budget.
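If you want to play with trim_messages in isolation, here is a minimal sketch. The character-based counter is a toy assumption just for the example; in the graph above we pass the llm itself as token_counter so the model's real tokenizer is used.

from langchain_core.messages import AIMessage, HumanMessage, trim_messages

# Toy counter (assumption): roughly one token per four characters of content
def approx_tokens(messages) -> int:
    return sum(len(m.content) // 4 for m in messages)

history = [
    AIMessage("A very long answer about sharks. " * 30, name="Bot"),
    HumanMessage("What others should I learn about?", name="Lance"),
]

# With a 100-"token" budget and strategy="last", the long AI message is cut
# down (allow_partial=True) or dropped before the final human message
trimmed = trim_messages(
    history,
    max_tokens=100,
    strategy="last",
    token_counter=approx_tokens,
    allow_partial=True,
)
for m in trimmed:
    print(type(m).__name__, len(m.content))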

Let's look at what we have at the model's output.

for m in output['messages']:
    m.pretty_print()

================================== Ai Message ==================================
Name: Bot
I know about sharks too. They are very dangerous, but they are also very beautiful.
Sometimes have been seen in the wild, but they are not very common. In the wild, they are very dangerous, but they are also very beautiful.
They live in the sea and in the ocean. They can travel long distances and can be found in many parts of the world.
Often they live in groups of 20 or more, but they are not very common.
They should eat a lot of food. Normally they eat a lot of fish.
The white shark is the largest of the sharks and is the most dangerous.
The great white shark is the most famous of the sharks and is the most dangerous.
The tiger shark is the most aggressive of the sharks and is the most dangerous.
The hammerhead shark is the most beautiful of the sharks and is the most dangerous.
The mako shark is the fastest of the sharks and is the most dangerous.
The bull shark is the most common of the sharks and is the most dangerous.
================================ Human Message =================================
Name: Lance
What others should I learn about?
================================== Ai Message ==================================
Certainly! To provide a more tailored response, it would be helpful to know what areas or topics you're interested in. However, here’s a general list of areas that are often considered valuable for personal and professional development:
1. **Technology & Digital Skills**:
- Programming languages (Python, JavaScript, etc.)
- Web development (HTML, CSS, React, etc.)
- Data analysis and visualization (SQL, Tableau, Power BI)
- Machine learning and AI
- Cloud computing (AWS, Azure, Google Cloud)
2. **Business & Entrepreneurship**:
- Marketing (digital marketing, SEO, content marketing)
- Project management
- Financial literacy
- Leadership and management
-Startup and venture capital
3. **Science & Engineering**:
- Biology and genetics
- Physics and materials science
- Environmental science and sustainability
- Robotics and automation
- Aerospace engineering
4. **Health & Wellness**:
- Nutrition and dietetics
- Mental health and psychology
- Exercise science
- Yoga and mindfulness
- Traditional and alternative medicine
5. **Arts & Humanities**:
- Creative writing and storytelling
- Music and sound production
- Visual arts and design (graphic design, photography)
- Philosophy and ethics
- History and cultural studies
6. **Communication & Languages**:
- Public speaking and presentation skills
- Conflict resolution and negotiation
- Learning a new language (Spanish, Mandarin, French, etc.)
- Writing and editing
7. **Personal Development**:
- Time management and productivity
- Mindfulness and stress management
- Goal setting and motivation
- Personal finance and budgeting
- Critical thinking and problem solving
8. **Social & Environmental Impact**:
- Social entrepreneurship
- Community organizing and activism
- Sustainable living practices
- Climate change and environmental policy
If you have a specific area of interest or a particular goal in mind, feel free to share, and I can provide more detailed recommendations!
================================== Ai Message ==================================

We have filtered the state so much that the LLM no longer has enough context; later we will see a way to fix this by adding a summary of the conversation to the state.
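As a preview, a minimal sketch (hypothetical; the actual implementation comes later) of what that state could look like:

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

# Hypothetical state: a summarize node would update `summary`
# before old messages are removed from `messages`
class StateWithSummary(TypedDict):
    messages: Annotated[list, add_messages]
    summary: str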

Streaming modeslink image 22

Synchronous streaminglink image 23

In this case we receive the LLM's complete result all at once, when it has finished generating the text.

To explain the synchronous streaming modes, we first create a basic graph.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_core.messages import HumanMessage
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Nodes
def chat_model_node(state: State):
    # Return the LLM's response in the correct state format
    return {"messages": [llm.invoke(state["messages"])]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("chatbot_node", chat_model_node)

# Connect nodes
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))
image uv 13

Now we have two ways of getting the LLM's result: one via the updates mode and the other via the values mode.

updates vs values

While updates gives us each new result, values gives us the whole history of results.

Updateslink image 24

for chunk in graph.stream({"messages": [HumanMessage(content="hi! I'm Máximo")]}, stream_mode="updates"):
    print(chunk['chatbot_node']['messages'][-1].content)

Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
Valueslink image 25

for chunk in graph.stream({"messages": [HumanMessage(content="hi! I'm Máximo")]}, stream_mode="values"):
    print(chunk['messages'][-1].content)

hi! I'm Máximo
Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
Asynchronous streaminglink image 26

Now we are going to receive the LLM's result token by token. To do so we have to add streaming=True when we create the HuggingFace LLM, and we have to change the chatbot node's function so that it is asynchronous.

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_core.messages import HumanMessage
from huggingface_hub import login
from IPython.display import Image, display
import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
    streaming=True,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)

# Nodes
async def chat_model_node(state: State):
    async for token in llm.astream_log(state["messages"]):
        yield {"messages": [token]}

# Create graph builder
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("chatbot_node", chat_model_node)

# Connect nodes
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

# Compile the graph
graph = graph_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))
image uv 14

As you can see, the function has been made asynchronous and has become a generator, since yield returns a value and pauses the function's execution until it is called again.
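The same mechanics can be seen in a tiny standalone async generator, unrelated to LangGraph:

import asyncio

async def tokens():
    for t in ["Hello", " ", "world"]:
        yield t  # return a value and pause here until the consumer asks for the next one

async def main():
    async for t in tokens():
        print(t, end="", flush=True)

asyncio.run(main())  # prints: Hello world (in a notebook, use `await main()` instead)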

Let's run the graph asynchronously and look at the types of events that are generated.

try:
    async for event in graph.astream_events({"messages": [HumanMessage(content="hi! I'm Máximo")]}, version="v2"):
        print(f"event: {event}")
except Exception as e:
    print(f"Error: {e}")

event: {'event': 'on_chain_start', 'data': {'input': {'messages': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={})]}}, 'name': 'LangGraph', 'tags': [], 'run_id': 'c9c40a00-157a-4229-a0d1-fda00e7bfd34', 'metadata': {}, 'parent_ids': []}
      event: {'event': 'on_chain_start', 'data': {'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      event: {'event': 'on_chain_start', 'data': {}, 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chat_model_start', 'data': {'input': {'input': [[HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]]}}, 'name': 'ChatHuggingFace', 'tags': [], 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chain_stream', 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch({'op': 'replace',
        'path': '',
        'value': {'final_output': None,
                  'id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3',
                  'logs': {},
                  'name': 'ChatHuggingFace',
                  'streamed_output': [],
                  'type': 'llm'}})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Hello', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Má', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ximo', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='!', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' It', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content="'s", additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' nice', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' to', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' meet', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' How', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' can', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' I', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' assist', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' today', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='?', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Feel', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' free', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' to', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' ask', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' me', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' any', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' questions', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      
/Users/macm1/miniforge3/envs/langgraph/lib/python3.13/site-packages/huggingface_hub/inference/_generated/_async_client.py:2308: FutureWarning: `stop_sequences` is a deprecated argument for `text_generation` task and will be removed in version '0.28.0'. Use `stop` instead.
        warnings.warn(
      
event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' or', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' let', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' me', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' know', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' if', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' you', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' need', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' help', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' with', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' anything', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' specific', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='<|im_end|>', additional_kwargs={}, response_metadata={})}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chat_model_end', 'data': {'output': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0'), 'input': {'input': [[HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]]}}, 'run_id': '74dfdbb9-4c2d-4a08-ad7d-795b5953cae3', 'name': 'ChatHuggingFace', 'tags': [], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'ls_provider': 'huggingface', 'ls_model_type': 'chat'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237', '15247b1a-1cd6-4863-9402-66499f921244']}
      event: {'event': 'on_chain_stream', 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch({'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       {'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chain_end', 'data': {'output': {'input': [RunLogPatch({'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       {'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}, 'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'run_id': '15247b1a-1cd6-4863-9402-66499f921244', 'name': 'chatbot_node', 'tags': ['seq:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd', 'checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34', '638828c0-4add-4141-b6b6-484446100237']}
      event: {'event': 'on_chain_stream', 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'data': {'chunk': {'input': [RunLogPatch({'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       {'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      event: {'event': 'on_chain_end', 'data': {'output': {'input': [RunLogPatch({'op': 'add',
        'path': '/streamed_output/-',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')},
       {'op': 'replace',
        'path': '/final_output',
        'value': AIMessage(content="Hello Máximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.<|im_end|>", additional_kwargs={}, response_metadata={}, id='run-74dfdbb9-4c2d-4a08-ad7d-795b5953cae3-0')})]}, 'input': {'input': [HumanMessage(content="hi! I'm Máximo", additional_kwargs={}, response_metadata={}, id='6469501c-07b0-42e4-a3e6-f133ace1860c')]}}, 'run_id': '638828c0-4add-4141-b6b6-484446100237', 'name': 'chatbot_node', 'tags': ['graph:step:1'], 'metadata': {'langgraph_step': 1, 'langgraph_node': 'chatbot_node', 'langgraph_triggers': ('branch:to:chatbot_node',), 'langgraph_path': ('__pregel_pull', 'chatbot_node'), 'langgraph_checkpoint_ns': 'chatbot_node:b7599990-0c1a-4133-fb2c-f32105784fbd'}, 'parent_ids': ['c9c40a00-157a-4229-a0d1-fda00e7bfd34']}
      Error: Unsupported message type: <class 'langchain_core.tracers.log_stream.RunLogPatch'>
      For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE 
      

As you can see, the tokens arrive with the on_chat_model_stream event, so let's capture it and print it. The Error: Unsupported message type at the end appears because astream_log yields RunLogPatch objects, which cannot be coerced into state messages; that is why the next cell simply swallows the exception.

try:
    async for event in graph.astream_events({"messages": [HumanMessage(content="hi! I'm Máximo")]}, version="v2"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end=" | ", flush=True)
except Exception:
    pass

/Users/macm1/miniforge3/envs/langgraph/lib/python3.13/site-packages/huggingface_hub/inference/_generated/_async_client.py:2308: FutureWarning: `stop_sequences` is a deprecated argument for `text_generation` task and will be removed in version '0.28.0'. Use `stop` instead.
        warnings.warn(
      
Hello |  Má | ximo | ! |  It | 's |  nice |  to |  meet |  you | . |  How |  can |  I |  assist |  you |  today | ? |  Feel |  free |  to |  ask |  me |  any |  questions |  or |  let |  me |  know |  if |  you |  need |  help |  with |  anything |  specific | . | <|im_end|> | 
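
The ` | ` separator is only there to make the token boundaries visible; printing with end="" would reconstruct the plain reply:

try:
    async for event in graph.astream_events({"messages": [HumanMessage(content="hi! I'm Máximo")]}, version="v2"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)
except Exception:
    pass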

Subgraphslink image 27

Earlier we saw how to branch a graph so that nodes run in parallel, but now suppose that what we want to run in parallel are subgraphs. Let's see how to do it.

We are going to build a log-management graph that will have one subgraph for summarizing logs and another subgraph for analyzing errors in the logs.

graph with subgraphs

So what we are going to do is first define each subgraph separately and then add them to the main graph; the sketch below shows the general pattern.
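The key idea is that a compiled graph can be added to another graph as if it were a regular node. A minimal self-contained sketch of the pattern (toy state and node, not the log example):

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

class SharedState(TypedDict):
    value: str

# A trivial subgraph with a single node
sub_builder = StateGraph(SharedState)
sub_builder.add_node("inner", lambda state: {"value": state["value"] + " processed"})
sub_builder.add_edge(START, "inner")
sub_builder.add_edge("inner", END)

# The compiled subgraph is added to the parent graph as a regular node
parent_builder = StateGraph(SharedState)
parent_builder.add_node("subgraph_node", sub_builder.compile())
parent_builder.add_edge(START, "subgraph_node")
parent_builder.add_edge("subgraph_node", END)

print(parent_builder.compile().invoke({"value": "log"}))  # {'value': 'log processed'}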

Log failure analysis subgraphlink image 28

We import the necessary libraries.

from IPython.display import Image, display
from langgraph.graph import StateGraph, START, END
from operator import add
from typing_extensions import TypedDict
from typing import List, Optional, Annotated

We create a class with the structure of the logs.

# The structure of the logs
class Log(TypedDict):
    id: str
    question: str
    docs: Optional[List]
    answer: str
    grade: Optional[int]
    grader: Optional[str]
    feedback: Optional[str]

Now we create two classes: one with the structure of the failure-analysis state and another with the structure of the analysis it will report as output

	
# Failure Analysis Sub-graph
class FailureAnalysisState(TypedDict):
    cleaned_logs: List[Log]
    failures: List[Log]
    fa_summary: str
    processed_logs: List[str]

class FailureAnalysisOutputState(TypedDict):
    fa_summary: str
    processed_logs: List[str]
Copy

Now we create the node functions: one gets the failures from the logs by looking for logs that contain a grade key, and the other generates a summary of those failures. We also add prints so we can see what's happening internally.

	
def get_failures(state):
    """ Get logs that contain a failure """
    cleaned_logs = state["cleaned_logs"]
    print(f" debug get_failures: cleaned_logs: {cleaned_logs}")
    failures = [log for log in cleaned_logs if "grade" in log]
    print(f" debug get_failures: failures: {failures}")
    return {"failures": failures}

def generate_summary(state):
    """ Generate summary of failures """
    failures = state["failures"]
    print(f" debug generate_summary: failures: {failures}")
    fa_summary = "Poor quality retrieval of documentation."
    print(f" debug generate_summary: fa_summary: {fa_summary}")
    processed_logs = [f"failure-analysis-on-log-{failure['id']}" for failure in failures]
    print(f" debug generate_summary: processed_logs: {processed_logs}")
    return {"fa_summary": fa_summary, "processed_logs": processed_logs}
Copy
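
A detail worth making explicit: get_failures filters with "grade" in log, which checks for the presence of the grade key, not its value. A quick sketch with two hypothetical logs to illustrate the difference:

log_without_grade = {"id": "3", "question": "What is LangGraph?", "docs": None, "answer": "An orchestration framework"}
log_with_grade_zero = {"id": "4", "question": "What is LangChain?", "docs": None, "answer": "A framework", "grade": 0}

# Presence check: a log counts as a failure if it has a "grade" key,
# even when the grade is 0 (as happens with one of the test logs later on)
print("grade" in log_without_grade)    # False -> not a failure
print("grade" in log_with_grade_zero)  # True  -> failure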

Finally, we create the graph, add the nodes and edges, and compile it

fa_builder = StateGraph(FailureAnalysisState, output=FailureAnalysisOutputState)

fa_builder.add_node("get_failures", get_failures)
fa_builder.add_node("generate_summary", generate_summary)

fa_builder.add_edge(START, "get_failures")
fa_builder.add_edge("get_failures", "generate_summary")
fa_builder.add_edge("generate_summary", END)

graph = fa_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 15

Let's create a test log

	
failure_log = {
    "id": "1",
    "question": "What is the meaning of life?",
    "docs": None,
    "answer": "42",
    "grade": 1,
    "grader": "AI",
    "feedback": "Good job!"
}
Copy

We run the graph with the test log. Since the get_failures function reads the cleaned_logs key from the state, we have to pass the log to the graph under that same key.

graph.invoke({"cleaned_logs": [failure_log]})
      
	 debug get_failures: cleaned_logs: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug get_failures: failures: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: failures: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: fa_summary: Poor quality retrieval of documentation.
      	 debug generate_summary: processed_logs: ['failure-analysis-on-log-1']
      
Out[16]:
{'fa_summary': 'Poor quality retrieval of documentation.',
       'processed_logs': ['failure-analysis-on-log-1']}

As you can see, it flagged the test log as a failure, since the log contains the grade key (here with a value of 1), and then it generated a summary of the failures.

Let's define the whole subgraph together again so it's clearer, and also to remove the prints we added for debugging.

from IPython.display import Image, display
from langgraph.graph import StateGraph, START, END

from operator import add
from typing_extensions import TypedDict
from typing import List, Optional, Annotated

# The structure of the logs
class Log(TypedDict):
    id: str
    question: str
    docs: Optional[List]
    answer: str
    grade: Optional[int]
    grader: Optional[str]
    feedback: Optional[str]

# Failure analysis classes
class FailureAnalysisState(TypedDict):
    cleaned_logs: List[Log]
    failures: List[Log]
    fa_summary: str
    processed_logs: List[str]

class FailureAnalysisOutputState(TypedDict):
    fa_summary: str
    processed_logs: List[str]

# Functions
def get_failures(state):
    """ Get logs that contain a failure """
    cleaned_logs = state["cleaned_logs"]
    failures = [log for log in cleaned_logs if "grade" in log]
    return {"failures": failures}

def generate_summary(state):
    """ Generate summary of failures """
    failures = state["failures"]
    fa_summary = "Poor quality retrieval of documentation."
    processed_logs = [f"failure-analysis-on-log-{failure['id']}" for failure in failures]
    return {"fa_summary": fa_summary, "processed_logs": processed_logs}

# Build the graph
fa_builder = StateGraph(FailureAnalysisState, output=FailureAnalysisOutputState)

fa_builder.add_node("get_failures", get_failures)
fa_builder.add_node("generate_summary", generate_summary)

fa_builder.add_edge(START, "get_failures")
fa_builder.add_edge("get_failures", "generate_summary")
fa_builder.add_edge("generate_summary", END)

graph = fa_builder.compile()

display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 16

If we run it again now, we get the same result, but without the prints.

	
graph.invoke({"cleaned_logs": [failure_log]})
Copy
	
{'fa_summary': 'Poor quality retrieval of documentation.',
'processed_logs': ['failure-analysis-on-log-1']}
Log-summarization subgraphlink image 29

Now we create the log-summarization subgraph. In this case there's no need to redefine the class with the log structure, so we only create the classes with the structure of the log summaries and of the output.

	
# Summarization subgraph
class QuestionSummarizationState(TypedDict):
    cleaned_logs: List[Log]
    qs_summary: str
    report: str
    processed_logs: List[str]

class QuestionSummarizationOutputState(TypedDict):
    report: str
    processed_logs: List[str]
Copy

Now we define the node functions: one will generate the summary of the logs and the other will "send the summary to Slack".

	
def generate_summary(state):
    cleaned_logs = state["cleaned_logs"]
    print(f" debug generate_summary: cleaned_logs: {cleaned_logs}")
    summary = "Questions focused on ..."
    print(f" debug generate_summary: summary: {summary}")
    processed_logs = [f"summary-on-log-{log['id']}" for log in cleaned_logs]
    print(f" debug generate_summary: processed_logs: {processed_logs}")
    return {"qs_summary": summary, "processed_logs": processed_logs}

def send_to_slack(state):
    qs_summary = state["qs_summary"]
    print(f" debug send_to_slack: qs_summary: {qs_summary}")
    report = "foo bar baz"
    print(f" debug send_to_slack: report: {report}")
    return {"report": report}
Copy

Finally, we create the graph, add the nodes and edges, and compile it.

# Build the graph
qs_builder = StateGraph(QuestionSummarizationState, output=QuestionSummarizationOutputState)

qs_builder.add_node("generate_summary", generate_summary)
qs_builder.add_node("send_to_slack", send_to_slack)

qs_builder.add_edge(START, "generate_summary")
qs_builder.add_edge("generate_summary", "send_to_slack")
qs_builder.add_edge("send_to_slack", END)

graph = qs_builder.compile()
display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 17

We test again with the log we created earlier.

graph.invoke({"cleaned_logs": [failure_log]})
      
	 debug generate_summary: cleaned_logs: [{'id': '1', 'question': 'What is the meaning of life?', 'docs': None, 'answer': '42', 'grade': 1, 'grader': 'AI', 'feedback': 'Good job!'}]
      	 debug generate_summary: summary: Questions focused on ...
      	 debug generate_summary: processed_logs: ['summary-on-log-1']
      	 debug send_to_slack: qs_summary: Questions focused on ...
      	 debug send_to_slack: report: foo bar baz
      
Out[25]:
{'report': 'foo bar baz', 'processed_logs': ['summary-on-log-1']}

We rewrite the subgraph, all together for clarity and without the prints.

# Summarization classes
class QuestionSummarizationState(TypedDict):
    cleaned_logs: List[Log]
    qs_summary: str
    report: str
    processed_logs: List[str]

class QuestionSummarizationOutputState(TypedDict):
    report: str
    processed_logs: List[str]

# Functions
def generate_summary(state):
    cleaned_logs = state["cleaned_logs"]
    summary = "Questions focused on ..."
    processed_logs = [f"summary-on-log-{log['id']}" for log in cleaned_logs]
    return {"qs_summary": summary, "processed_logs": processed_logs}

def send_to_slack(state):
    qs_summary = state["qs_summary"]
    report = "foo bar baz"
    return {"report": report}

# Build the graph
qs_builder = StateGraph(QuestionSummarizationState, output=QuestionSummarizationOutputState)

qs_builder.add_node("generate_summary", generate_summary)
qs_builder.add_node("send_to_slack", send_to_slack)

qs_builder.add_edge(START, "generate_summary")
qs_builder.add_edge("generate_summary", "send_to_slack")
qs_builder.add_edge("send_to_slack", END)

graph = qs_builder.compile()
display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 18

We run the graph again with the test log.

	
graph.invoke({"cleaned_logs": [failure_log]})
Copy
	
{'report': 'foo bar baz', 'processed_logs': ['summary-on-log-1']}
Main graphlink image 30

Now that we have the two subgraphs, we can create the main graph that will use them. To do that, we create the EntryGraphState class, which will hold the state shared with the two subgraphs.

	
# Entry Graph
class EntryGraphState(TypedDict):
    raw_logs: List[Log]
    cleaned_logs: List[Log]
    fa_summary: str  # This will only be generated in the FA sub-graph
    report: str  # This will only be generated in the QS sub-graph
    processed_logs: Annotated[List[str], add]  # This will be generated in BOTH sub-graphs
Copy
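
The add reducer in the processed_logs annotation is what lets both subgraphs write to that same key in parallel: instead of the two updates clashing, LangGraph merges them with the reducer. A minimal illustration of the merge, using made-up updates like the ones the two subgraphs will produce:

from operator import add

# Hypothetical updates returned by the two parallel subgraphs in the same step
fa_update = ["failure-analysis-on-log-2"]
qs_update = ["summary-on-log-1", "summary-on-log-2"]

# LangGraph resolves the concurrent writes by applying the reducer
merged = add(fa_update, qs_update)
print(merged)  # ['failure-analysis-on-log-2', 'summary-on-log-1', 'summary-on-log-2']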

We create a log-cleaning function. It will be a node that runs before the two subgraphs and hands them the cleaned logs through the cleaned_logs key, which is the key both subgraphs read from the state.

	
def clean_logs(state):
    # Get logs
    raw_logs = state["raw_logs"]
    # Data cleaning raw_logs -> docs
    cleaned_logs = raw_logs
    return {"cleaned_logs": cleaned_logs}
Copy

Now we create the main graph

	
# Build the graph
entry_builder = StateGraph(EntryGraphState)
Copy

We add the nodes. To add a subgraph as a node, we add its compiled version

	
# Add nodes
entry_builder.add_node("clean_logs", clean_logs)
entry_builder.add_node("question_summarization", qs_builder.compile())
entry_builder.add_node("failure_analysis", fa_builder.compile())
Copy
	
<langgraph.graph.state.StateGraph at 0x107985ef0>

From here on it's business as usual: we add the edges and compile the graph.

	
# Add edges
entry_builder.add_edge(START, "clean_logs")
entry_builder.add_edge("clean_logs", "failure_analysis")
entry_builder.add_edge("clean_logs", "question_summarization")
entry_builder.add_edge("failure_analysis", END)
entry_builder.add_edge("question_summarization", END)
# Compile the graph
graph = entry_builder.compile()
Copy
	
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.
Adding an edge to a graph that has already been compiled. This will not be reflected in the compiled graph.

Finally, we display the graph. We pass xray=1 so that the internal structure of the nested graphs is shown.

# Setting xray to 1 will show the internal structure of the nested graph
display(Image(graph.get_graph(xray=1).draw_mermaid_png()))
      
image uv 19

If we hadn't passed xray=1, the graph would look like this

display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 20

Now we create two test logs: one contains a failure (a grade key with a value) and the other doesn't.

	
# Dummy logs
question_answer = Log(
    id="1",
    question="How can I import ChatOllama?",
    answer="To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'",
)

question_answer_feedback = Log(
    id="2",
    question="How can I use Chroma vector store?",
    answer="To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).",
    grade=0,
    grader="Document Relevance Recall",
    feedback="The retrieved documents discuss vector stores in general, but not Chroma specifically",
)

raw_logs = [question_answer, question_answer_feedback]
Copy

We pass them to the main graph

	
graph.invoke({"raw_logs": raw_logs})
Copy
	
{'raw_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'cleaned_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'fa_summary': 'Poor quality retrieval of documentation.',
'report': 'foo bar baz',
'processed_logs': ['failure-analysis-on-log-2',
'summary-on-log-1',
'summary-on-log-2']}

As before, we write out the whole graph to see it more clearly

# Entry Graph
class EntryGraphState(TypedDict):
    raw_logs: List[Log]
    cleaned_logs: List[Log]
    fa_summary: str  # This will only be generated in the FA sub-graph
    report: str  # This will only be generated in the QS sub-graph
    processed_logs: Annotated[List[str], add]  # This will be generated in BOTH sub-graphs

# Functions
def clean_logs(state):
    # Get logs
    raw_logs = state["raw_logs"]
    # Data cleaning raw_logs -> docs
    cleaned_logs = raw_logs
    return {"cleaned_logs": cleaned_logs}

# Build the graph
entry_builder = StateGraph(EntryGraphState)

# Add nodes
entry_builder.add_node("clean_logs", clean_logs)
entry_builder.add_node("question_summarization", qs_builder.compile())
entry_builder.add_node("failure_analysis", fa_builder.compile())

# Add edges
entry_builder.add_edge(START, "clean_logs")
entry_builder.add_edge("clean_logs", "failure_analysis")
entry_builder.add_edge("clean_logs", "question_summarization")
entry_builder.add_edge("failure_analysis", END)
entry_builder.add_edge("question_summarization", END)

# Compile the graph
graph = entry_builder.compile()

# Setting xray to 1 will show the internal structure of the nested graph
display(Image(graph.get_graph(xray=1).draw_mermaid_png()))
      
image uv 21

We pass the test logs to the main graph

	
graph.invoke({"raw_logs": raw_logs})
Copy
	
{'raw_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'cleaned_logs': [{'id': '1',
'question': 'How can I import ChatOllama?',
'answer': "To import ChatOllama, use: 'from langchain_community.chat_models import ChatOllama.'"},
{'id': '2',
'question': 'How can I use Chroma vector store?',
'answer': 'To use Chroma, define: rag_chain = create_retrieval_chain(retriever, question_answer_chain).',
'grade': 0,
'grader': 'Document Relevance Recall',
'feedback': 'The retrieved documents discuss vector stores in general, but not Chroma specifically'}],
'fa_summary': 'Poor quality retrieval of documentation.',
'report': 'foo bar baz',
'processed_logs': ['failure-analysis-on-log-2',
'summary-on-log-1',
'summary-on-log-2']}

Dynamic brancheslink image 31

So far we have created static nodes and edges, but sometimes we don't know whether we'll need a branch until the graph runs. For that we can use LangGraph's Send API, which lets us create branches dynamically.

To see it in action, we're going to build a graph that generates jokes about a set of topics. Since we don't know in advance how many topics we'll generate jokes about, we'll use Send to create branches dynamically, so that a new branch is spawned for each remaining topic.

Note: We'll build this section with Sonnet 3.7, because the HuggingFace integration lacks the with_structured_output functionality, which produces structured output with a defined schema.

First we import the necessary libraries.

	
import operator
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import END, StateGraph, START
from langchain_anthropic import ChatAnthropic
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
from IPython.display import Image
Copy

We create the classes with the structure of the state.

	
class OverallState(TypedDict):
    topic: str
    subjects: list
    jokes: Annotated[list, operator.add]
    best_selected_joke: str

class JokeState(TypedDict):
    subject: str
Copy

We create the LLM

	
# Create the LLM model
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
Copy

We create the function that will generate the topics.

We'll use with_structured_output so the LLM generates output with a structure we define; we define that structure with the Subjects class, a Pydantic BaseModel.

	
from pydantic import BaseModel

class Subjects(BaseModel):
    subjects: list[str]

subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""

def generate_topics(state: OverallState):
    prompt = subjects_prompt.format(topic=state["topic"])
    response = llm.with_structured_output(Subjects).invoke(prompt)
    return {"subjects": response.subjects}
Copy

Now we define the function that will generate the jokes.

	
class Joke(BaseModel):
    joke: str

joke_prompt = """Generate a joke about {subject}"""

def generate_joke(state: JokeState):
    prompt = joke_prompt.format(subject=state["subject"])
    response = llm.with_structured_output(Joke).invoke(prompt)
    return {"jokes": [response.joke]}
Copy

And finally, the function that will select the best joke.

	
class BestJoke(BaseModel):
    id: int

best_joke_prompt = """Below are a bunch of jokes about {topic}. Select the best one! Return the ID of the best one, starting 0 as the ID for the first joke. Jokes: \n\n {jokes}"""

def best_joke(state: OverallState):
    jokes = "\n\n".join(state["jokes"])
    prompt = best_joke_prompt.format(topic=state["topic"], jokes=jokes)
    response = llm.with_structured_output(BestJoke).invoke(prompt)
    return {"best_selected_joke": state["jokes"][response.id]}
Copy

Now let's create the function that spawns the dynamic branches with Send: for each subject that was generated, it returns a Send that dispatches the generate_joke node with that subject as its input state.

	
from langgraph.constants import Send

def continue_to_jokes(state: OverallState):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]
Copy

We build the graph and add the nodes and edges.

# Build the graph
graph = StateGraph(OverallState)

# Add nodes
graph.add_node("generate_topics", generate_topics)
graph.add_node("generate_joke", generate_joke)
graph.add_node("best_joke", best_joke)

# Add edges
graph.add_edge(START, "generate_topics")
graph.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
graph.add_edge("generate_joke", "best_joke")
graph.add_edge("best_joke", END)

# Compile the graph
app = graph.compile()

# Display the graph
Image(app.get_graph().draw_mermaid_png())
      
Out[8]:
image uv 22

As you can see, the edge between generate_topics and generate_joke is drawn as a dashed line, which indicates that it is a dynamic branch.

We now create a dictionary with the topic key, which is what the generate_topics node needs in order to generate the topics, and pass it to the graph.

	
# Call the graph: here we call it to generate a list of jokes
for state in app.stream({"topic": "animals"}):
    print(state)
Copy
	
{'generate_topics': {'subjects': ['Marine Animals', 'Endangered Species', 'Animal Behavior']}}
{'generate_joke': {'jokes': ["Why don't cats play poker in the wild? Too many cheetahs!"]}}
{'generate_joke': {'jokes': ["Why don't sharks eat clownfish? Because they taste funny!"]}}
{'generate_joke': {'jokes': ["Why don't endangered species tell jokes? Because they're afraid of dying out from laughter!"]}}
{'best_joke': {'best_selected_joke': "Why don't cats play poker in the wild? Too many cheetahs!"}}

We recreate the graph with all the code together for clarity.

import operator
from typing import Annotated
from typing_extensions import TypedDict
from pydantic import BaseModel

from langgraph.graph import END, StateGraph, START
from langgraph.constants import Send

from langchain_anthropic import ChatAnthropic

import os
os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing

import dotenv
dotenv.load_dotenv()
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")

from IPython.display import Image

# Prompts we will use
subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""
joke_prompt = """Generate a joke about {subject}"""
best_joke_prompt = """Below are a bunch of jokes about {topic}. Select the best one! Return the ID of the best one, starting 0 as the ID for the first joke. Jokes: \n\n  {jokes}"""

# Create the LLM model
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)

class Subjects(BaseModel):
    subjects: list[str]

class BestJoke(BaseModel):
    id: int

class OverallState(TypedDict):
    topic: str
    subjects: list
    jokes: Annotated[list, operator.add]
    best_selected_joke: str

class JokeState(TypedDict):
    subject: str

class Joke(BaseModel):
    joke: str

def generate_topics(state: OverallState):
    prompt = subjects_prompt.format(topic=state["topic"])
    response = llm.with_structured_output(Subjects).invoke(prompt)
    return {"subjects": response.subjects}

def continue_to_jokes(state: OverallState):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

def generate_joke(state: JokeState):
    prompt = joke_prompt.format(subject=state["subject"])
    response = llm.with_structured_output(Joke).invoke(prompt)
    return {"jokes": [response.joke]}

def best_joke(state: OverallState):
    jokes = "\n\n".join(state["jokes"])
    prompt = best_joke_prompt.format(topic=state["topic"], jokes=jokes)
    response = llm.with_structured_output(BestJoke).invoke(prompt)
    return {"best_selected_joke": state["jokes"][response.id]}

# Build the graph
graph = StateGraph(OverallState)

# Add nodes
graph.add_node("generate_topics", generate_topics)
graph.add_node("generate_joke", generate_joke)
graph.add_node("best_joke", best_joke)

# Add edges
graph.add_edge(START, "generate_topics")
graph.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
graph.add_edge("generate_joke", "best_joke")
graph.add_edge("best_joke", END)

# Compile the graph
app = graph.compile()

# Display the graph
Image(app.get_graph().draw_mermaid_png())
      
Out[1]:
image uv 23

We run it again, but this time with cars instead of animals

	
for state in app.stream({"topic": "cars"}):
    print(state)
Copy
	
{'generate_topics': {'subjects': ['Car Maintenance and Repair', 'Electric and Hybrid Vehicles', 'Automotive Design and Engineering']}}
{'generate_joke': {'jokes': ["Why don't electric cars tell jokes? They're afraid of running out of charge before they get to the punchline!"]}}
{'generate_joke': {'jokes': ["Why don't automotive engineers play hide and seek? Because good luck hiding when you're always making a big noise about torque!"]}}
{'generate_joke': {'jokes': ["Why don't cars ever tell their own jokes? Because they always exhaust themselves during the delivery! Plus, their timing belts are always a little off."]}}
{'best_joke': {'best_selected_joke': "Why don't electric cars tell jokes? They're afraid of running out of charge before they get to the punchline!"}}

Improving the chatbot with toolslink image 32

There are queries our chatbot cannot answer from its own knowledge, so let's integrate a web search tool. Our bot can use this tool to find relevant information and give better answers.

Requirementslink image 33

Before starting, we have to install Tavily, a web search engine that lets us look up information on the web.

pip install -U tavily-python langchain_community

Next, we have to create an API KEY, write it into our .env file, and load it into a variable.

	
import dotenv
import os
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

Chatbot with toolslink image 34

First we create the state and the LLM

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
import json
import os
from IPython.display import Image, display
os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Create the LLM
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)
Copy

Now we define the web search tool using TavilySearchResults

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

Let's test the tool by running a web search

tool.invoke("What was the result of Real Madrid's last match in the Champions League?")
      
Failed to multipart ingest runs: langsmith.utils.LangSmithError: Failed to POST https://eu.api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('403 Client Error: Forbidden for url: https://eu.api.smith.langchain.com/runs/multipart', '{"error":"Forbidden"}\n')
      
Out[3]:
[{'title': 'HIGHLIGHTS | Real Madrid 3-2 Leganés | LaLiga 2024/25 - YouTube',
        'url': 'https://www.youtube.com/watch?v=Np-Kwz4RDpY',
        'content': "20:14 · Go to channel · RONALDO'S LAST MATCH WITH REAL MADRID: THE MOST THRILLING FINAL EVER! ... Champions League 1/4 Final | PES. Football",
        'score': 0.65835214},
       {'title': 'Real Madrid | History | UEFA Champions League',
        'url': 'https://www.uefa.com/uefachampionsleague/history/clubs/50051--real-madrid/',
        'content': '1955/56 P W D L Final 7 5 0 2\nUEFA Champions League [...] 2010/11 P W D L Semi-finals 12 8 3 1\n2009/10 P W D L Round of 16 8 4 2 2\n2000s\n2008/09 P W D L Round of 16 8 4 0 4\n2007/08 P W D L Round of 16 8 3 2 3\n2006/07 P W D L Round of 16 8 4 2 2\n2005/06 P W D L Round of 16 8 3 2 3\n2004/05 P W D L Round of 16 10 6 2 2\n2003/04 P W D L Quarter-finals 10 6 3 1\n2002/03 P W D L Semi-finals 16 7 5 4\n2001/02 P W D L Final 17 12 3 2\n2000/01 P W D L Semi-finals 16 9 2 5\n1990s\n1999/00 P W D L Final 17 10 3 4\n1998/99 P W D L Quarter-finals 8 4 1 3 [...] 1969/70 P W D L Second round 4 2 0 2\n1968/69 P W D L Second round 4 3 0 1\n1967/68 P W D L Semi-finals 8 2 4 2\n1966/67 P W D L Quarter-finals 4 1 0 3\n1965/66 P W D L Final 9 5 2 2\n1964/65 P W D L Quarter-finals 6 4 1 1\n1963/64 P W D L Final 9 7 0 2\n1962/63 P W D L Preliminary round 2 0 1 1\n1961/62 P W D L Final 10 8 0 2\n1960/61 P W D L First round 2 0 1 1\n1950s\n1959/60 P W D L Final 7 6 0 1\n1958/59 P W D L Final 8 5 2 1\n1957/58 P W D L Final 7 5 1 1\n1956/57 P W D L Final 8 6 1 1',
        'score': 0.6030211}]

The results are page summaries that our chatbot can use to answer questions.

We create a list of tools, because our graph expects the tools to be defined as a list.

	
tools_list = [tool]
Copy

Now that we have the tools list, we create llm_with_tools

	
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
Copy
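
To see what bind_tools changes, we can invoke llm_with_tools directly: when the model decides a search is needed, the returned AIMessage carries the request in its tool_calls attribute instead of answering in plain text. A quick sketch, assuming the setup above:

from langchain_core.messages import HumanMessage

response = llm_with_tools.invoke([HumanMessage(content="Who won the last Champions League final?")])

# If the LLM chose to call the search tool, tool_calls holds the requested
# tool name and arguments; otherwise it is an empty list
print(response.tool_calls)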

We define the function that will go in the chatbot node

	
# Define the chatbot function
def chatbot_function(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
Copy

We need a function that runs the tools in tools_list when they are called; we'll add it to the graph as a new node.

Later we'll do this with LangGraph's prebuilt ToolNode, but first we'll build it ourselves to understand how it works.

We're going to implement the BasicToolNode class, which checks the most recent message in the state and calls the tools in tools_list if the message contains tool_calls.

It relies on the tool_calling support of LLMs, which is available in Anthropic, HuggingFace, Google Gemini, OpenAI, and several other LLM providers.

	
from langchain_core.messages import ToolMessage

class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        """
        Initialize the tools

        Args:
            tools (list): The tools to use

        Returns:
            None
        """
        # Initialize the tools
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        """
        Call the node

        Args:
            inputs (dict): The inputs to the node

        Returns:
            dict: The outputs of the node
        """
        # Get the last message
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")

        # Execute the tools
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}

basic_tool_node = BasicToolNode(tools=tools_list)
Copy

We used ToolMessage, which passes the result of running a tool back to the LLM.

ToolMessage contains the result of a tool invocation.

That is, as soon as we have the result of using a tool, we pass it to the LLM so it can process it.

With the basic_tool_node object (an instance of the BasicToolNode class we created), we can now have the LLM execute tools.
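
Before wiring it into the graph, we can exercise basic_tool_node on its own by hand-crafting an AIMessage that simulates a tool request from the LLM. A sketch, assuming that the Tavily tool keeps its default name tavily_search_results_json:

from langchain_core.messages import AIMessage

# A hand-crafted AIMessage that pretends the LLM requested a search;
# "tavily_search_results_json" is assumed to be the tool's default name
fake_ai_message = AIMessage(
    content="",
    tool_calls=[{
        "name": "tavily_search_results_json",
        "args": {"query": "LangGraph"},
        "id": "tool_call_1",
    }],
)

result = basic_tool_node({"messages": [fake_ai_message]})
print(result["messages"][0].name)     # the tool that was executed
print(result["messages"][0].content)  # the JSON-encoded search results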

Now, just as we did when we built the basic chatbot, let's create the graph and add the nodes to it

	
# Create graph
graph_builder = StateGraph(State)

# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools_node", basic_tool_node)
Copy
	
<langgraph.graph.state.StateGraph at 0x14996cd70>

When the LLM receives a message, since it knows which tools it has available, it will decide whether to answer directly or use a tool. So let's create a routing function that sends the graph to the tool node if the LLM decides to use one, and otherwise ends the execution of the graph.

	
def route_tools_function(state: State):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    # Get last message
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")

    # Route based on the last message
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools_node"
    return END
Copy

We add the edges.

We have to add a special edge with add_conditional_edges, which creates a conditional branch. It connects the chatbot_node node to the routing function route_tools_function we created earlier. With it, if route_tools_function returns the string tools_node, the graph routes to the tools_node node, but if it returns END, the graph routes to the END node and execution finishes.

Later we'll replace this with the prebuilt tools_condition method, but for now we implement it ourselves to see how it works (there's a sketch of the prebuilt version after the next code block).

Finally, we add another edge connecting tools_node to chatbot_node, so that when a tool finishes running the graph returns to the LLM node.

	
# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges(
    "chatbot_node",
    route_tools_function,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # you can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    {"tools_node": "tools_node", END: END},
)
graph_builder.add_edge("tools_node", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x14996cd70>
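
For reference, this is roughly what the prebuilt version we mentioned looks like: ToolNode replaces our BasicToolNode and tools_condition replaces route_tools_function. A minimal sketch (note that tools_condition routes, by default, to a node that must be named "tools"):

from langgraph.prebuilt import ToolNode, tools_condition

prebuilt_builder = StateGraph(State)
prebuilt_builder.add_node("chatbot_node", chatbot_function)
prebuilt_builder.add_node("tools", ToolNode(tools=tools_list))  # prebuilt tool-running node

prebuilt_builder.add_edge(START, "chatbot_node")
# tools_condition returns "tools" if the last message has tool_calls, END otherwise
prebuilt_builder.add_conditional_edges("chatbot_node", tools_condition)
prebuilt_builder.add_edge("tools", "chatbot_node")

prebuilt_graph = prebuilt_builder.compile()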

We compile the graph and display it

graph = graph_builder.compile()

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error displaying the graph: {e}")
      
image uv 24

Now we can ask the bot questions that fall outside its training data

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{value['messages'][-1].content}")

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
            break
        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Copy
	
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", "score": 0.8548001}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information", "score": 0.82220376}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid vs Leganes 3-2 | Highlights & All Goals - YouTube", "url": "https://www.youtube.com/watch?v=ngBWsjmeHEk", "content": "Real Madrid secured a dramatic 3-2 victory over Leganes in an intense La Liga showdown on 29 March 2025! ⚽ Watch all the goals and", "score": 0.5157425}, {opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": ""We know what we always have to do: win. We started well, in the opposition half, and we scored a goal. Then we didn't play well for 20 minutes and conceded two goals," said Mbappé. "But we know that if we play well we'll score and in the second half we scored two goals. We won the game and we're very happy. "We worked on [the set piece] a few weeks ago with the staff. I knew I could shoot this way, I saw the space. I asked the others to let me shoot and it worked out well." [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information [...] However, Leganes responded almost immediately as Diego Garcia tapped in a loose ball at the far post to equalise in the following minute before Rodriguez set up Dani Raba to slot past goalkeeper Andriy Lunin in the 41st. Real midfielder Jude Bellingham brought the scores level two minutes after the break, sliding the ball into the net after a rebound off the crossbar. Mbappé then bagged the winner with a brilliant curled free kick in the 76th minute for his second.", "score": 0.50944775}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information [...] However, Leganes responded almost immediately as Diego Garcia tapped in a loose ball at the far post to equalise in the following minute before Rodriguez set up Dani Raba to slot past goalkeeper Andriy Lunin in the 41st. Real midfielder Jude Bellingham brought the scores level two minutes after the break, sliding the ball into the net after a rebound off the crossbar. Mbappé then bagged the winner with a brilliant curled free kick in the 76th minute for his second.", "score": 0.93666285}, {opening_brace}"title": "MBAPPE BRACE Leganes vs. Real Madrid - ESPN FC - YouTube", "url": "https://www.youtube.com/watch?v=0xwUhzx19_4", "content": "MBAPPE BRACE 🔥 Leganes vs. Real Madrid | LALIGA Highlights | ESPN FC ESPN FC 6836 likes 550646 views 29 Mar 2025 Watch these highlights as Kylian Mbappe scores 2 goals to give Real Madrid the 3-2 victory over Leganes in their LALIGA matchup. ✔ Subscribe to ESPN+: http://espnplus.com/soccer/youtube ✔ Subscribe to ESPN FC on YouTube: http://bit.ly/SUBSCRIBEtoESPNFC 790 comments", "score": 0.92857105}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "(VIDEO) All Goals from Real Madrid vs Leganes in La Liga", "url": "https://www.beinsports.com/en-us/soccer/la-liga/articles-video/-video-all-goals-from-real-madrid-vs-leganes-in-la-liga-2025-03-29?ess=", "content": "Real Madrid will host CD Leganes this Saturday, March 29, 2025, at the Santiago Bernabéu in a Matchday 29 clash of LaLiga EA Sports.", "score": 0.95628047}, {opening_brace}"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.9522955}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: Real Madrid faced Leganes in La Liga this weekend and came away with a 3-2 victory at the Santiago Bernabéu. The match was intense, with Kylian Mbappé scoring twice for Real Madrid, including a curled free kick in the 76th minute that proved to be the winner. Leganes managed to take the lead briefly with goals from Diego García and Dani Raba, but Real Madrid leveled through Jude Bellingham before Mbappé's second goal secured the win. This result keeps Real Madrid's title hopes alive, moving them level on points with leaders Barcelona.
User: Which players played the match?
Assistant: The question is too vague and doesn't provide context such as the sport, league, or specific match in question. Could you please provide more details?
User: q
Assistant: Goodbye!

As you can see, I first asked how Real Madrid did in their last La Liga match against Leganés. Since it's a current-events question, the bot decided to use the search tool and obtained the result.

However, I then asked which players played the match and it had no idea what I was talking about; that's because the conversation context isn't kept. So the next thing we'll do is add memory to the agent so it can maintain the context of the conversation.
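
As a preview of what adding memory involves: in LangGraph, conversation state is persisted by compiling the graph with a checkpointer and passing a thread_id when invoking it, so each call continues the same conversation. A minimal sketch, assuming the in-memory MemorySaver checkpointer:

from langgraph.checkpoint.memory import MemorySaver

# Compile the same graph, but persisting state between invocations
memory_graph = graph_builder.compile(checkpointer=MemorySaver())

# The thread_id ties successive calls to the same conversation state
config = {"configurable": {"thread_id": "1"}}
memory_graph.invoke({"messages": [{"role": "user", "content": "hi! I'm Máximo"}]}, config)
result = memory_graph.invoke({"messages": [{"role": "user", "content": "What's my name?"}]}, config)
print(result["messages"][-1].content)  # should now remember the name from the first turn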

Let's write it all together to make it more readable

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from IPython.display import Image, display
import json
import os
os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")

# State
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)

# BasicToolNode class
class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        """
        Initialize the tools

        Args:
            tools (list): The tools to use

        Returns:
            None
        """
        # Initialize the tools
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        """
        Call the node

        Args:
            inputs (dict): The inputs to the node

        Returns:
            dict: The outputs of the node
        """
        # Get the last message
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")

        # Execute the tools
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}

basic_tool_node = BasicToolNode(tools=tools_list)

# Functions
def chatbot_function(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# Route function
def route_tools_function(state: State):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    # Get last message
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")

    # Route based on the last message
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools_node"
    return END

# Start to build the graph
graph_builder = StateGraph(State)

# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools_node", basic_tool_node)

# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges(
    "chatbot_node",
    route_tools_function,
    {
        "tools_node": "tools_node",
        END: END
    },
)
graph_builder.add_edge("tools_node", "chatbot_node")

# Compile the graph
graph = graph_builder.compile()

# Display the graph
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error displaying the graph: {e}")
Copy
	
Error displaying the graph: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

We run the graph

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{value['messages'][-1].content}")

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print(f"{COLOR_GREEN}User: {COLOR_RESET}{user_input}")
            print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
            break
        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Copy
	
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganes: Mbappe, Bellingham inspire comeback to ...", "url": "https://www.nbcsports.com/soccer/news/how-to-watch-real-madrid-vs-leganes-live-stream-link-tv-team-news-prediction", "content": "Real Madrid fought back to beat struggling Leganes 3-2 at the Santiago Bernabeu on Saturday as Kylian Mbappe scored twice and Jude", "score": 0.78749067}, {opening_brace}"title": "Real Madrid vs Leganes 3-2: LaLiga – as it happened - Al Jazeera", "url": "https://www.aljazeera.com/sports/liveblog/2025/3/29/live-real-madrid-vs-leganes-laliga", "content": "Defending champions Real Madrid beat 3-2 Leganes in Spain's LaLiga. The match at Santiago Bernabeu in Madrid, Spain saw Real trail 2-1 at half-", "score": 0.7485182}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid vs Leganés: Spanish La Liga stats & head-to-head - BBC", "url": "https://www.bbc.com/sport/football/live/cm2ndndvdgmt", "content": "Mbappe scores winner as Real Madrid survive Leganes scare Match Summary Sat 29 Mar 2025 ‧ Spanish La Liga Real Madrid 3 , Leganés 2 at Full time Real MadridReal MadridReal Madrid 3 2 LeganésLeganésLeganés Full time FT Half Time Real Madrid 1 , Leganés 2 HT 1-2 Key Events Real Madrid K. Mbappé (32' pen, 76')Penalty 32 minutes, Goal 76 minutes J. Bellingham (47')Goal 47 minutes Leganés Diego García (34')Goal 34 minutes Dani Raba (41')Goal 41 minutes [...] Good nightpublished at 22:14 Greenwich Mean Time 29 March 22:14 GMT 29 March Thanks for joining us, that was a great game. See you again soon for more La Liga action. 13 2 Share close panel Share page Copy link About sharing Postpublished at 22:10 Greenwich Mean Time 29 March 22:10 GMT 29 March FT: Real Madrid 3-2 Leganes [...] Postpublished at 22:02 Greenwich Mean Time 29 March 22:02 GMT 29 March FT: Real Madrid 3-2 Leganes Over to you, Barcelona. Hansi Flick's side face Girona tomorrow (15:15 BST) and have the chance to regain their three point lead if they are victorious. 18 6 Share close panel Share page Copy link About sharing", "score": 0.86413884}, {opening_brace}"title": "Real Madrid 3 - 2 CD Leganés (03/29) - Game Report - 365Scores", "url": "https://www.365scores.com/en-us/football/match/laliga-11/cd-leganes-real-madrid-131-9242-11", "content": "The game between Real Madrid and CD Leganés ended with a score of Real Madrid 3 - 2 CD Leganés. On 365Scores, you can check all the head-to-head results between", "score": 0.8524574}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{opening_brace}"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] | Rayo Vallecano | 35 | 12 | 11 | 12 | -5 | 47 | | Mallorca | 35 | 13 | 8 | 14 | -7 | 47 | | Valencia | 35 | 11 | 12 | 12 | -8 | 45 | | Osasuna | 35 | 10 | 15 | 10 | -8 | 45 | | Real Sociedad | 35 | 12 | 7 | 16 | -9 | 43 | | Getafe | 35 | 10 | 9 | 16 | -3 | 39 | | Espanyol | 35 | 10 | 9 | 16 | -9 | 39 | | Girona | 35 | 10 | 8 | 17 | -12 | 38 | | Sevilla | 35 | 9 | 11 | 15 | -10 | 38 | | Alavés | 35 | 8 | 11 | 16 | -12 | 35 | | Leganés | 35 | 7 | 13 | 15 | -18 | 34 |", "score": 0.93497354}, {opening_brace}"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.921929}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] Mbappé nets twice to maintain Madrid title hopes ------------------------------------------------ Kylian Mbappé struck twice to guide Real Madrid to a 3-2 home win over relegation-threatened Leganes on Saturday. Mar 29, 2025, 10:53 pm - Reuters Match Timeline Real Madrid Leganés KO 32 34 41 HT 47 62 62 62 65 66 72 74 76 81 83 86 89 FT", "score": 0.96213967}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] Match Commentary -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Full Commentary Match Stats RMALEG Possession 70.7% 29.3% Shots on Goal 10 4 Shot Attempts 24 10 Yellow Cards 1 4 Corner Kicks 8 3 Saves 2 6 4-2-3-1 13 Lunin * 20 García * 22 Rüdiger * 35 Asencio * 17 Vázquez 6 Camavinga * 10 Modric 21 Díaz 5 Bellingham * 15 Güler 9 Mbappé [...] -550 o3.5 +105 -1.5 -165 LEGLeganésLeganés (6-9-14) (6-9-14, 27 pts) u3.5 -120 +950 u3.5 -135", "score": 0.9635647}, {"title": "Real Madrid v Leganes | March 29, 2025 | Goal.com US", "url": "https://www.goal.com/en-us/match/real-madrid-vs-leganes/sZTw_SnjyKCcntxKHHQI7", "content": "Latest news, stats and live commentary for the LaLiga's meeting between Real Madrid v Leganes on the March 29, 2025.", "score": 0.95921934}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN Real Madrid -Match ends, Real Madrid 3, Leganes 2.90'+9'Second Half ends, Real Madrid 3, Leganes 2.90'+7'Seydouba Cissé (Leganes) wins a free kick in the defensive half. Freedom from Property StressJohn buys bay area houses | [Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:)[Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:) Get Offer Brand-New 2-Bedroom Senior Apartment in Mountain View: You Won't Believe the Price2-Bedroom Senior Apartment | [Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:)[Sponsored](https://popup.taboola.com/en/?template=colorbox&utm_source=espnnetwork-espn&utm_medium=referral&utm_content=thumbs-feed-01-b:gamepackage-thumbnails-3x1-b%20|%20Card%201:) Read More | Real Madrid | 35 | 23 | 6 | 6 | +35 | 75 | Real Madrid woes continue as Vinícius Júnior injury confirmed ------------------------------------------------------------- Injuries to Vinícius Júnior and Lucas Vázquez added to Real Madrid's problems on Monday. To learn more, visit "Do Not Sell or Share My Personal Information" and "Targeted Advertising" Opt-Out Rights.", "score": 0.98565}, {"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened", "score": 0.98277}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant:
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: [{"title": "Real Madrid 3 - 2 CD Leganés (03/29) - Game Report - 365Scores", "url": "https://www.365scores.com/en-us/football/match/laliga-11/cd-leganes-real-madrid-131-9242-11", "content": "The game between Real Madrid and CD Leganés in the Regular Season of LaLiga, held on Saturday, March 29, 2025 at Estadio Santiago Bernabéu, ended with a score", "score": 0.96686727}, {"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Final Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946/leganes-real-madrid", "content": "Game Information Santiago Bernabéu 8:00 PM, March 29, 2025Coverage: ESPN Deportes/ESPN+ Madrid, Spain Attendance: 73,641 [...] -550 o3.5 +105 -1.5 -165 LEGLeganésLeganés (6-9-14) (6-9-14, 27 pts) u3.5 -120 +950 u3.5 -135 [...] Referees: Pablo González Fuertes", "score": 0.9595845}]
User: How did Real Madrid fare this weekend against Leganes in La Liga?
Assistant: Real Madrid faced CD Leganés in a La Liga match on Saturday, March 29, 2025, at the Estadio Santiago Bernabéu. The match was a thrilling encounter, with Real Madrid coming from behind to secure a 3-2 victory.
Key points from the match include:
- **Scoreline**: Real Madrid 3, Leganés 2.
- **Goals**:
- **Real Madrid**: Kylian Mbappé scored twice, including a penalty, and Jude Bellingham also found the net.
- **Leganés**: Goals were scored by Diego García and Dani Raba.
- **Attendance**: The match was played in front of 73,641 spectators.
- **Key Moments**:
- Real Madrid trailed 2-1 at half-time but mounted a comeback in the second half.
- Mbappé's penalty in the 32nd minute and his second goal in the 76th minute were crucial in turning the game around.
- Bellingham's goal in the 47th minute shortly after the break tied the game.
This victory is significant for Real Madrid as they continue their push for the La Liga title, while Leganés remains in a difficult position, fighting against relegation.
User: Which players played the match?
Assistant: I'm sorry, but I need more information to answer your question. Could you please specify which match you're referring to, including the sport, the teams, or any other relevant details? This will help me provide you with the correct information.
User: q
Assistant: Goodbye!

Once again we see that the problem is that it doesn't remember the context of the conversation.

Adding memory to the chatbot - short-term memory, memory within the threadlink image 35

Our chatbot can now use tools to answer users' questions, but it doesn't remember the context of previous interactions. This limits its ability to hold coherent, multi-turn conversations.

LangGraph solves this problem through persistent checkpoints. If we provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each step of the conversation.

When we invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up right where it left off.

We'll see later that checkpointing is much more powerful than simple chat memory: it lets you save and resume complex states at any time for error recovery, human-in-the-loop workflows, time-travel interactions, and more. But before getting to all that, let's add checkpointing to enable multi-turn conversations.

	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

To start, we create a MemorySaver checkpointer.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

Note

We are using an in-memory checkpointer, that is, state is kept in RAM and is lost once the graph finishes running. That is fine for our case, since this is an example for learning LangGraph. In a production application, you would likely switch to SqliteSaver or PostgresSaver and connect to your own database.
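
For reference, a minimal sketch of what that could look like with SqliteSaver (assuming the separate langgraph-checkpoint-sqlite package is installed; the file name checkpoints.db is arbitrary):

	
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver  # pip install langgraph-checkpoint-sqlite

# Persist checkpoints to a SQLite file instead of RAM, so threads survive restarts
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
memory = SqliteSaver(conn)
# The rest of the post would stay the same: graph_builder.compile(checkpointer=memory)
Copy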

Next, we define the graph.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
Copy

We define the tool

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]
Copy

Next, we create the LLM, bind the tools to it with bind_tools, and add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
os.environ["LANGCHAIN_TRACING_V2"] = "false" # Disable LangSmith tracing
# Create the LLM
login(token=HUGGINGFACE_TOKEN) # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
# Create the chat model
llm = ChatHuggingFace(llm=model)
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)
# Define the chatbot function
def chatbot_function(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

Earlier we built our own BasicToolNode to learn how it works; now we replace it with LangGraph's prebuilt ToolNode and tools_condition, since these do some nice things for us, such as parallel API execution. Apart from that, the rest is the same as before.

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the tools_condition conditional edge to the graph

	
graph_builder.add_conditional_edges(
    "chatbot_node",
    tools_condition,
)
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the edge from the tools node back to the chatbot node

	
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We add the edge from START to the chatbot node

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x1173534d0>

We compile the graph, passing in the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We render it graphically

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")
      
image uv 25

We create a configuration with a thread_id for one user

	
USER1_THREAD_ID = "1"
config_USER1 = {"configurable": {"thread_id": USER1_THREAD_ID}}
Copy
	
user_input = "Hi there! My name is Maximo."
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER1,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi there! My name is Maximo.
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: does not reside in any location,}},
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Determining an individual's tax residency status - IRS", "url": "https://www.irs.gov/individuals/international-taxpayers/determining-an-individuals-tax-residency-status", "content": "If you are not a U.S. citizen, you are considered a nonresident of the United States for U.S. tax purposes unless you meet one of two tests.", "score": 0.1508904}, {"title": "Fix "Location Is Not Available", C:\WINDOWS\system32 ... - YouTube", "url": "https://www.youtube.com/watch?v=QFD-Ptp0SJw", "content": "Fix Error "Location is not available" C:\WINDOWS\system32\config\systemprofile\Desktop is unavailable. If the location is on this PC,", "score": 0.07777658}]
================================== Ai Message ==================================
Invalid Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
{"query": "Arguments["image={"}
	
user_input = "Do you remember my name?"
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER1,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Of course! You mentioned your name is Maximo.

As you can see, we haven't passed in a list with the previous messages; everything is being managed by the checkpointer.
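
If you want to verify what the checkpointer has actually stored, you can peek at the thread's saved state (a quick sketch using the graph and config_USER1 from the cells above; we will look at get_state in more detail later):

	
# Retrieve the saved state for this thread and inspect the accumulated messages
saved = graph.get_state(config_USER1)
print(len(saved.values["messages"]))        # full history: human and AI messages so far
print(saved.values["messages"][0].content)  # "Hi there! My name is Maximo."
Copy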

If we now try with another user, that is, with another thread_id, we'll see that the graph doesn't remember the previous conversation.

	
USER2_THREAD_ID = "2"
config_USER2 = {"configurable": {"thread_id": USER2_THREAD_ID}}
user_input = "Do you remember my name?"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER2,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Do you Remember My Name
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name ---", "score": 0.6609831}, {"title": "Do You Remember My Name? - Novel Updates", "url": "https://www.novelupdates.com/series/do-you-remember-my-name/", "content": "This is a Cute, Tender, and Heartwarming High School Romance. It's not Heavy. It's not so Emotional too, but it does have Emotional moments. It's story Full of", "score": 0.608897}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: do you remember my name
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name --- [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way", "score": 0.7123327}, {"title": "Do you remember my name? - song and lyrics by Alea, Mama Marjas", "url": "https://open.spotify.com/track/3GVBn3rEQLxZl4zJ4dG8UJ", "content": "Listen to Do you remember my name? on Spotify. Song · Alea, Mama Marjas · 2023.", "score": 0.6506676}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: do you remember my name
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Sam Fender - Remember My Name (Official Video) - YouTube", "url": "https://www.youtube.com/watch?v=uaQm48G6IjY", "content": "Sam Fender - Remember My Name (Official Video) SamFenderVEVO 10743 likes 862209 views 14 Feb 2025 Remember My Name is a love song dedicated to my late Grandparents - they were always so fiercely proud of our family so I wrote the song in honour of them, from the perspective of my Grandad who was looking after my Grandma when she was suffering from dementia. This video is a really special one for me and I want to say thank you to everyone involved in making it. I hope you like it ❤️ [...] Oh 11 Walk Avenue Something to behold To them it’s a council house To me it’s a home And a home that you made Where the grandkids could play But it’s never the same without you Humour me Make my day I’ll tell you stories I’ll kiss your face And I’ll pray You’ll remember My name And I’ll pray you remember my name And I’ll pray you remember my name --- [...] If I was wanting of anymore I’d be as greedy as those men on the hill But I remain forlorn In the memory of what once was Chasing a cross in from the wing Our boy’s a whippet, he’s faster than anything Remember the pride that we felt For the two of us made him ourselves Humour me Make my day I’ll tell you stories Kiss your face And I’ll pray You’ll remember My name I’m not sure of what awaits Wasn’t a fan of St Peter and his gates But by god I pray That I’ll see you in some way", "score": 0.7123327}, {"title": "Do you remember my name? - song and lyrics by Alea, Mama Marjas", "url": "https://open.spotify.com/track/3GVBn3rEQLxZl4zJ4dG8UJ", "content": "Listen to Do you remember my name? on Spotify. Song · Alea, Mama Marjas · 2023.", "score": 0.6506676}]
================================== Ai Message ==================================
I'm here to assist you, but I don't actually have the ability to remember names or personal information from previous conversations. How can I assist you today?

Now that our chatbot has search tools and memory, let's repeat the earlier example: I ask it for the result of Real Madrid's latest La Liga match and then which players played.

	
USER3_THREAD_ID = "3"
config_USER3 = {"configurable": {"thread_id": USER3_THREAD_ID}}
user_input = "How did Real Madrid fare this weekend against Leganes in La Liga?"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER3,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
How did Real Madrid fare this weekend against Leganes in La Liga?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes La Liga this weekend
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", "score": 0.8548001}, {"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information", "score": 0.82220376}]
================================== Ai Message ==================================
Real Madrid secured a 3-2 victory against Leganes this weekend in their La Liga match. Kylian Mbappé scored twice, including a penalty, to help his team come from behind and claim the win, keeping Real Madrid's title hopes alive. Leganes, now sitting 18th in the table, continues to face challenges in their fight against relegation.

Now we ask it which players played in the match.

	
user_input = "Which players played the match?"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER3,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Which players played the match?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes match report players lineup
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Real Madrid vs. Leganes final score: La Liga result, updates, stats ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d", "content": "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", "score": 0.88372874}, {"title": "CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream", "content": "Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting", "score": 0.83452857}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes players 2025
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga", "content": "Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively. Fran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses. Eduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online: Site search Managing Madrid main menu Filed under: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Kylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory. Share this story Share All sharing options for: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Full match player ratings below: Andriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”. Antonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.", "score": 0.8832463}, {"title": "Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946", "content": "Match Formations · 13. Lunin · 20. García · 22. Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.", "score": 0.86413884}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: Real Madrid vs Leganes starting lineup
================================= Tool Message =================================
Name: tavily_search_results_json
[{"title": "Starting lineups of Real Madrid and Leganés", "url": "https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025", "content": "Starting lineups of Real Madrid and Leganés\n\n\n\nThe Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé.\n\n\n\n\n\nReal Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up:\n13. Lunin\n17. Lucas V.\n35. Asencio\n22. Rüdiger\n20. Fran García\n15. Arda Güler\n10. Modrić\n6. Camavinga\n5. Bellingham\n21. Brahim\n9. Mbappé.\n\nSubstitutes:\n26. Fran González\n34. Sergio Mestre\n4. Alaba\n7. Vini Jr.\n8. Valverde\n11. Rodrygo\n14. Tchouameni\n16. Endrick\n18. Vallejo\n43. Diego Aguado.\n\nLeganés starting line-up:\n13. Dmitrovic\n5. Tapia\n6. Sergio G.\n7. Óscar\n10. Raba\n11. Cruz\n12. V. Rosier\n17. Neyou\n19. Diego G.\n20. Javi Hernández\n22. Nastasic. [...] Suplentes:\n1. Juan Soriano\n36. Abajas\n2. A. Alti\n3. Jorge Sáenz\n8. Cisse\n9. Miguel\n14. Darko\n18. Duk\n21. R. López\n23. Munir\n24. Chicco\n30. I. Diomande.\n\n\n\nDownload Now\n\nOfficial App Fan\n\nReal Madrid © 2025 All rights reserved", "score": 0.9465623}, {"title": "Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d", "content": "Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,", "score": 0.9224337}]
================================== Ai Message ==================================
The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.

After a lot of searching, it finally finds the answer. So we now have a chatbot with tools and memory.

So far, we have created checkpoints in three different threads. But what goes into each checkpoint? To inspect a graph's state for a given configuration, we can use the get_state(config) method.

	
snapshot = graph.get_state(config_USER3)
snapshot
Copy
	
StateSnapshot(values={'messages': [HumanMessage(content='How did Real Madrid fare this weekend against Leganes in La Liga?', additional_kwargs={}, response_metadata={}, id='a33f5825-1ae4-4717-ad17-8e306f35b027'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': {'query': 'Real Madrid vs Leganes La Liga this weekend'}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 25, 'prompt_tokens': 296, 'total_tokens': 321}, 'model': '', 'finish_reason': 'stop'}, id='run-7905b5ae-5dee-4641-b012-396affde984c-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'Real Madrid vs Leganes La Liga this weekend'}, 'id': '0', 'type': 'tool_call'}]), ToolMessage(content='[{"title": "Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca", "url": "https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html", "content": "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches.\n\nAs for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2.\n\nReal Madrid vs Leganes LIVE - Latest Updates\n\nMatch ends, Real Madrid 3, Leganes 2.\n\nSecond Half ends, Real Madrid 3, Leganes 2.\n\nFoul by Vinícius Júnior (Real Madrid).\n\nSeydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range.\n\nAttempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box.\n\nGoal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot.\n\nPenalty Real Madrid. Arda Güler draws a foul in the penalty area.\n\nPenalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area.\n\nDelay over. They are ready to continue.", "score": 0.8548001}, {"title": "Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN", "url": "https://www.espn.com/soccer/report/_/gameId/704946", "content": "Real Madrid\n\nLeganés\n\nMbappé nets twice to keep Real Madrid's title hopes alive\n\nReal Madrid vs. Leganés - Game Highlights\n\nWatch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025\n\nReal Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference.\n\n\"We have done a tremendous job. 
We leave with our heads held high because we were fighting until the end to score here,\" Leganes striker Garcia said.\n\n\"Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week.\"\n\nGame Information", "score": 0.82220376}]', name='tavily_search_results_json', id='0e02fce3-a6f0-4cce-9217-04c8c3219265', tool_call_id='0', artifact={'query': 'Real Madrid vs Leganes La Liga this weekend', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.marca.com/en/soccer/laliga/r-madrid-leganes/2025/03/29/01_0101_20250329_186_957-live.html', 'title': 'Real Madrid 3-2 Leganes: Goals and highlights - LaLiga 24/25 | Marca', 'content': "While their form has varied throughout the campaign there is no denying Real Madrid are a force at home in LaLiga this season, as they head into Saturday's match having picked up 34 points from 13 matches. As for Leganes they currently sit 18th in the table, though they are level with Alaves for 17th as both teams look to stay in the top flight. [...] The two teams have already played twice this season, with Real Madrid securing a 3-0 win in the reverse league fixture. They also met in the quarter-finals of the Copa del Rey, a game Real won 3-2. Real Madrid vs Leganes LIVE - Latest Updates Match ends, Real Madrid 3, Leganes 2. Second Half ends, Real Madrid 3, Leganes 2. Foul by Vinícius Júnior (Real Madrid). Seydouba Cissé (Leganes) wins a free kick in the defensive half. [...] Goal! Real Madrid 1, Leganes 1. Diego García (Leganes) left footed shot from very close range. Attempt missed. Óscar Rodríguez (Leganes) left footed shot from the centre of the box. Goal! Real Madrid 1, Leganes 0. Kylian Mbappé (Real Madrid) converts the penalty with a right footed shot. Penalty Real Madrid. Arda Güler draws a foul in the penalty area. Penalty conceded by Óscar Rodríguez (Leganes) after a foul in the penalty area. Delay over. They are ready to continue.", 'score': 0.8548001, 'raw_content': None}, {'url': 'https://www.espn.com/soccer/report/_/gameId/704946', 'title': 'Real Madrid 3-2 Leganés (Mar 29, 2025) Game Analysis - ESPN', 'content': 'Real Madrid Leganés Mbappé nets twice to keep Real Madrid's title hopes alive Real Madrid vs. Leganés - Game Highlights Watch the Game Highlights from Real Madrid vs. Leganés, 03/30/2025 Real Madrid's Kylian Mbappé struck twice to help his side come from behind to claim a hard-fought 3-2 home win over relegation-threatened Leganes on Saturday to move the second-placed reigning champions level on points with leaders Barcelona. [...] Leganes pushed for an equaliser but fell to a third consecutive defeat to sit 18th on 27 points, level with Alaves who are one place higher in the safety zone on goal difference. "We have done a tremendous job. We leave with our heads held high because we were fighting until the end to score here," Leganes striker Garcia said. "Ultimately, it was down to the details that they took it. We played a very serious game and now we have to think about next week." Game Information', 'score': 0.82220376, 'raw_content': None}], 'response_time': 1.47}), AIMessage(content="Real Madrid secured a 3-2 victory against Leganes this weekend in their La Liga match. Kylian Mbappé scored twice, including a penalty, to help his team come from behind and claim the win, keeping Real Madrid's title hopes alive. 
Leganes, now sitting 18th in the table, continues to face challenges in their fight against relegation.", additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 92, 'prompt_tokens': 1086, 'total_tokens': 1178}, 'model': '', 'finish_reason': 'stop'}, id='run-22226dda-0475-49b7-882f-fe7bd63ef025-0'), HumanMessage(content='Which players played the match?', additional_kwargs={}, response_metadata={}, id='3e6d9f84-06a2-4148-8f2b-d8ef42c3bea1'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': {'query': 'Real Madrid vs Leganes match report players lineup'}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 1178, 'total_tokens': 1207}, 'model': '', 'finish_reason': 'stop'}, id='run-025d3235-61b9-4add-8e1b-5b1bc795a9d3-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'Real Madrid vs Leganes match report players lineup'}, 'id': '0', 'type': 'tool_call'}]), ToolMessage(content='[{"title": "Real Madrid vs. Leganes final score: La Liga result, updates, stats ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d", "content": "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", "score": 0.88372874}, {"title": "CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream", "content": "Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting", "score": 0.83452857}]', name='tavily_search_results_json', id='2dbc1324-2c20-406a-b2d7-a3d6fc609537', tool_call_id='0', artifact={'query': 'Real Madrid vs Leganes match report players lineup', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-score-result-updates-stats-la-liga/8ecf730cfcb9b6c5f6693a0d', 'title': 'Real Madrid vs. Leganes final score: La Liga result, updates, stats ...', 'content': "Real Madrid came through a topsy-turvy game with Leganes to claim a 3-2 victory and put pressure back on Barcelona in La Liga's title race. 
Kylian Mbappe scored in each half either side of a Jude Bellingham goal — his first in the league since January 3 — to seal all three points for the champions after Leganes had come from behind to lead at the interval. Rodrygo won back the ball in the Leganes half and earned a free-kick on the edge of the box, and Mbappe found the bottom corner after rolling the ball short to Fran Garcia to work an angle. Leganes lead Real Madrid at the Bernabeu for the very first time! *Real Madrid starting lineup (4-3-3, right to left):* Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B.", 'score': 0.88372874, 'raw_content': None}, {'url': 'https://www.managingmadrid.com/2025/3/29/24396638/real-madrid-vs-leganes-2025-la-liga-live-online-stream', 'title': 'CONFIRMED lineups: Real Madrid vs Leganés, 2025 La Liga', 'content': 'Real Madrid starting XI: Lunin, Vazquez, Rudiger, Asencio, Fran Garcia, Camavinga, Guler, Modric, Bellingham, Brahim, Mbappe. Leganes starting', 'score': 0.83452857, 'raw_content': None}], 'response_time': 3.36}), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': {'query': 'Real Madrid vs Leganes players 2025'}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 1630, 'total_tokens': 1661}, 'model': '', 'finish_reason': 'stop'}, id='run-d6b4c4ff-0923-4082-9dea-7c51b2a4fc60-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'Real Madrid vs Leganes players 2025'}, 'id': '0', 'type': 'tool_call'}]), ToolMessage(content='[{"title": "Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga", "url": "https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga", "content": "Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively.\n\nFran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses.\n\nEduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online:\n\nSite search\n\nManaging Madrid main menu\n\nFiled under:\n\nPlayer Ratings: Real Madrid 3-2 Leganes; 2025 La Liga\n\nKylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory.\n\nShare this story\n\nShare\nAll sharing options for:\nPlayer Ratings: Real Madrid 3-2 Leganes; 2025 La Liga\n\nFull match player ratings below:\n\nAndriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”.\n\nAntonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.", "score": 0.8832463}, {"title": "Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN", "url": "https://www.espn.com/soccer/match/_/gameId/704946", "content": "Match Formations · 13. Lunin · 20. García · 22. 
Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.", "score": 0.86413884}]', name='tavily_search_results_json', id='ac15dd6e-09b1-4075-834e-d869f4079285', tool_call_id='0', artifact={'query': 'Real Madrid vs Leganes players 2025', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.managingmadrid.com/2025/3/30/24396688/player-ratings-real-madrid-3-2-leganes-2025-la-liga', 'title': 'Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga', 'content': 'Raúl Asencio—7: Applauded by the Bernabeu on multiple occasions with good sweeping up defensively. Fran García—6: Better on the offensive end, getting into the final third and playing some dagger crosses. Eduardo Camavinga—6: Modric and Camavinga struggled to deal with Leganes counter attacks and Diego, playing as a #10 for Leganes, got the better of both of them. [...] Follow Managing Madrid online: Site search Managing Madrid main menu Filed under: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Kylian Mbappe scores a brace to help Madrid secure a nervy 3-2 victory. Share this story Share All sharing options for: Player Ratings: Real Madrid 3-2 Leganes; 2025 La Liga Full match player ratings below: Andriy Lunin—7: Not at fault for the goals, was left with the opposition taking a shot from near the six yard box. [...] Lucas Vázquez—4: Exposed in transition and lacking the speed and athleticism to cover the gaps he leaves when venturing forward. Needs a more “pessimistic” attitude when the ball is on the opposite flank, occupying better spots in ““rest defense”. Antonio Rudiger—5: Several unnecessary long distance shots to hurt Madrid’s rhythm and reinforce Leganes game plan. Playing with too many matches in his legs and it’s beginning to show.', 'score': 0.8832463, 'raw_content': None}, {'url': 'https://www.espn.com/soccer/match/_/gameId/704946', 'title': 'Real Madrid vs. Leganés (Mar 29, 2025) Live Score - ESPN', 'content': 'Match Formations · 13. Lunin · 20. García · 22. Rüdiger · 35. Asencio · 17. Vázquez · 5. Bellingham · 10. Modric · 6. Camavinga.', 'score': 0.86413884, 'raw_content': None}], 'response_time': 0.89}), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': {'query': 'Real Madrid vs Leganes starting lineup'}, 'name': 'tavily_search_results_json', 'description': None}, 'id': '0', 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 2212, 'total_tokens': 2239}, 'model': '', 'finish_reason': 'stop'}, id='run-68867df1-2012-47ac-9f01-42b071ef3a1f-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'Real Madrid vs Leganes starting lineup'}, 'id': '0', 'type': 'tool_call'}]), ToolMessage(content='[{"title": "Starting lineups of Real Madrid and Leganés", "url": "https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025", "content": "Starting lineups of Real Madrid and Leganés\n\n\n\nThe Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé.\n\n\n\n\n\nReal Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up:\n13. 
Lunin\n17. Lucas V.\n35. Asencio\n22. Rüdiger\n20. Fran García\n15. Arda Güler\n10. Modrić\n6. Camavinga\n5. Bellingham\n21. Brahim\n9. Mbappé.\n\nSubstitutes:\n26. Fran González\n34. Sergio Mestre\n4. Alaba\n7. Vini Jr.\n8. Valverde\n11. Rodrygo\n14. Tchouameni\n16. Endrick\n18. Vallejo\n43. Diego Aguado.\n\nLeganés starting line-up:\n13. Dmitrovic\n5. Tapia\n6. Sergio G.\n7. Óscar\n10. Raba\n11. Cruz\n12. V. Rosier\n17. Neyou\n19. Diego G.\n20. Javi Hernández\n22. Nastasic. [...] Suplentes:\n1. Juan Soriano\n36. Abajas\n2. A. Alti\n3. Jorge Sáenz\n8. Cisse\n9. Miguel\n14. Darko\n18. Duk\n21. R. López\n23. Munir\n24. Chicco\n30. I. Diomande.\n\n\n\nDownload Now\n\nOfficial App Fan\n\nReal Madrid © 2025 All rights reserved", "score": 0.9465623}, {"title": "Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...", "url": "https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d", "content": "Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,", "score": 0.9224337}]', name='tavily_search_results_json', id='46721f2b-2df2-4da2-831a-ce94f6b4ff8f', tool_call_id='0', artifact={'query': 'Real Madrid vs Leganes starting lineup', 'follow_up_questions': None, 'answer': None, 'images': [], 'results': [{'url': 'https://www.realmadrid.com/en-US/news/football/first-team/latest-news/once-inicial-del-real-madrid-contra-el-leganes-29-03-2025', 'title': 'Starting lineups of Real Madrid and Leganés', 'content': 'Starting lineups of Real Madrid and Leganés The Whites’ team is: Lunin, Lucas V., Asencio, Rüdiger, Fran García, Arda Güler, Modrić, Camavinga, Bellingham, Brahim and Mbappé. Real Madrid have named their starting line-up for the game against Leganés on matchday 29 of LaLiga, which will be played at the Santiago Bernabéu (9 pm CET). [...] Real Madrid starting line-up: 13. Lunin 17. Lucas V. 35. Asencio 22. Rüdiger 20. Fran García 15. Arda Güler 10. Modrić 6. Camavinga 5. Bellingham 21. Brahim 9. Mbappé. Substitutes: 26. Fran González 34. Sergio Mestre 4. Alaba 7. Vini Jr. 8. Valverde 11. Rodrygo 14. Tchouameni 16. Endrick 18. Vallejo 43. Diego Aguado. Leganés starting line-up: 13. Dmitrovic 5. Tapia 6. Sergio G. 7. Óscar 10. Raba 11. Cruz 12. V. Rosier 17. Neyou 19. Diego G. 20. Javi Hernández 22. Nastasic. [...] Suplentes: 1. Juan Soriano 36. Abajas 2. A. Alti 3. Jorge Sáenz 8. Cisse 9. Miguel 14. Darko 18. Duk 21. R. López 23. Munir 24. Chicco 30. I. Diomande. Download Now Official App Fan Real Madrid © 2025 All rights reserved', 'score': 0.9465623, 'raw_content': None}, {'url': 'https://www.sportingnews.com/us/soccer/news/real-madrid-leganes-lineups-starting-11-team-news-injuries/aac757d10cc7b9a084995b4d', 'title': 'Real Madrid vs. Leganes lineups, confirmed starting 11, team news ...', 'content': 'Real Madrid starting lineup (4-3-3, right to left): Lunin (GK) — Vazquez, Rudiger, Asencio, Garcia — Modric, Bellingham, Camavinga — B. Diaz,', 'score': 0.9224337, 'raw_content': None}], 'response_time': 2.3}), AIMessage(content='The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. 
Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 98, 'prompt_tokens': 2954, 'total_tokens': 3052}, 'model': '', 'finish_reason': 'stop'}, id='run-0bd921c6-1d94-4a4c-9d9c-d255d301e2d5-0')]}, next=(), config={'configurable': {'thread_id': '3', 'checkpoint_ns': '', 'checkpoint_id': '1f010a50-49f2-6904-800c-ec8d67fe5b92'}}, metadata={'source': 'loop', 'writes': {'chatbot_node': {'messages': [AIMessage(content='The starting lineup for Real Madrid in their match against Leganés was: Lunin (GK), Vázquez, Rüdiger, Asencio, Fran García, Modric, Bellingham, Camavinga, Brahim, Arda Güler, and Mbappé. Notable players like Vini Jr., Rodrygo, and Valverde were on the bench.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 98, 'prompt_tokens': 2954, 'total_tokens': 3052}, 'model': '', 'finish_reason': 'stop'}, id='run-0bd921c6-1d94-4a4c-9d9c-d255d301e2d5-0')]}}, 'thread_id': '3', 'step': 12, 'parents': {}}, created_at='2025-04-03T16:02:18.167222+00:00', parent_config={'configurable': {'thread_id': '3', 'checkpoint_ns': '', 'checkpoint_id': '1f010a50-1feb-6534-800b-079c102aaa71'}}, tasks=())

If we want to see the next node to be processed, we can use the next attribute

	
snapshot.next
Copy
	
()

Since the graph has finished, next is empty. If you retrieve the state from inside a graph invocation, next indicates which node will execute next.

The snapshot above contains the current state values, the corresponding configuration, and the next node to process. In our case, the graph has reached the END state, which is why next is empty.
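
Checkpointers also keep the full history of a thread, not just the latest state. As a sketch of how you could inspect it, graph.get_state_history(config) yields one StateSnapshot per checkpoint, most recent first; this history is what powers the error recovery and time travel mentioned earlier:

	
# Walk over every checkpoint saved for this thread, newest first
for past in graph.get_state_history(config_USER3):
    num_messages = len(past.values.get("messages", []))
    print(f"{num_messages} messages, next node: {past.next}")
Copy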

Let's rewrite all the code so that it's more readable.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
import json
import os

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")

# State
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [tool]

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)

# Tool node
tool_node = ToolNode(tools=tools_list)

# Functions
def chatbot_function(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

# Start to build the graph
graph_builder = StateGraph(State)

# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)

# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")

# Compile the graph
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# Display the graph
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")
Copy
	
Error visualizing the graph: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`
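The failure above just means the remote mermaid.ink renderer was unreachable; the graph itself compiled fine. Following the error message's own suggestion, a hedged workaround is to render the diagram locally. A minimal sketch, assuming pyppeteer is installed (MermaidDrawMethod comes from langchain_core):

```python
from langchain_core.runnables.graph import MermaidDrawMethod

# Render the Mermaid diagram locally in a headless browser instead of
# calling the remote mermaid.ink API (requires: pip install pyppeteer)
png_bytes = graph.get_graph().draw_mermaid_png(
    draw_method=MermaidDrawMethod.PYPPETEER,
)
display(Image(png_bytes))
```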
	
USER1_THREAD_ID = "1"
config_USER1 = {"configurable": {"thread_id": USER1_THREAD_ID}}

user_input = "Hi there! My name is Maximo."

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER1,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi there! My name is Maximo.
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
	
user_input = "Do you remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config_USER1,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Do you remember my name?
================================== Ai Message ==================================
Yes, I remember your name! You mentioned it's Maximo. It's nice to chat with you, Maximo. How can I assist you today?

Congratulations! Our chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpoint system. This opens the door to more natural, contextual interactions. LangGraph's checkpointing even handles complex graph states.
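To check that this memory really is scoped per thread, we can talk to the same graph under a different thread_id. A quick sketch, reusing the graph defined above (USER2_THREAD_ID is just a made-up second thread):

```python
USER2_THREAD_ID = "2"
config_USER2 = {"configurable": {"thread_id": USER2_THREAD_ID}}

# A new thread_id starts from an empty checkpoint, so the model
# should not remember the name given in thread "1"
events = graph.stream(
    {"messages": [{"role": "user", "content": "Do you remember my name?"}]},
    config_USER2,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
```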

Morelink image 36

Chatbot with a summary messagelink image 37

If we are going to manage the conversation context to avoid spending too many tokens, one thing we can do to improve the conversation is to add a message with a summary of the conversation. This can be useful for the previous example, where we filtered the state so aggressively that the LLM was left without enough context.

      from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import RemoveMessage, trim_messages, SystemMessage, HumanMessage, AIMessage
      from langgraph.checkpoint.memory import MemorySaver
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      memory_saver = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
          summary: str
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Print functions
      def print_message(m):
          if isinstance(m, HumanMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[HumanMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, SystemMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[SystemMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, AIMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[AIMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          elif isinstance(m, RemoveMessage):
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[RemoveMessage]: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              message_content = m.content
              message_lines = message_content.split("\n")
              for i, line in enumerate(message_lines):
                  if i == 0:
                      print(f"\t\t[{type(m)}]: {line}")
                  else:
                      print(f"\t\t{line}")
      
      def print_state_summary(state: State):
          if state.get("summary"):
              summary_lines = state["summary"].split("\n")
              for i, line in enumerate(summary_lines):
                  if i == 0:
                      print(f"\t\tSummary of the conversation: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              print("\t\tNo summary of the conversation")
      
      def print_summary(summary: str):
          if summary:
              summary_lines = summary.split("\n")
              for i, line in enumerate(summary_lines):
                  if i == 0:
                      print(f"\t\tSummary of the conversation: {line}")
                  else:
                      print(f"\t\t{line}")
          else:
              print("\t\tNo summary of the conversation")
      
      # Nodes
      def filter_messages(state: State):
          print("\t--- 1 messages (input to filter_messages) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # Delete all but the 2 most recent messages if there are more than 2
          if len(state["messages"]) > 2:
              delete_messages = [RemoveMessage(id=m.id) for m in state["messages"][:-2]]
          else:
              delete_messages = []
      
          print("\t--- 1 messages (output of filter_messages) ---")
          for m in delete_messages:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          return {"messages": delete_messages}
      
      def trim_messages_node(state: State):
          # print the messages received from filter_messages_node
          print("\n\n\t--- 2 messages (input to trim_messages) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # Trim the messages based on the specified parameters
          trimmed_messages = trim_messages(
              state["messages"],
              max_tokens=100,       # Maximum tokens allowed in the trimmed list
              strategy="last",     # Keep the latest messages
              token_counter=llm,   # Use the LLM's tokenizer to count tokens
              allow_partial=True,  # Allow cutting messages mid-way if needed
          )
      
          # Identify the messages that must be removed
          # This is crucial: determine which messages are in 'state["messages"]' but not in 'trimmed_messages'
          original_ids = {m.id for m in state["messages"]}
          trimmed_ids = {m.id for m in trimmed_messages}
          ids_to_remove = original_ids - trimmed_ids
          
          # Create a RemoveMessage for each message that must be removed
          messages_to_remove = [RemoveMessage(id=msg_id) for msg_id in ids_to_remove]
      
          # Print the result of the trimming
          print("\t--- 2 messages (output of trim_messages - after trimming) ---")
          if trimmed_messages:
              for m in trimmed_messages:
                  print_message(m)
          else:
              print("[Empty list - No messages after trimming]")
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          return {"messages": messages_to_remove}
      
      def chat_model_node(state: State):
          # Get summary of the conversation if it exists
          summary = state.get("summary", "")
      
          print("\n\n\t--- 3 messages (input to chat_model_node) ---")
          for m in state["messages"]:
              print_message(m)
          print_state_summary(state)
          print("\t------------------------------------------------")
      
          # If there is a summary, add it to the system message
          if summary:
              # Add the summary to the system message
              system_message = f"Summary of the conversation earlier: {summary}"
      
              # Add the system message to the messages at the beginning
              messages = [SystemMessage(content=system_message)] + state["messages"]
          
          # If there is no summary, just return the messages
          else:
              messages = state["messages"]
          print(f"\t--- 3 messages (input to chat_model_node) ---")
          for m in messages:
              print_message(m)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Invoke the LLM with the messages
          response = llm.invoke(messages)
      
          print("\t--- 3 messages (output of chat_model_node) ---")
          print_message(response)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      def summarize_conversation(state: State):
          # Get summary of the conversation if it exists
          summary = state.get("summary", "")
      
          print("\n\n\t--- 4 messages (input to summarize_conversation) ---")
          for m in state["messages"]:
              print_message(m)
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # If there is a summary, add it to the system message
          if summary:
              summary_message = (
                  f"This is a summary of the conversation to date: {summary}\n\n"
                  "Extend the summary by taking into account the new messages above."
              )
          
          # If there is no summary, create a new one
          else:
              summary_message = "Create a summary of the conversation above."
          print(f"\t--- 4 summary message ---")
          summary_lines = summary_message.split("\n")
          for i, line in enumerate(summary_lines):
              if i == 0:
                  print(f"\t\t{line}")
              else:
                  print(f"\t\t{line}")
          print_summary(summary)
          print("\t------------------------------------------------")
      
          # Add prompt to the messages
          messages = state["messages"] + [HumanMessage(summary_message)]
      
          print("\t--- 4 messages (input to summarize_conversation with summary) ---")
          for m in messages:
              print_message(m)
          print("\t------------------------------------------------")
      
          # Invoke the LLM with the messages
          response = llm.invoke(messages)
      
          print("\t--- 4 messages (output of summarize_conversation) ---")
          print_message(response)
          print("\t------------------------------------------------")
      
          # Return the summary message in the correct state format
          return {"summary": response.content}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("filter_messages_node", filter_messages)
      graph_builder.add_node("trim_messages_node", trim_messages_node)
      graph_builder.add_node("chatbot_node", chat_model_node)
      graph_builder.add_node("summarize_conversation_node", summarize_conversation)
      
      # Connect nodes
      graph_builder.add_edge(START, "filter_messages_node")
      graph_builder.add_edge("filter_messages_node", "trim_messages_node")
      graph_builder.add_edge("trim_messages_node", "chatbot_node")
      graph_builder.add_edge("chatbot_node", "summarize_conversation_node")
      graph_builder.add_edge("summarize_conversation_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory_saver)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
Graph diagram: START → filter_messages_node → trim_messages_node → chatbot_node → summarize_conversation_node → END

As we can see, we have:

  • Message filtering function: if the state holds more than 2 messages, every message except the last 2 is removed.
  • Message trimming function: messages that exceed the 100-token budget are removed (see the sketch after this list).
  • Chatbot function: runs the model on the filtered and trimmed messages; in addition, if a summary exists, it is prepended as a system message.
  • Summary function: creates a summary of the conversation.
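As a rough standalone illustration of how trim_messages decides what survives the token budget, here is a minimal sketch; the message texts are invented, and llm is the chat model defined above, used only as a token counter:

```python
from langchain_core.messages import HumanMessage, AIMessage, trim_messages

demo_history = [
    HumanMessage(content="Hello"),
    AIMessage(content="Hi! How can I help you today?"),
    HumanMessage(content="Tell me about LangGraph"),
]

# Keep only the most recent messages that fit in ~100 tokens
trimmed = trim_messages(
    demo_history,
    max_tokens=100,
    strategy="last",      # keep the latest messages
    token_counter=llm,    # count tokens with the model's tokenizer
    allow_partial=True,   # a message may be cut mid-way if needed
)
for m in trimmed:
    print(type(m).__name__, "->", m.content)
```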

We create a function to print the graph's messages.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str, config: dict):
    # Initialize a flag to track if an assistant response has been printed
    assistant_response_printed = False

    # Print the user's input immediately
    print(f"\n\n{COLOR_GREEN}User: {COLOR_RESET}{user_input}")

    # Create the user's message with the HumanMessage class
    user_message = HumanMessage(content=user_input)

    # Stream events from the graph execution
    for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
        # With stream_mode="values", each event is the full state dict,
        # e.g. {'messages': [...], 'summary': '...'}
        # Iterate through each state key and its value
        for node_name, value in event.items():
            # The 'messages' key holds the conversation, ending with the assistant's reply
            if node_name == 'messages':
                # Ensure the output format is as expected (list of messages)
                if isinstance(value, list):
                    # Get the messages from the event
                    messages = value
                    # Ensure 'messages' is a non-empty list
                    if isinstance(messages, list) and messages:
                        # Get the last message (presumably the assistant's reply)
                        last_message = messages[-1]
                        # Ensure the message is an instance of AIMessage
                        if isinstance(last_message, AIMessage):
                            # Ensure the message has content to display
                            if hasattr(last_message, 'content'):
                                # Print the assistant's message content
                                print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{last_message.content}")
                                assistant_response_printed = True  # Mark that we've printed the response

    # Fallback if no assistant response was printed (e.g., graph error before chatbot_node)
    if not assistant_response_printed:
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}[No response generated or error occurred]")
Copy

Now we run the graph.

	
USER1_THREAD_ID = "1"
config_USER1 = {"configurable": {"thread_id": USER1_THREAD_ID}}

while True:
    user_input = input(f"\n\nUser: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print(f"{COLOR_GREEN}User: {COLOR_RESET}Exiting...")
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
        break
    events = stream_graph_updates(user_input, config_USER1)
Copy
	
User: Hello
--- 1 messages (input to filter_messages) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 1 messages (output of filter_messages) ---
No summary of the conversation
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: Hello
No summary of the conversation
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
No summary of the conversation
------------------------------------------------
Assistant: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
--- 4 messages (input to summarize_conversation) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
No summary of the conversation
------------------------------------------------
--- 4 summary message ---
Create a summary of the conversation above.
No summary of the conversation
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: Create a summary of the conversation above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: I am studying about langgraph, do you know it?
--- 1 messages (input to filter_messages) ---
[HumanMessage]: Hello
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 1 messages (output of filter_messages) ---
[RemoveMessage]:
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[SystemMessage]: Summary of the conversation earlier: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
--- 4 messages (input to summarize_conversation) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 4 summary message ---
This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
Summary of the conversation: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
Extend the summary by taking into account the new messages above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
User: I would like to know about using langsmith with huggingface llms, the integration of huggingface
--- 1 messages (input to filter_messages) ---
[AIMessage]: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
[HumanMessage]: I am studying about langgraph, do you know it?
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 1 messages (output of filter_messages) ---
[RemoveMessage]:
[RemoveMessage]:
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (input to trim_messages) ---
[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 2 messages (output of trim_messages - after trimming) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (input to chat_model_node) ---
[SystemMessage]: Summary of the conversation earlier: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
--- 3 messages (output of chat_model_node) ---
[AIMessage]: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
Summary of the conversation: Sure! Here's an extended summary of the conversation:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: Certainly! LangSmith and Hugging Face are both powerful tools in the domain of natural language processing (NLP), and integrating them can significantly enhance your workflow. Here’s a detailed look at how you can use LangSmith with Hugging Face models:
### What is LangSmith?
LangSmith is a platform designed to help developers and researchers build, test, and deploy natural language applications. It offers features such as:
- **Model Management**: Manage and version control your language models.
- **Data Labeling**: Annotate and label data for training and evaluation.
- **Model Evaluation**: Evaluate and compare different models and versions.
- **API Integration**: Integrate with various NLP tools and platforms.
### What is Hugging Face?
Hugging Face is a leading company in the NLP domain, known for its transformers library. Hugging Face provides a wide array of pre-trained models and tools for NLP tasks, including:
- **Pre-trained Models**: Access to a vast library of pre-trained models.
- **Transformers Library**: A powerful library for working with transformer models.
- **Hugging Face Hub**: A platform for sharing and accessing models, datasets, and metrics.
### Integrating LangSmith with Hugging Face Models
#### Step-by-Step Guide
1. **Install Required Libraries**
Ensure you have the necessary libraries installed:
```bash
pip install transformers datasets langsmith
```
2. **Load a Hugging Face Model**
Use the `transformers` library to load a pre-trained model:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
3. **Integrate with LangSmith**
- **Initialize LangSmith Client**:
```python
from langsmith import Client
client = Client()
```
- **Create or Load a Dataset**:
```python
from datasets import Dataset
# Example dataset
data = {
"text": ["This is a positive review.", "This is a negative review."],
"label": [1, 0]
}
dataset = Dataset.from_dict(data)
# Save dataset to LangSmith
dataset_id = client.create_dataset(name="my_dataset", data=dataset)
```
- **Evaluate the Model**:
```python
from langsmith import EvaluationResult
def evaluate_model(dataset, tokenizer, model):
results = []
for example in dataset:
inputs = tokenizer(example["text"], return_tensors="pt")
outputs = model(**inputs)
predicted_label = outputs.logits.argmax().item()
result = EvaluationResult(
example_id=example["id"],
predicted_label=predicted_label,
ground_truth_label=example["label"]
)
results.append(result)
return results
evaluation_results = evaluate_model(dataset, tokenizer, model)
```
- **Upload Evaluation Results to LangSmith**:
```python
client.log_results(dataset_id, evaluation_results)
```
4. **Visualize and Analyze Results**
- Use LangSmith’s web interface to visualize the evaluation results.
- Compare different models and versions to identify the best performing model.
### Additional Tips
- **Model Tuning**: Use Hugging Face’s `Trainer` class to fine-tune models on your datasets and then evaluate them using LangSmith.
- **Custom Metrics**: Define custom evaluation metrics and use them to assess model performance.
- **Collaboration**: Share datasets and models with team members using the Hugging Face Hub and LangSmith.
By following these steps, you can effectively integrate Hugging Face models with LangSmith, leveraging the strengths of both platforms to build and evaluate robust NLP applications.
If you have any specific questions or need further assistance, feel free to ask!
--- 4 messages (input to summarize_conversation) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
[AIMessage]: [Repeats the full LangSmith + Hugging Face integration answer shown above.]
Summary of the conversation: [Same extended summary as shown above.]
------------------------------------------------
--- 4 summary message ---
This is a summary of the conversation to date: [Same extended summary as shown above.]
Extend the summary by taking into account the new messages above.
Summary of the conversation: [Same extended summary as shown above.]
------------------------------------------------
--- 4 messages (input to summarize_conversation with summary) ---
[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
[AIMessage]: [Repeats the full LangSmith + Hugging Face integration answer shown above.]
[HumanMessage]: This is a summary of the conversation to date: [Same extended summary as shown above.]
Extend the summary by taking into account the new messages above.
------------------------------------------------
--- 4 messages (output of summarize_conversation) ---
[AIMessage]: Sure! Here's the extended summary of the conversation to date:
---
**User:** Hello
**Qwen:** Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
**User:** I am studying about langgraph, do you know it?
**Qwen:** Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models. LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
1. Visualizing Model Architecture
2. Comparing Models
3. Understanding Training Processes
4. Identifying Bottlenecks
If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
**User:** This is a summary of the conversation to date: Sure! Here's a summary of the conversation above:
User: Hello
Qwen: Hello! How can I assist you today? Whether you need help with information, a specific task, or just want to chat, I'm here to help.
User: Create a summary of the conversation above.
Qwen: [Provided the summary you are now reading.]
Is there anything else you need assistance with?
**Qwen:** [Extended the summary you are now reading.]
**User:** Is there anything else you need assistance with?
**Qwen:** If you have any more questions about LangGraph or any other topic, feel free to ask! I'm here to help with any information or assistance you need.
---
Is there anything else you need assistance with?
------------------------------------------------
Assistant: [Repeats the full LangSmith + Hugging Face integration answer shown above.]
User: Exiting...
Assistant: Goodbye!

If we go to the end of the conversation, we can see

--- 2 messages (input to trim_messages) ---
    	[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
    	
    	LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
    	
    	1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
    	2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
    	3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
    	4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
    	
    	If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
    	[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
    	Summary of the conversation: [Same extended summary as shown above.]
    ------------------------------------------------

We can see that only the following messages are kept in the state:

[AIMessage]: Yes, I can help with information about LangGraph! LangGraph is a language model graph that represents the relationships and connections between different language models and their components. It can be used to visualize and understand the architecture, training processes, and performance characteristics of various language models.
    	
    	LangGraph can be particularly useful for researchers and developers who are working on natural language processing (NLP) tasks. It helps in:
    	
    	1. **Visualizing Model Architecture**: Provides a clear and detailed view of how different components of a language model are connected.
    	2. **Comparing Models**: Allows for easy comparison of different language models in terms of their structure, training data, and performance metrics.
    	3. **Understanding Training Processes**: Helps in understanding the training dynamics and the flow of data through the model.
    	4. **Identifying Bottlenecks**: Can help in identifying potential bottlenecks or areas for improvement in the model.
    	
    	If you have specific questions or aspects of LangGraph you're interested in, feel free to let me know!
    	[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface

That is, the filtering function keeps only the last 2 messages.

But afterwards we can see

--- 2 messages (output of trim_messages - after trimming) ---
    	[HumanMessage]: I would like to know about using langsmith with huggingface llms, the integration of huggingface
    	Summary of the conversation: [Same extended summary as shown above.]
    ------------------------------------------------

That is, the trimming function removes the assistant's message because it exceeds 100 tokens.

Even though we delete messages, so the LLM no longer has them as context, we can still hold a conversation thanks to the running summary of the conversation that we keep generating.
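
For reference, here is a minimal sketch of this kind of trimming with langchain_core's trim_messages (the exact configuration is an assumption; count_tokens_approximately is available in recent langchain-core versions):

import sqlite3  # not needed here; see the SQLite section below
from langchain_core.messages import AIMessage, HumanMessage, trim_messages
from langchain_core.messages.utils import count_tokens_approximately

messages = [
    AIMessage(content="Yes, I can help with information about LangGraph! ..."),  # long answer
    HumanMessage(content="I would like to know about using langsmith with huggingface llms"),
]

# Keep only the most recent messages that fit within ~100 tokens;
# a long AIMessage like the one above gets dropped, as in the logs.
trimmed = trim_messages(
    messages,
    max_tokens=100,
    strategy="last",
    token_counter=count_tokens_approximately,
)
print(trimmed)

With strategy="last" and a 100-token budget, the long assistant message is dropped and only the short human message survives, which matches the logs above.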

Saving state in SQLite

We've seen how to save the graph's state in memory, but as soon as the process ends, that memory is lost, so let's look at how to save it in SQLite.

First we need to install the SQLite package for LangGraph.

pip install langgraph-checkpoint-sqlite

We import the sqlite3 and langgraph-checkpoint-sqlite libraries. Earlier, when we saved the state in memory, we used MemorySaver; now we'll use SqliteSaver to save the state in a SQLite database.

	
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver
import os

# Create the directory if it doesn't exist
os.makedirs("state_db", exist_ok=True)

db_path = "state_db/langgraph_sqlite.db"
conn = sqlite3.connect(db_path, check_same_thread=False)
memory = SqliteSaver(conn)
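
As a side note, recent versions of langgraph-checkpoint-sqlite also expose a from_conn_string helper that manages the connection for you. A sketch (the exact behavior depends on the installed version, and graph_builder is the builder defined below):

from langgraph.checkpoint.sqlite import SqliteSaver

# from_conn_string returns a context manager that opens and closes the SQLite connection
with SqliteSaver.from_conn_string("state_db/langgraph_sqlite.db") as memory:
    graph = graph_builder.compile(checkpointer=memory)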

Let's create a basic chatbot so we don't add complexity beyond the functionality we want to test.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chat_model_node(state: State):
          # Return the LLM's response in the correct state format
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
(Graph diagram: START → chatbot_node → END)

We define the function to print the graph's messages.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str, config: dict):
    # Flag to track whether an assistant response has been printed
    assistant_response_printed = False

    # Print the user's input immediately
    print(f"\n\n{COLOR_GREEN}User: {COLOR_RESET}{user_input}")

    # Create the user's message with the HumanMessage class
    user_message = HumanMessage(content=user_input)

    # Stream events from the graph execution; with stream_mode="values",
    # each event is the full state, e.g. {'messages': [...]}
    for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
        # Iterate through the state keys and their values
        for key, value in event.items():
            # The 'messages' key holds the conversation, ending with the assistant's reply
            if key == 'messages':
                # Ensure the value is a non-empty list of messages
                if isinstance(value, list) and value:
                    # Get the last message (presumably the assistant's reply)
                    last_message = value[-1]
                    # Only print messages generated by the assistant
                    if isinstance(last_message, AIMessage) and hasattr(last_message, 'content'):
                        # Print the assistant's message content
                        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{last_message.content}")
                        assistant_response_printed = True  # Mark that we've printed the response

    # Fallback if no assistant response was printed (e.g., a graph error before the chatbot node)
    if not assistant_response_printed:
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}[No response generated or error occurred]")

We run the graph

	
USER1_THREAD_ID = "USER1"
config_USER1 = {"configurable": {"thread_id": USER1_THREAD_ID}}

while True:
    user_input = input(f"\n\nUser: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print(f"{COLOR_GREEN}User: {COLOR_RESET}Exiting...")
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
        break
    events = stream_graph_updates(user_input, config_USER1)
	
User: Hello, my name is Máximo
Assistant: Hello Máximo! It's a pleasure to meet you. How can I assist you today?
User: Exiting...
Assistant: Goodbye!

As you can see, I've only told it my name.

Now we restart the notebook so that everything stored in the notebook's RAM is cleared, and we run the previous code again.

We recreate the SQLite memory with SqliteSaver

	
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver
import os

# Create the directory if it doesn't exist
os.makedirs("state_db", exist_ok=True)

db_path = "state_db/langgraph_sqlite.db"
conn = sqlite3.connect(db_path, check_same_thread=False)
memory = SqliteSaver(conn)

We rebuild the graph

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chat_model_node(state: State):
          # Return the LLM's response in the correct state format
          return {"messages": [llm.invoke(state["messages"])]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
(Graph diagram: START → chatbot_node → END)

We define the function to print the graph's messages again.

	
# Colors for the terminal
COLOR_GREEN = "\033[32m"
COLOR_YELLOW = "\033[33m"
COLOR_RESET = "\033[0m"

def stream_graph_updates(user_input: str, config: dict):
    # Flag to track whether an assistant response has been printed
    assistant_response_printed = False

    # Print the user's input immediately
    print(f"\n\n{COLOR_GREEN}User: {COLOR_RESET}{user_input}")

    # Create the user's message with the HumanMessage class
    user_message = HumanMessage(content=user_input)

    # Stream events from the graph execution; with stream_mode="values",
    # each event is the full state, e.g. {'messages': [...]}
    for event in graph.stream({"messages": [user_message]}, config, stream_mode="values"):
        # Iterate through the state keys and their values
        for key, value in event.items():
            # The 'messages' key holds the conversation, ending with the assistant's reply
            if key == 'messages':
                # Ensure the value is a non-empty list of messages
                if isinstance(value, list) and value:
                    # Get the last message (presumably the assistant's reply)
                    last_message = value[-1]
                    # Only print messages generated by the assistant
                    if isinstance(last_message, AIMessage) and hasattr(last_message, 'content'):
                        # Print the assistant's message content
                        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}{last_message.content}")
                        assistant_response_printed = True  # Mark that we've printed the response

    # Fallback if no assistant response was printed (e.g., a graph error before the chatbot node)
    if not assistant_response_printed:
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}[No response generated or error occurred]")

And we run it again

	
USER1_THREAD_ID = "USER1"
config_USER1 = {"configurable": {"thread_id": USER1_THREAD_ID}}

while True:
    user_input = input(f"\n\nUser: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print(f"{COLOR_GREEN}User: {COLOR_RESET}Exiting...")
        print(f"{COLOR_YELLOW}Assistant: {COLOR_RESET}Goodbye!")
        break
    events = stream_graph_updates(user_input, config_USER1)
	
User: What's my name?
Assistant: Your name is Máximo. It's nice to know and use your name as we chat. How can I assist you today, Máximo?
User: Exiting...
Assistant: Goodbye!

As you can see, we were able to recover the graph's state from the SQLite database.
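
If you want to inspect what was restored, the compiled graph exposes get_state. A quick sketch using the thread ID from above:

# Inspect the checkpoint restored from SQLite for this thread
config_USER1 = {"configurable": {"thread_id": "USER1"}}
snapshot = graph.get_state(config_USER1)
for message in snapshot.values["messages"]:
    print(f"{type(message).__name__}: {message.content}")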

Long-term memory, memory across threads

Memory is a cognitive function that allows people to store, retrieve, and use information in order to understand their present and future in light of their past.

There are several types of long-term memory that can be used in AI applications.

Introduction to the LangGraph Memory Store

LangGraph provides the LangGraph Memory Store, a way to save and retrieve long-term memory across different threads. This way, in one conversation a user can mention that they like something, and in another conversation the chatbot can retrieve that information to generate a more personalized response.

It is a class for persistent key-value stores.

When storing objects in the memory store, three things are needed:

  • A namespace for the object, given as a tuple
  • A unique key
  • The object's value

Let's look at an example

	
import uuid
from langgraph.store.memory import InMemoryStore

in_memory_store = InMemoryStore()

# Namespace for the memory to save
user_id = "1"
namespace_for_memory = (user_id, "memories")

# Save a memory to namespace as key and value
key = str(uuid.uuid4())

# The value needs to be a dictionary
value = {"food_preference": "I like pizza"}

# Save the memory
in_memory_store.put(namespace_for_memory, key, value)

The in_memory_store object we created has several methods; one of them is search, which lets us search by namespace.

	
# Search
memories = in_memory_store.search(namespace_for_memory)
type(memories), len(memories)
	
(list, 1)

It's a list with a single value, which makes sense because we've only stored one value, so let's look at it.

	
value = memories[0]
value.dict()
	
{'namespace': ['1', 'memories'],
'key': '70006131-948a-4d7a-bdce-78351c44fc4d',
'value': {'food_preference': 'I like pizza'},
'created_at': '2025-05-11T07:24:31.462465+00:00',
'updated_at': '2025-05-11T07:24:31.462468+00:00',
'score': None}

We can see its key and its value

	
# The key, value
memories[0].key, memories[0].value
	
('70006131-948a-4d7a-bdce-78351c44fc4d', {'food_preference': 'I like pizza'})

We can also use the get method to retrieve an object from the memory using its namespace and key

	
# Get the memory by namespace and key
memory = in_memory_store.get(namespace_for_memory, key)
memory.dict()
	
{'namespace': ['1', 'memories'],
'key': '70006131-948a-4d7a-bdce-78351c44fc4d',
'value': {'food_preference': 'I like pizza'},
'created_at': '2025-05-11T07:24:31.462465+00:00',
'updated_at': '2025-05-11T07:24:31.462468+00:00'}
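
The put method overwrites: calling it again with the same namespace and key replaces the stored value. A small sketch (the delete method is assumed from the store's base API):

# Overwrite the stored value: put() with the same namespace and key replaces it
in_memory_store.put(namespace_for_memory, key, {"food_preference": "I like sushi"})
print(in_memory_store.get(namespace_for_memory, key).value)   # {'food_preference': 'I like sushi'}

# Remove the entry entirely
in_memory_store.delete(namespace_for_memory, key)
print(in_memory_store.search(namespace_for_memory))           # []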

Just as we used checkpointers for short-term memory, for long-term memory we'll use the LangGraph Store.

Chatbot with long-term memory

We create a basic chatbot with both long-term and short-term memory.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant that can answer questions and help with tasks.
      You have access to a long-term memory that you can use to answer questions and help with tasks.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      CREATE_MEMORY_INSTRUCTION = """You are a helpful assistant that gets information from the user to personalize your responses.
      
      # INFORMATION FROM THE USER:
      {memory}
      
      # INSTRUCTIONS:
      1. Carefully review the chat history
      2. Identify new information from the user, such as:
         - Personal details (name, location)
         - Preferences (likes, dislikes)
         - Interests and hobbies
         - Past experiences
         - Goals or future plans
      3. Combine any new information with the existing memory
      4. Format the memory as a clear, bulleted list
      5. If new information conflicts with existing memory, keep the most recent version
      
      Remember: Only include factual information directly stated by the user. Do not make assumptions or inferences.
      
      Based on the chat history below, please update the user information:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          key = "user_memory"
          existing_memory = store.get(namespace, key)
      
          # Extract the actual memory content if it exists and add a prefix
          if existing_memory:
              # Value is a dictionary with a memory key
              existing_memory_content = existing_memory.value.get('memory')
          else:
              existing_memory_content = "No existing memory found."
          if isinstance(existing_memory_content, str):
              print(f"\t[Call model debug] Existing memory: {existing_memory_content}")
          else:
              print(f"\t[Call model debug] Existing memory: {existing_memory_content.content}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=existing_memory_content)
          
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
              
          # Extract the memory
          if existing_memory:
              existing_memory_content = existing_memory.value.get('memory')
          else:
              existing_memory_content = "No existing memory found."
          if isinstance(existing_memory_content, str):
              print(f"\t[Write memory debug] Existing memory: {existing_memory_content}")
          else:
              print(f"\t[Write memory debug] Existing memory: {existing_memory_content.content}")
      
          # Format the memory in the system prompt
          system_msg = CREATE_MEMORY_INSTRUCTION.format(memory=existing_memory_content)
          new_memory = llm.invoke([SystemMessage(content=system_msg)]+state['messages'])
          if isinstance(new_memory, str):
              print(f"\n\t[Write memory debug] New memory: {new_memory}")
          else:
              print(f"\n\t[Write memory debug] New memory: {new_memory.content}")
      
          # Overwrite the existing memory in the store 
          key = "user_memory"
      
          # Write value as a dictionary with a memory key
          store.put(namespace, key, {"memory": new_memory.content})
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
(Graph diagram: START → call_model → write_memory → END)

Let's try it

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "1", "user_id": "1"}}

# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]

# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Existing memory: No existing memory found.
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I assist you today?
[Write memory debug] Existing memory: No existing memory found.
[Write memory debug] New memory:
Here's the updated information I have about you:
- Name: Maximo
	
# User input
input_messages = [HumanMessage(content="I like to bike around San Francisco")]

# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
	
================================ Human Message =================================
I like to bike around San Francisco
[Call model debug] Existing memory:
Here's the updated information I have about you:
- Name: Maximo
================================== Ai Message ==================================
That sounds like a great way to explore the city! San Francisco has some fantastic biking routes. Are there any specific areas or routes you enjoy biking the most, or are you looking for some new recommendations?
[Write memory debug] Existing memory:
Here's the updated information I have about you:
- Name: Maximo
[Write memory debug] New memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco

If we retrieve the long-term memory

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.dict()
	
{'namespace': ['memory', '1'],
'key': 'user_memory',
'value': {'memory': " Here's the updated information about you: - Name: Maximo - Location: San Francisco - Interest: Biking around San Francisco"},
'created_at': '2025-05-11T09:41:26.739207+00:00',
'updated_at': '2025-05-11T09:41:26.739211+00:00'}

We get its value

	
print(existing_memory.value.get('memory'))
	
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco

Now we can start a new conversation thread, but with the same long-term memory. We'll see that the chatbot remembers the user's information.

	
# We supply a user ID for across-thread memory as well as a new thread ID
config = {"configurable": {"thread_id": "2", "user_id": "1"}}

# User input
input_messages = [HumanMessage(content="Hi! Where would you recommend that I go biking?")]

# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
	
================================ Human Message =================================
Hi! Where would you recommend that I go biking?
[Call model debug] Existing memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco
================================== Ai Message ==================================
Hi there! Given my interest in biking around San Francisco, I'd recommend a few great routes:
1. **Golden Gate Park**: This is a fantastic place to bike, with wide paths that are separated from vehicle traffic. You can start at the eastern end near Stow Lake and bike all the way to the western end at Ocean Beach. There are plenty of scenic spots to stop and enjoy along the way.
2. **The Embarcadero**: This route follows the waterfront from Fisherman’s Wharf to the Bay Bridge. It’s relatively flat and offers beautiful views of the San Francisco Bay and the city skyline. You can also stop by the Ferry Building for some delicious food and drinks.
3. **Presidio**: The Presidio is a large park with numerous trails that offer diverse landscapes, from forests to coastal bluffs. The Crissy Field area is especially popular for its views of the Golden Gate Bridge.
4. **Golden Gate Bridge**: Riding across the Golden Gate Bridge is a must-do experience. You can start from the San Francisco side, bike across the bridge, and then continue into Marin County for a longer ride with stunning views.
5. **Lombard Street**: While not a long ride, biking down the famous crooked section of Lombard Street can be a fun and memorable experience. Just be prepared for the steep hill on the way back up!
Each of these routes offers a unique experience, so you can choose based on your interests and the type of scenery you enjoy. Happy biking!
[Write memory debug] Existing memory:
Here's the updated information about you:
- Name: Maximo
- Location: San Francisco
- Interest: Biking around San Francisco
[Write memory debug] New memory: 😊
Let me know if you have any other questions or if you need more recommendations!

I opened a new conversation thread and asked it where I could go biking; it remembered that I'd told it I like biking around San Francisco and replied with places in San Francisco I could visit.
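
To see that this memory is keyed by user rather than by thread, here is a quick sketch (the user_id "2" is hypothetical): a different user_id maps to a different namespace, so it starts with no memory.

# A different user gets a different ("memory", user_id) namespace, so nothing is shared
config_user2 = {"configurable": {"thread_id": "3", "user_id": "2"}}
input_messages = [HumanMessage(content="Hi! Where would you recommend that I go biking?")]
for chunk in graph.stream({"messages": input_messages}, config_user2, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
# Expected debug output: [Call model debug] Existing memory: No existing memory found.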

Chatbot with a user profile

Note: We'll build this section with Sonnet 3.7, since the Hugging Face integration doesn't offer the with_structured_output functionality, which produces structured output with a defined schema.

We can create type definitions so the LLM generates output with a structure we define.

Let's create a type definition for the user profile.

	
from typing import TypedDict, List

class UserProfile(TypedDict):
    """User profile schema with typed fields"""
    user_name: str        # The user's preferred name
    interests: List[str]  # A list of the user's interests
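
As a quick illustration of what with_structured_output returns (a sketch; it assumes the Anthropic key used in the next block is already configured):

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
llm_profile = llm.with_structured_output(UserProfile)

# The model's reply is parsed into a dict matching the UserProfile schema
profile = llm_profile.invoke("Hi, I'm Maximo and I love biking around San Francisco")
print(profile)  # e.g. {'user_name': 'Maximo', 'interests': ['biking around San Francisco']}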

Now we rebuild the graph, this time with the UserProfile type.

We'll use with_structured_output so the LLM generates output with a structure we define; here that structure is the UserProfile class declared above.

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      from pydantic import BaseModel, Field
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_structured_output = llm.with_structured_output(UserProfile)
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant with memory that provides information about the user. 
      If you have memory for this user, use it to personalize your responses.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      CREATE_MEMORY_INSTRUCTION = """Create or update a user profile memory based on the user's chat history. 
      This will be saved for long-term memory. If there is an existing memory, simply update it. 
      Here is the existing memory (it may be empty): {memory}"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"
              )
          else:
              formatted_memory = None
          print(f"\t[Call model debug] Existing memory: {formatted_memory}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=formatted_memory)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"
              )
          else:
              formatted_memory = None
          print(f"\t[Write memory debug] Existing memory: {formatted_memory}")
              
          # Format the existing memory in the instruction
          system_msg = CREATE_MEMORY_INSTRUCTION.format(memory=formatted_memory)
      
          # Invoke the model to produce structured output that matches the schema
          new_memory = llm_with_structured_output.invoke([SystemMessage(content=system_msg)]+state['messages'])
          print(f"\t[Write memory debug] New memory: {new_memory}")
      
          # Overwrite the existing user profile memory
          key = "user_memory"
          store.put(namespace, key, new_memory)
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 30

We run the graph.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo and I like to bike around Madrid and eat salads.")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo and I like to bike around Madrid and eat salads.
[Call model debug] Existing memory: None
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. I see you enjoy biking around Madrid and eating salads - those are great healthy habits! Madrid has some beautiful areas to explore by bike, and the city has been improving its cycling infrastructure in recent years.
Is there anything specific about Madrid's cycling routes or perhaps some good places to find delicious salads in the city that you'd like to know more about? I'd be happy to help with any questions you might have.
[Write memory debug] Existing memory: None
[Write memory debug] New memory: {'user_name': 'Maximo', 'interests': ['biking', 'Madrid', 'salads']}

As we can see, the LLM generated output with the structure we defined.
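
To see what with_structured_output does on its own, here is a minimal sketch outside the graph (the prompt is an illustrative example; since UserProfile is a TypedDict, the result comes back as a plain dict):

	
# Sketch: calling the structured-output model directly, outside the graph
profile = llm_with_structured_output.invoke("Hi, I'm Maximo and I like to bike around Madrid.")
print(profile)  # e.g. {'user_name': 'Maximo', 'interests': ['biking']}
Copy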

Let's look at how the long-term memory was saved.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.value
Copy
	
{'user_name': 'Maximo', 'interests': ['biking', 'Madrid', 'salads']}

Morelink image 43

Updating structured schemas with Trustcalllink image 44

In the previous example, we created user profiles with structured data.

In reality, what happens under the hood is that the user profile is regenerated on every interaction. This wastes tokens and can lose important information from the user's profile.

To solve this we will use the TrustCall library, an open-source library for updating JSON schemas. When it has to update a JSON document, it does so incrementally: it does not discard the previous document, but adds the new fields to it.

Let's create an example conversation to see how it works.

	
from langchain_core.messages import HumanMessage, AIMessage

# Conversation
conversation = [HumanMessage(content="Hi, I'm Maximo."),
                AIMessage(content="Nice to meet you, Maximo."),
                HumanMessage(content="I really like playing soccer.")]
Copy

We create a structured schema and an LLM model.

	
from pydantic import BaseModel, Field
from typing import List

# Schema
class UserProfile(BaseModel):
    """User profile schema with typed fields"""
    user_name: str = Field(description="The user's preferred name")
    interests: List[str] = Field(description="A list of the user's interests")

from langchain_anthropic import ChatAnthropic
import os
import dotenv

dotenv.load_dotenv()
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

# Create the LLM model
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
Copy

We use trustcall's create_extractor function to create a structured-data extractor.

	
from trustcall import create_extractor

# Create the extractor
trustcall_extractor = create_extractor(
    llm,
    tools=[UserProfile],
    tool_choice="UserProfile"
)
Copy

As you can see, create_extractor is given an LLM, which will be used as the extraction engine.

We extract the structured data.

	
from langchain_core.messages import SystemMessage
# Instruction
system_msg = "Extract the user profile from the following conversation"
# Invoke the extractor
result = trustcall_extractor.invoke({"messages": [SystemMessage(content=system_msg)]+conversation})
result
Copy
	
{'messages': [AIMessage(content=[{'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'input': {'user_name': 'Maximo', 'interests': ['soccer']}, 'name': 'UserProfile', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01TEB3FeDKLAeHJtbKo5noyW', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 497, 'output_tokens': 56}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run-8a15289b-fd39-4a2d-878a-fa6feaa805c5-0', tool_calls=[{'name': 'UserProfile', 'args': {'user_name': 'Maximo', 'interests': ['soccer']}, 'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'type': 'tool_call'}], usage_metadata={'input_tokens': 497, 'output_tokens': 56, 'total_tokens': 553, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})],
'responses': [UserProfile(user_name='Maximo', interests=['soccer'])],
'response_metadata': [{'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY'}],
'attempts': 1}

Let's look at the messages that were generated to extract the structured data.

	
for m in result["messages"]:
    m.pretty_print()
Copy
	
================================== Ai Message ==================================
[{'id': 'toolu_01WfgbD1fG3rJYAXGrjqjfVY', 'input': {'user_name': 'Maximo', 'interests': ['soccer']}, 'name': 'UserProfile', 'type': 'tool_use'}]
Tool Calls:
UserProfile (toolu_01WfgbD1fG3rJYAXGrjqjfVY)
Call ID: toolu_01WfgbD1fG3rJYAXGrjqjfVY
Args:
user_name: Maximo
interests: ['soccer']

The UserProfile schema has been populated with the extracted data.

	
schema = result["responses"]
schema
Copy
	
[UserProfile(user_name='Maximo', interests=['soccer'])]

As we can see, the schema is a list; let's check the type of its only element.

	
type(schema[0])
Copy
	
__main__.UserProfile

We can convert it to a dictionary with model_dump.

	
schema[0].model_dump()
Copy
	
{'user_name': 'Maximo', 'interests': ['soccer']}

Since we gave trustcall_extractor an LLM, we can ask it for what we want extracted.

Let's simulate that the conversation continues, to see how the schema is updated.

	
# Update the conversation
updated_conversation = [HumanMessage(content="Hi, I'm Maximo."),
                        AIMessage(content="Nice to meet you, Maximo."),
                        HumanMessage(content="I really like playing soccer."),
                        AIMessage(content="It is great to play soccer! Where do you go after playing soccer?"),
                        HumanMessage(content="I really like to go to a bakery after playing soccer."),]
Copy

We ask the model to update the schema (a JSON doc) through the trustcall library.

	
# Update the instruction
system_msg = """Update the memory (JSON doc) to incorporate new information from the following conversation"""
# Invoke the extractor with the updated instruction and existing profile with the corresponding tool name (UserProfile)
result = trustcall_extractor.invoke({"messages": [SystemMessage(content=system_msg)]+updated_conversation},
                                    {"existing": {"UserProfile": schema[0].model_dump()}})
result
Copy
	
{'messages': [AIMessage(content=[{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'input': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'name': 'UserProfile', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01RYUJvCdzL4b8kBYKo4BtQf', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 538, 'output_tokens': 60}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run-06994472-5ba0-46cc-a512-5fcacce283fc-0', tool_calls=[{'name': 'UserProfile', 'args': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'type': 'tool_call'}], usage_metadata={'input_tokens': 538, 'output_tokens': 60, 'total_tokens': 598, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})],
'responses': [UserProfile(user_name='Maximo', interests=['soccer', 'bakeries'])],
'response_metadata': [{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb'}],
'attempts': 1}

Let's look at the messages that were generated to update the schema.

	
for m in result["messages"]:
    m.pretty_print()
Copy
	
================================== Ai Message ==================================
[{'id': 'toolu_01K1zTh33kXDAw1h18Yh2HBb', 'input': {'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}, 'name': 'UserProfile', 'type': 'tool_use'}]
Tool Calls:
UserProfile (toolu_01K1zTh33kXDAw1h18Yh2HBb)
Call ID: toolu_01K1zTh33kXDAw1h18Yh2HBb
Args:
user_name: Maximo
interests: ['soccer', 'bakeries']

We look at the updated schema.

	
updated_schema = result["responses"][0]
updated_schema.model_dump()
Copy
	
{'user_name': 'Maximo', 'interests': ['soccer', 'bakeries']}

Chatbot with a user profile updated with Trustcalllink image 45

We rebuild the graph that updates the user profile, but now with the trustcall library.

from pydantic import BaseModel, Field
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, MessagesState, START, END
      from langgraph.graph.message import add_messages
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      import os
      import dotenv
      from trustcall import create_extractor
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Schema 
      class UserProfile(BaseModel):
          """ Profile of a user """
          user_name: str = Field(description="The user's preferred name")
          user_location: str = Field(description="The user's location")
          interests: list = Field(description="A list of the user's interests")
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      
      # Create the extractor
      trustcall_extractor = create_extractor(
          llm,
          tools=[UserProfile],
          tool_choice="UserProfile", # Enforces use of the UserProfile tool
      )
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful assistant with memory that provides information about the user. 
      If you have memory for this user, use it to personalize your responses.
      Here is the memory (it may be empty): {memory}"""
      
      # Create new memory from the chat history and any existing memory
      TRUSTCALL_INSTRUCTION = """Create or update the memory (JSON doc) to incorporate information from the following conversation:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
      
          # Format the memories for the system prompt
          if existing_memory and existing_memory.value:
              memory_dict = existing_memory.value
              formatted_memory = (
                  f"Name: {memory_dict.get('user_name', 'Unknown')}\n"
                  f"Location: {memory_dict.get('user_location', 'Unknown')}\n"
                  f"Interests: {', '.join(memory_dict.get('interests', []))}"      
              )
          else:
              formatted_memory = None
          print(f"\t[Call model debug] Existing memory: {formatted_memory}")
      
          # Format the memory in the system prompt
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=formatted_memory)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve existing memory from the store
          namespace = ("memory", user_id)
          existing_memory = store.get(namespace, "user_memory")
              
          # Get the profile as the value from the list, and convert it to a JSON doc
          existing_profile = {"UserProfile": existing_memory.value} if existing_memory else None
          print(f"\t[Write memory debug] Existing profile: {existing_profile}")
          
          # Invoke the extractor
          result = trustcall_extractor.invoke({"messages": [SystemMessage(content=TRUSTCALL_INSTRUCTION)]+state["messages"], "existing": existing_profile})
          
          # Get the updated profile as a JSON object
          updated_profile = result["responses"][0].model_dump()
          print(f"\t[Write memory debug] Updated profile: {updated_profile}")
      
          # Save the updated profile
          key = "user_memory"
          store.put(namespace, key, updated_profile)
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 31

We start the conversation.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Existing memory: None
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. How can I help you today? Whether you have questions, need information, or just want to chat, I'm here to assist you. Is there something specific you'd like to talk about?
[Write memory debug] Existing profile: None
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': '<UNKNOWN>', 'interests': []}

As we can see, it knows neither the user's location nor their interests. Let's update the user profile.

	
# User input
input_messages = [HumanMessage(content="I like to play soccer and I live in Madrid")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I like to play soccer and I live in Madrid
[Call model debug] Existing memory: Name: Maximo
Location: <UNKNOWN>
Interests:
================================== Ai Message ==================================
Hello Maximo! It's great to learn that you live in Madrid and enjoy playing soccer. Madrid is a fantastic city with a rich soccer culture, being home to world-famous clubs like Real Madrid and Atlético Madrid.
Soccer is truly a way of life in Spain, so you're in a perfect location for your interest. Do you support any particular team in Madrid? Or perhaps you enjoy playing soccer recreationally in the city's parks and facilities?
Is there anything specific about Madrid or soccer you'd like to discuss further?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': '<UNKNOWN>', 'interests': []}}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']}

It updated the profile with the user's location and interests.

Let's look at the updated memory.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.dict()
Copy
	
{'namespace': ['memory', '1'],
'key': 'user_memory',
'value': {'user_name': 'Maximo',
'user_location': 'Madrid',
'interests': ['soccer']},
'created_at': '2025-05-12T17:35:03.583258+00:00',
'updated_at': '2025-05-12T17:35:03.583259+00:00'}

We see the schema with the updated user profile.

	
# The user profile saved as a JSON object
existing_memory.value
Copy
	
{'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']}

Let's add a new user interest.

	
# User input
input_messages = [HumanMessage(content="I also like to play basketball")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I also like to play basketball
[Call model debug] Existing memory: Name: Maximo
Location: Madrid
Interests: soccer
================================== Ai Message ==================================
That's great to know, Maximo! It's nice that you enjoy both soccer and basketball. Basketball is also quite popular in Spain, with Liga ACB being one of the strongest basketball leagues in Europe.
In Madrid, you have the opportunity to follow Real Madrid's basketball section, which is one of the most successful basketball teams in Europe. The city offers plenty of courts and facilities where you can play basketball too.
Do you play basketball casually with friends, or are you part of any local leagues in Madrid? And how do you balance your time between soccer and basketball?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer']}}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']}

We look at the updated memory again.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memory", user_id)
existing_memory = long_term_memory.get(namespace, "user_memory")
existing_memory.value
Copy
	
{'user_name': 'Maximo',
'user_location': 'Madrid',
'interests': ['soccer', 'basketball']}

It added the user's new interest correctly.

With this long-term memory saved, we can start a new thread and the chatbot will have access to our updated profile.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "2", "user_id": "1"}}
# User input
input_messages = [HumanMessage(content="What soccer players do you recommend for me?")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What soccer players do you recommend for me?
[Call model debug] Existing memory: Name: Maximo
Location: Madrid
Interests: soccer, basketball
================================== Ai Message ==================================
Based on your interest in soccer, I can recommend some players who might appeal to you. Since you're from Madrid, you might already follow Real Madrid or Atlético Madrid players, but here are some recommendations:
From La Liga:
- Vinícius Júnior and Jude Bellingham (Real Madrid)
- Antoine Griezmann (Atlético Madrid)
- Robert Lewandowski (Barcelona)
- Lamine Yamal (Barcelona's young talent)
International stars:
- Kylian Mbappé
- Erling Haaland
- Mohamed Salah
- Kevin De Bruyne
You might also enjoy watching players with creative playing styles since you're interested in basketball as well, which is a sport that values creativity and flair - players like Rodrigo De Paul or João Félix.
Is there a particular league or playing style you prefer in soccer?
[Write memory debug] Existing profile: {'UserProfile': {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']}}
[Write memory debug] Updated profile: {'user_name': 'Maximo', 'user_location': 'Madrid', 'interests': ['soccer', 'basketball']}

Since it knows I live in Madrid, it first suggested soccer players from the Spanish league, and then players from other leagues.

Chatbot with user document collections updated with Trustcalllink image 46

Another approach is, instead of saving the user profile in a single document, to save a collection of documents; this way we are not tied to a single closed schema.
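
The store API itself is simple. Here is a small sketch that uses InMemoryStore directly, with invented namespace and keys, before wiring it into a graph:

	
from langgraph.store.memory import InMemoryStore

# A namespace is a tuple; each memory is saved under its own key
store = InMemoryStore()
namespace = ("memories", "user-1")
store.put(namespace, "mem-1", {"content": "User likes cycling"})
store.put(namespace, "mem-2", {"content": "User lives in Madrid"})

# search returns all the items stored under the namespace
for item in store.search(namespace):
    print(item.key, item.value)
Copy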

Let's see how to do it.

from langgraph.graph import StateGraph, MessagesState, START, END
      from langchain_anthropic import ChatAnthropic
      from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
      from langchain_core.messages import merge_message_runs
      from langgraph.checkpoint.memory import MemorySaver # Short-term memory
      from langgraph.store.base import BaseStore          # Long-term memory
      from langchain_core.runnables.config import RunnableConfig
      from langgraph.store.memory import InMemoryStore
      from IPython.display import Image, display
      from trustcall import create_extractor
      from pydantic import BaseModel, Field
      import uuid
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Memory schema
      class Memory(BaseModel):
          """A memory item representing a piece of information learned about the user."""
          content: str = Field(description="The main content of the memory. For example: User expressed interest in learning about French.")
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      
      # Create the extractor
      trustcall_extractor = create_extractor(
          llm,
          tools=[Memory],
          tool_choice="Memory",
          # This allows the extractor to insert new memories
          enable_inserts=True,
      )
      
      # Chatbot instruction
      MODEL_SYSTEM_MESSAGE = """You are a helpful chatbot. You are designed to be a companion to a user. 
      You have a long term memory which keeps track of information you learn about the user over time.
      Current Memory (may include updated memories from this conversation): 
      {memory}"""
      
      # Create new memory from the chat history and any existing memory
      TRUSTCALL_INSTRUCTION = """Reflect on the following interaction. 
      Use the provided tools to retain any necessary memories about the user. 
      Use parallel tool calling to handle updates and insertions simultaneously:"""
      
      # Nodes
      def call_model(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Load memory from the store and use it to personalize the chatbot's response."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Retrieve memory from the store
          namespace = ("memories", user_id)
          memories = store.search(namespace)
          print(f"\t[Call model debug] Memories: {memories}")
      
          # Format the memories for the system prompt
          info = "\n".join(f"- {mem.value['content']}" for mem in memories)
          system_msg = MODEL_SYSTEM_MESSAGE.format(memory=info)
      
          # Respond using memory as well as the chat history
          response = llm.invoke([SystemMessage(content=system_msg)]+state["messages"])
      
          return {"messages": response}
      
      def write_memory(state: MessagesState, config: RunnableConfig, store: BaseStore):
      
          """Reflect on the chat history and save a memory to the store."""
          
          # Get the user ID from the config
          user_id = config["configurable"]["user_id"]
      
          # Define the namespace for the memories
          namespace = ("memories", user_id)
      
          # Retrieve the most recent memories for context
          existing_items = store.search(namespace)
      
          # Format the existing memories for the Trustcall extractor
          tool_name = "Memory"
          existing_memories = ([(existing_item.key, tool_name, existing_item.value)
                                for existing_item in existing_items]
                                if existing_items
                                else None
                              )
          print(f"\t[Write memory debug] Existing memories: {existing_memories}")
      
          # Merge the chat history and the instruction
          updated_messages=list(merge_message_runs(messages=[SystemMessage(content=TRUSTCALL_INSTRUCTION)] + state["messages"]))
      
          # Invoke the extractor
          result = trustcall_extractor.invoke({"messages": updated_messages, 
                                              "existing": existing_memories})
      
          # Save the memories from Trustcall to the store
          for r, rmeta in zip(result["responses"], result["response_metadata"]):
              store.put(namespace,
                        rmeta.get("json_doc_id", str(uuid.uuid4())),
                        r.model_dump(mode="json"),
                  )
          print(f"\t[Write memory debug] Saved memories: {result['responses']}")
      
      # Create graph builder
      graph_builder = StateGraph(MessagesState)
      
      # Add nodes
      graph_builder.add_node("call_model", call_model)
      graph_builder.add_node("write_memory", write_memory)
      
      # Connect nodes
      graph_builder.add_edge(START, "call_model")
      graph_builder.add_edge("call_model", "write_memory")
      graph_builder.add_edge("write_memory", END)
      
      # Store for long-term (across-thread) memory
      long_term_memory = InMemoryStore()
      
      # Checkpointer for short-term (within-thread) memory
      short_term_memory = MemorySaver()
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=short_term_memory, store=long_term_memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 32

We start a new conversation.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
# User input
input_messages = [HumanMessage(content="Hi, my name is Maximo")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
Hi, my name is Maximo
[Call model debug] Memories: []
================================== Ai Message ==================================
Hello Maximo! It's nice to meet you. I'm your companion chatbot, here to chat, help answer questions, or just be someone to talk to.
I'll remember your name is Maximo for our future conversations. What would you like to talk about today? How are you doing?
[Write memory debug] Existing memories: None
[Write memory debug] Saved memories: [Memory(content="User's name is Maximo.")]

We add a new user interest.

	
# User input
input_messages = [HumanMessage(content="I like to play soccer")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I like to play soccer
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None)]
================================== Ai Message ==================================
That's great to know, Maximo! Soccer is such a wonderful sport. Do you play on a team, or more casually with friends? I'd also be curious to know what position you typically play, or if you have a favorite professional team you follow. I'll remember that you enjoy soccer for our future conversations.
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."})]
[Write memory debug] Saved memories: [Memory(content='User enjoys playing soccer.')]

As we can see, the user's new interest has been added to the memory.

Let's look at the updated memory.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memories", user_id)
memories = long_term_memory.search(namespace)
for m in memories:
    print(m.dict())
Copy
	
{'namespace': ['memories', '1'], 'key': '6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'value': {'content': "User's name is Maximo."}, 'created_at': '2025-05-12T18:32:38.070902+00:00', 'updated_at': '2025-05-12T18:32:38.070903+00:00', 'score': None}
{'namespace': ['memories', '1'], 'key': '25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'value': {'content': 'User enjoys playing soccer.'}, 'created_at': '2025-05-12T18:32:42.558787+00:00', 'updated_at': '2025-05-12T18:32:42.558789+00:00', 'score': None}
	
for m in memories:
    print(m.value)
Copy
	
{'content': "User's name is Maximo."}
{'content': 'User enjoys playing soccer.'}

Notice that memory documents are stored, not a user profile.

Let's add a new user interest.

	
# User input
input_messages = [HumanMessage(content="I also like to play basketball")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I also like to play basketball
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None), Item(namespace=['memories', '1'], key='25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', value={'content': 'User enjoys playing soccer.'}, created_at='2025-05-12T18:32:42.558787+00:00', updated_at='2025-05-12T18:32:42.558789+00:00', score=None)]
================================== Ai Message ==================================
That's awesome, Maximo! Both soccer and basketball are fantastic sports. I'll remember that you enjoy basketball as well. Do you find yourself playing one more than the other? And similar to soccer, do you play basketball with a team or more casually? Many people enjoy the different skills and dynamics each sport offers - soccer with its continuous flow and footwork, and basketball with its fast pace and shooting precision. Any favorite basketball teams you follow?
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."}), ('25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'Memory', {'content': 'User enjoys playing soccer.'})]
[Write memory debug] Saved memories: [Memory(content='User enjoys playing basketball.')]

We look at the updated memory again.

	
# Namespace for the memory to save
user_id = "1"
namespace = ("memories", user_id)
memories = long_term_memory.search(namespace)
for m in memories:
    print(m.value)
Copy
	
{'content': "User's name is Maximo."}
{'content': 'User enjoys playing soccer.'}
{'content': 'User enjoys playing basketball.'}

We start a new conversation on a new thread.

	
# We supply a thread ID for short-term (within-thread) memory
# We supply a user ID for long-term (across-thread) memory
config = {"configurable": {"thread_id": "2", "user_id": "1"}}
# User input
input_messages = [HumanMessage(content="What soccer players do you recommend for me?")]
# Run the graph
for chunk in graph.stream({"messages": input_messages}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What soccer players do you recommend for me?
[Call model debug] Memories: [Item(namespace=['memories', '1'], key='6d06c4f5-3a74-46b2-92b4-1e29ba128c90', value={'content': "User's name is Maximo."}, created_at='2025-05-12T18:32:38.070902+00:00', updated_at='2025-05-12T18:32:38.070903+00:00', score=None), Item(namespace=['memories', '1'], key='25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', value={'content': 'User enjoys playing soccer.'}, created_at='2025-05-12T18:32:42.558787+00:00', updated_at='2025-05-12T18:32:42.558789+00:00', score=None), Item(namespace=['memories', '1'], key='965f2e52-bea0-44d4-8534-4fce2bbc1c4b', value={'content': 'User enjoys playing basketball.'}, created_at='2025-05-12T18:33:38.613626+00:00', updated_at='2025-05-12T18:33:38.613629+00:00', score=None)]
================================== Ai Message ==================================
Hi Maximo! Since you enjoy soccer, I'd be happy to recommend some players you might find interesting to follow or learn from.
Based on your interests in both soccer and basketball, I might suggest players who are known for their athleticism and skill:
1. Lionel Messi - Widely considered one of the greatest players of all time
2. Cristiano Ronaldo - Known for incredible athleticism and dedication
3. Kylian Mbappé - Young talent with amazing speed and technical ability
4. Kevin De Bruyne - Master of passing and vision
5. Erling Haaland - Goal-scoring phenomenon
Is there a particular position or playing style you're most interested in? That would help me refine my recommendations further. I could also suggest players from specific leagues or teams if you have preferences!
[Write memory debug] Existing memories: [('6d06c4f5-3a74-46b2-92b4-1e29ba128c90', 'Memory', {'content': "User's name is Maximo."}), ('25d2ee8c-5890-415b-85e0-d9fb0ea4cd43', 'Memory', {'content': 'User enjoys playing soccer.'}), ('965f2e52-bea0-44d4-8534-4fce2bbc1c4b', 'Memory', {'content': 'User enjoys playing basketball.'})]
[Write memory debug] Saved memories: [Memory(content='User asked for soccer player recommendations, suggesting an active interest in following professional soccer beyond just playing it.')]

We can see it remembered that we like soccer and basketball.

Human in the looplink image 47

Although an agent can carry out tasks, for certain tasks human supervision is necessary. This is called human in the loop. So let's see how to do this with LangGraph.

LangGraph's persistence layer supports human-in-the-loop workflows, allowing execution to pause and resume based on user feedback. The main interface to this functionality is the interrupt function. Calling interrupt inside a node stops execution, and execution can be resumed, together with the new human input, by passing a Command primitive. interrupt is similar to Python's input(), but with some extra considerations.
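
As a minimal sketch of this cycle (the state, node, and key names here are invented for illustration), a node pauses at interrupt and the caller resumes it with a Command:

	
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command, interrupt

class ReviewState(TypedDict):
    draft: str

def review_node(state: ReviewState):
    feedback = interrupt({"draft": state["draft"]})  # execution pauses here
    return {"draft": feedback["approved_text"]}

builder = StateGraph(ReviewState)
builder.add_node("review", review_node)
builder.add_edge(START, "review")
builder.add_edge("review", END)
graph = builder.compile(checkpointer=MemorySaver())  # a checkpointer is required for interrupt

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"draft": "v1"}, config)                          # stops at the interrupt
graph.invoke(Command(resume={"approved_text": "v2"}), config)  # resumes with the human input
Copy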

We will take the chatbot that has short-term memory and access to tools, but with one change: adding a simple human_assistance tool. This tool uses interrupt to receive information from a human.

First we load the API KEY values.

	
import os
import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
Copy

We create the graph.

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
Copy

We define the search tool.

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

Now we create the human-assistance tool.

	
from langgraph.types import Command, interrupt
from langchain_core.tools import tool

@tool
def human_assistance(query: str) -> str:
    """
    Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
    After receiving the expert's response, you should provide an elaborated response to the user
    based on the information received, without calling this tool again.
    Args:
        query: The query to ask the human expert.
    Returns:
        The response from the human expert.
    """
    human_response = interrupt({"query": query})
    return human_response["data"]
Copy

LangGraph gets information about tools from the tool's documentation, that is, the function's docstring. It is therefore very important to write a good docstring for each tool.
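
As a quick sanity check, you can inspect what the @tool decorator extracted from that docstring; name and description are standard attributes of LangChain tools:

	
# The @tool decorator wraps the function in a tool whose description is its docstring
print(human_assistance.name)         # human_assistance
print(human_assistance.description)  # the docstring the LLM will see
Copy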

We create a list of tools.

	
tools_list = [search_tool, human_assistance]
Copy

Next, we create the LLM, bind the tools to it with bind_tools, and add it to the graph.

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)

# Define the chatbot function
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # With interrupts we disable parallel tool calling, so a tool call
    # is not repeated when execution resumes
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}

# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

Notice that we changed how chatbot_function is defined: the assert ensures at most one tool call per turn, so that no tool call is repeated when execution resumes after an interruption.

We add the tool_node to the graph.

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

We connect the START node to the chatbot node.

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x10764b380>

We create a MemorySaver checkpointer.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

We compile the graph with the checkpointer.

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We render it graphically.

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 33

Now let's prompt the chatbot with a question that will engage the new human_assistance tool:

	
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you provide me with some advice?

As you can see, the chatbot generated a call to the human_assistance tool, and then execution was interrupted. Let's look at the state of the graph.

	
snapshot = graph.get_state(config)
snapshot.next
Copy
	
('tools',)
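
Besides next, the snapshot also exposes the pending interrupt payload (a sketch; the tasks and interrupts attributes follow recent langgraph versions and may differ in older ones):

	
# Sketch: inspect the interrupt raised by the paused task
for task in snapshot.tasks:
    for intr in task.interrupts:
        print(intr.value)  # {'query': 'I need some expert guidance ...'}
Copy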

We see that it stopped at the tools node. Let's look again at how the human_assistance tool was defined.

from langgraph.types import Command, interrupt
from langchain_core.tools import tool
@tool
def human_assistance(query: str) -> str:
    """
    Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
    After receiving the expert's response, you should provide an elaborated response to the user
    based on the information received, without calling this tool again.
    Args:
        query: The query to ask the human expert.
    Returns:
        The response from the human expert.
    """
    human_response = interrupt({"query": query})
    return human_response["data"]

Calling interrupt pauses execution, similar to Python's input() function.

Progress is persisted according to our choice of checkpointer, that is, where the graph state is stored. So if we are persisting the graph state in a database such as SQLite or Postgres, we can resume execution at any time as long as the database is alive.

Here we are persisting the graph state with the in-memory checkpointer, so we can resume at any point while our Python kernel is running; in my case, as long as I don't reset the kernel of my Jupyter Notebook.
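
If we wanted to survive a kernel restart, we could swap the in-memory checkpointer for the SQLite one. A sketch, assuming the langgraph-checkpoint-sqlite package is installed (in recent versions from_conn_string returns a context manager):

	
# Sketch: persist the graph state in a SQLite file instead of RAM
# (requires: pip install langgraph-checkpoint-sqlite)
from langgraph.checkpoint.sqlite import SqliteSaver

with SqliteSaver.from_conn_string("checkpoints.sqlite") as checkpointer:
    graph = graph_builder.compile(checkpointer=checkpointer)
    # Threads can now be resumed after a restart, as long as checkpoints.sqlite survives
Copy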

To resume execution, we pass a Command object containing the data the tool expects. The format of this data can be customized to our needs; here, we just need a dictionary with a data key.

	
human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    "It's much more reliable and extensible than simple autonomous agents."
)
human_command = Command(resume={"data": human_response})
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you provide me with some advice?
================================= Tool Message =================================
Name: human_assistance
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent.It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
The experts recommend checking out LangGraph for building your AI agent. It's known for being more reliable and extensible compared to simple autonomous agents.

As we can see, the chatbot waited for a human to provide the answer and then generated a response based on the information received. We asked for help from an expert on how to create agents, the human replied that the best option is to use LangGraph, and the chatbot built its answer on that information.

But it still has the ability to search the web. So now let's ask it for the latest news about LangGraph.

	
user_input = "What's the latest news about LangGraph?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What's the latest news about LangGraph?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain - Changelog", "url": "https://changelog.langchain.com/", "content": "LangGraph `interrupt`: Simplifying human-in-the-loop agents --------------------------------------------------- Our latest feature in LangGraph, interrupt , makes building human-in-the-loop workflows easier. Agents aren’t perfect, so keeping humans “in the loop”... December 16, 2024 [...] LangGraph 🔁 Modify graph state from tools in LangGraph --------------------------------------------- LangGraph's latest update gives you greater control over your agents by enabling tools to directly update the graph state. This is a game-changer for use... December 18, 2024 [...] LangGraph Platform Custom authentication & access control for LangGraph Platform ------------------------------------------------------------- Today, we're thrilled to announce Custom Authentication and Resource-Level Access Control for Python deployments in LangGraph Cloud and self-hosted... December 20, 2024", "score": 0.78650844}, {opening_brace}"title": "LangGraph 0.3 Release: Prebuilt Agents - LangChain Blog", "url": "https://blog.langchain.dev/langgraph-0-3-release-prebuilt-agents/", "content": "LangGraph 0.3 Release: Prebuilt Agents 2 min read Feb 27, 2025 By Nuno Campos and Vadym Barda Over the past year, we’ve invested heavily in making LangGraph the go-to framework for building AI agents. With companies like Replit, Klarna, LinkedIn and Uber choosing to build on top of LangGraph, we have more conviction than ever that we are on the right path. [...] Up to this point, we’ve had one higher level abstraction and it’s lived in the main langgraph package. It was create_react_agent, a wrapper for creating a simple tool calling agent. Today, we are splitting that out of langgraph as part of a 0.3 release, and moving it into langgraph-prebuilt. We are also introducing a new set of prebuilt agents built on top of LangGraph, in both Python and JavaScript. Over the past three weeks, we’ve already released a few of these: [...] Published Time: 2025-02-27T15:09:15.000Z LangGraph 0.3 Release: Prebuilt Agents Skip to content Case Studies In the Loop LangChain Docs Changelog Sign in Subscribe", "score": 0.72348577}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain - Changelog", "url": "https://changelog.langchain.com/", "content": "LangGraph 🔁 Modify graph state from tools in LangGraph --------------------------------------------- LangGraph's latest update gives you greater control over your agents by enabling tools to directly update the graph state. This is a game-changer for use... December 18, 2024 [...] LangGraph `interrupt`: Simplifying human-in-the-loop agents --------------------------------------------------- Our latest feature in LangGraph, interrupt , makes building human-in-the-loop workflows easier. Agents aren’t perfect, so keeping humans “in the loop”... December 16, 2024 [...] LangGraph Platform Custom authentication & access control for LangGraph Platform ------------------------------------------------------------- Today, we're thrilled to announce Custom Authentication and Resource-Level Access Control for Python deployments in LangGraph Cloud and self-hosted... December 20, 2024", "score": 0.79732054}, {opening_brace}"title": "LangGraph 0.3 Release: Prebuilt Agents - LangChain Blog", "url": "https://blog.langchain.dev/langgraph-0-3-release-prebuilt-agents/", "content": "LangGraph 0.3 Release: Prebuilt Agents 2 min read Feb 27, 2025 By Nuno Campos and Vadym Barda Over the past year, we’ve invested heavily in making LangGraph the go-to framework for building AI agents. With companies like Replit, Klarna, LinkedIn and Uber choosing to build on top of LangGraph, we have more conviction than ever that we are on the right path. [...] Up to this point, we’ve had one higher level abstraction and it’s lived in the main langgraph package. It was create_react_agent, a wrapper for creating a simple tool calling agent. Today, we are splitting that out of langgraph as part of a 0.3 release, and moving it into langgraph-prebuilt. We are also introducing a new set of prebuilt agents built on top of LangGraph, in both Python and JavaScript. Over the past three weeks, we’ve already released a few of these: [...] Published Time: 2025-02-27T15:09:15.000Z LangGraph 0.3 Release: Prebuilt Agents Skip to content Case Studies In the Loop LangChain Docs Changelog Sign in Subscribe", "score": 0.7552947}]
================================== Ai Message ==================================
The latest news about LangGraph includes several updates and releases. Firstly, the 'interrupt' feature has been added, which simplifies creating human-in-the-loop workflows, essential for maintaining oversight of AI agents. Secondly, an update allows tools to modify the graph state directly, providing more control over the agents. Lastly, custom authentication and resource-level access control have been implemented for Python deployments in LangGraph Cloud and self-hosted environments. In addition, LangGraph released version 0.3, which introduces prebuilt agents in both Python and JavaScript, aimed at making it even easier to develop AI agents.

It searched for the latest news about LangGraph and generated a response based on the information it received.

Let's now write everything together so it's easier to follow

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display
import json
import os

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

import dotenv
dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")

# State
class State(TypedDict):
    messages: Annotated[list, add_messages]

# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
tool_search = TavilySearchResults(api_wrapper=wrapper, max_results=2)

@tool
def human_assistance(query: str) -> str:
    """
    Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
    After receiving the expert's response, you should provide an elaborated response
    to the user based on the information received, without calling this tool again.

    Args:
        query: The query to ask the human expert.

    Returns:
        The response from the human expert.
    """
    human_response = interrupt({"query": query})
    return human_response["data"]

tools_list = [tool_search, human_assistance]

# Create the LLM model
login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Create the LLM with tools
llm_with_tools = llm.bind_tools(tools_list)

# Tool node
tool_node = ToolNode(tools=tools_list)

# Functions
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # We will interrupt during tool execution, so allow at most one tool call
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}

# Start to build the graph
graph_builder = StateGraph(State)

# Add nodes to the graph
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)

# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")

# Compile the graph
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# Display the graph
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error displaying the graph: {e}")
Copy
	
Error displaying the graph: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`

We ask the chatbot for help creating agents again. We ask it to request assistance

	
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need expert guidance for building an AI agent.

Let's see what state the graph has stopped in

	
snapshot = graph.get_state(config)
snapshot.next
Copy
	
('tools',)

We give it the assistance it is asking for

	
human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    "It's much more reliable and extensible than simple autonomous agents."
)
human_command = Command(resume={"data": human_response})

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
Copy
	
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need expert guidance for building an AI agent.
================================= Tool Message =================================
Name: human_assistance
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent.It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
Tool Calls:
human_assistance (0)
Call ID: 0
Args:
query: I need some expert guidance for building an AI agent. Could you recommend a platform and any tips for getting started?

And finally we ask it to search the internet for the latest news about LangGraph

	
user_input = "What's the latest news about LangGraph?"
events = graph.stream(
{opening_brace}"messages": [{opening_brace}"role": "user", "content": user_input{closing_brace}]{closing_brace},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
Copy
	
================================ Human Message =================================
What's the latest news about LangGraph?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangChain Blog", "url": "https://blog.langchain.dev/", "content": "LangSmith Incident on May 1, 2025 Requests to the US LangSmith API from both the web application and SDKs experienced an elevated error rate for 28 minutes on May 1, 2025 Featured How Klarna's AI assistant redefined customer support at scale for 85 million active users Is LangGraph Used In Production? Introducing Interrupt: The AI Agent Conference by LangChain Top 5 LangGraph Agents in Production 2024 [...] See how Harmonic uses LangSmith and LangGraph products to streamline venture investing workflows. Why Definely chose LangGraph for building their multi-agent AI system See how Definely used LangGraph to design a multi-agent system to help lawyers speed up their workflows. Introducing End-to-End OpenTelemetry Support in LangSmith LangSmith now provides end-to-end OpenTelemetry (OTel) support for applications built on LangChain and/or LangGraph.", "score": 0.6811549}, {opening_brace}"title": "LangGraph + UiPath: advancing agentic automation together", "url": "https://www.uipath.com/blog/product-and-updates/langgraph-uipath-advancing-agentic-automation-together", "content": "Raghu Malpani, Chief Technology Officer at UiPath, emphasizes the significance of these announcements for the UiPath developer community: Our collaboration with LangChain on LangSmith and Agent Protocol advances interoperability across agent frameworks. Further, by enabling the deployment of LangGraph agents into UiPath's enterprise-grade infrastructure, we are expanding the capabilities of our platform and opening up more possibilities for our developer community. [...] Today, we’re excited to announce: Native support for LangSmith observability in the UiPath LLM Gateway via OpenTelemetry (OTLP), enabling developers to monitor, debug, and evaluate LLM-powered features in UiPath using LangSmith either in LangChain’s cloud or self-hosted on-premises. This feature is currently in private preview.", "score": 0.6557114}]
(The model gets stuck in a loop for a while, repeating the same tavily_search_results_json call and receiving near-identical results several more times.)
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: latest news about LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph - LangChain", "url": "https://www.langchain.com/langgraph", "content": "“As Ally advances its exploration of Generative AI, our tech labs is excited by LangGraph, the new library from LangChain, which is central to our experiments", "score": 0.98559}, {opening_brace}"title": "Evaluating LangGraph Framework : Series 1 | by Jalaj Agrawal", "url": "https://medium.com/@jalajagr/evaluating-langgraph-as-a-multiagent-framework-a-10-dimensional-framework-series-1-c7203b7f4659", "content": ": LangGraph excels with its intuitive graph-based abstraction that allows new developers to build working multi-agent systems within hours.", "score": 0.98196}]
================================== Ai Message ==================================
It looks like LangGraph has been generating some significant buzz in the AI community, especially for its capabilities in building multi-agent systems. Here are a few highlights from the latest news:
1. **LangGraph in Production**: Companies like Klarna and Definely are already using LangGraph to build and optimize their AI systems. Klarna has leveraged LangGraph to enhance their customer support, and Definely has used it to design a multi-agent system to speed up legal workflows.
2. **Integration with UiPath**: LangChain and UiPath have collaborated to advance agentic automation. This partnership includes native support for LangSmith observability in UiPath’s LLM Gateway via OpenTelemetry, which will allow developers to monitor, debug, and evaluate LLM-powered features more effectively.
3. **Intuitive Design**: LangGraph is praised for its intuitive graph-based abstraction, which enables developers to build working multi-agent systems quickly, even if they are new to the field.
4. **Community and Conferences**: LangChain is also hosting an AI Agent Conference called "Interrupt," which could be a great opportunity to learn more about the latest developments and best practices in building AI agents.
If you're considering using LangGraph for your project, these resources and updates might provide valuable insights and support. Would you like more detailed information on any specific aspect of LangGraph?

Morelink image 48

Tool use approvallink image 49

Note: We'll do this section using Sonnet 3.7 since, at the time of writing this post, it is the best model for agent use and the only one that understands when it has to call the tools and when it doesn't for this example

We can add a human in the loop to approve tool use. Let's create a chatbot with several tools for performing mathematical operations; to do this, when building the graph we indicate where we want to place the breakpoint (graph_builder.compile(interrupt_before=["tools"], checkpointer=memory))

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.prebuilt import ToolNode, tools_condition
      from langgraph.checkpoint.memory import MemorySaver
      from langchain_core.tools import tool
      from langchain_anthropic import ChatAnthropic
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      memory = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Tools
      @tool
      def multiply(a: int, b: int) -> int:
          """Multiply a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The product of a and b.
          """
          return a * b
      
      @tool
      def add(a: int, b: int) -> int:
          """Adds a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The sum of a and b.
          """
          return a + b
      
      @tool
      def subtract(a: int, b: int) -> int:
          """Subtract b from a.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The difference between a and b.
          """
          return a - b
      
      @tool
      def divide(a: int, b: int) -> float:
          """Divide a by b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The quotient of a and b.
          """
          return a / b
      
      tools_list = [multiply, add, subtract, divide]
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_tools = llm.bind_tools(tools_list)
      
      # Nodes
      def chat_model_node(state: State):
          system_message = "You are a helpful assistant that can use tools to answer questions. Once you have the result of a tool, provide a final answer without calling more tools."
          messages = [SystemMessage(content=system_message)] + state["messages"]
          return {"messages": [llm_with_tools.invoke(messages)]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      tool_node = ToolNode(tools=tools_list)
      graph_builder.add_node("tools", tool_node)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_conditional_edges("chatbot_node", tools_condition)
      graph_builder.add_edge("tools", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(interrupt_before=["tools"], checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 34

As we can see in the graph, there is an interrupt before the tools are used. That means execution will stop before using them to ask us for permission

	
# Input
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}
config = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
    if 'chatbot_node' in event:
        event['chatbot_node']['messages'][-1].pretty_print()
    else:
        print(event)
Copy
	
================================== Ai Message ==================================
[{'text': "I'll multiply 2 and 3 for you.", 'type': 'text'{closing_brace}, {opening_brace}'id': 'toolu_01QDuind1VBHWtvifELN9SPf', 'input': {'a': 2, 'b': 3}, 'name': 'multiply', 'type': 'tool_use'{closing_brace}]
Tool Calls:
multiply (toolu_01QDuind1VBHWtvifELN9SPf)
Call ID: toolu_01QDuind1VBHWtvifELN9SPf
Args:
a: 2
b: 3
{'__interrupt__': ()}

As we can see, the LLM knows it has to use the multiply tool, but execution is interrupted because it has to wait for a human to authorize the use of the tool.

We can look at the state the graph has stopped in

	
state = graph.get_state(config)
state.next
Copy
	
('tools',)

As we can see, it has stopped at the tools node.

We can create a function (not in the graph, but outside it, to improve the user experience and make it clear why execution stops) that asks the user to approve the use of the tool.

We create a new thread_id so that a new state is created.
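Each thread_id maps to its own checkpoint history, so switching the ID is enough to start from a clean slate while the previous conversation keeps its interrupted state. A minimal sketch of this, assuming the graph compiled above:

# Minimal sketch: checkpoints are stored per thread_id, so a new thread
# starts empty while the previous one keeps its interrupted state.
config_thread_1 = {"configurable": {"thread_id": "1"}}
config_thread_2 = {"configurable": {"thread_id": "2"}}

print(graph.get_state(config_thread_1).next)  # ('tools',) -> still waiting for approval
print(graph.get_state(config_thread_2).next)  # ()         -> nothing executed on this thread yet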

	
# Input
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}
config = {"configurable": {"thread_id": "2"}}

# Track the requested tool call across events (initialized once, outside the loop)
function_name = None
function_args = None

# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
    if 'chatbot_node' in event:
        for element in event['chatbot_node']['messages'][-1].content:
            if element['type'] == 'text':
                print(element['text'])
            elif element['type'] == 'tool_use':
                function_name = element['name']
                function_args = element['input']
                print(f"The LLM wants to use the tool {function_name} with the arguments {function_args}")
    elif '__interrupt__' in event:
        pass
    else:
        print(event)

question = f"Do you approve the use of the tool {function_name} with the arguments {function_args}? (y/n)"
user_approval = input(question)
print(f"{question}: {user_approval}")

if user_approval.lower() == 'y':
    print("User approved the use of the tool")
    for event in graph.stream(None, config, stream_mode="updates"):
        if 'chatbot_node' in event:
            for element in event['chatbot_node']['messages'][-1].content:
                if isinstance(element, str):
                    print(element, end="")
        elif 'tools' in event:
            result = event['tools']['messages'][-1].content
            tool_used = event['tools']['messages'][-1].name
            print(f"The result of the tool {tool_used} is {result}")
        else:
            print(event)
Copy
	
I'll multiply 2 and 3 for you.
The LLM wants to use the tool multiply with the arguments {'a': 2, 'b': 3}
Do you approve the use of the tool multiply with the arguments {'a': 2, 'b': 3}? (y/n): y
User approved the use of the tool
The result of the tool multiply is 6
The result of multiplying 2 and 3 is 6.

We can see that it asked us whether we approve the use of the multiply tool, we approved it, and the graph finished its execution. Let's look at the graph's state.

	
state = graph.get_state(config)
state.next
Copy
	
()

We see that the graph's next state is empty, which indicates that the graph's execution has finished
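If you script around this pattern often, the check can be wrapped in a small helper. This is a hypothetical convenience function, not part of the LangGraph API:

# Hypothetical helper: a run is finished when the snapshot's next
# attribute is an empty tuple (no pending nodes).
def is_finished(graph, config) -> bool:
    return graph.get_state(config).next == ()

print(is_finished(graph, config))  # True once the graph has reached END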

Modifying the statelink image 50

Note: We'll do this section using Sonnet 3.7 since, at the time of writing this post, it is the best model for agent use and the only one that understands when it has to call the tools and when it doesn't for this example

Let's repeat the previous example, but instead of interrupting the graph before the use of a tool, we'll interrupt it at the LLM. To do this, when building the graph we indicate that we want to stop it at the agent (graph_builder.compile(interrupt_before=["chatbot_node"], checkpointer=memory))

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.prebuilt import ToolNode, tools_condition
      from langgraph.checkpoint.memory import MemorySaver
      from langchain_core.tools import tool
      from langchain_anthropic import ChatAnthropic
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
      
      memory = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Tools
      @tool
      def multiply(a: int, b: int) -> int:
          """Multiply a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The product of a and b.
          """
          return a * b
      
      @tool
      def add(a: int, b: int) -> int:
          """Adds a and b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The sum of a and b.
          """
          return a + b
      
      @tool
      def subtract(a: int, b: int) -> int:
          """Subtract b from a.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The difference between a and b.
          """
          return a - b
      
      @tool
      def divide(a: int, b: int) -> float:
          """Divide a by b.
      
          Args:
              a: first int
              b: second int
      
          Returns:
              The quotient of a and b.
          """
          return a / b
      
      tools_list = [multiply, add, subtract, divide]
      
      # Create the LLM model
      llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
      llm_with_tools = llm.bind_tools(tools_list)
      
      # Nodes
      def chat_model_node(state: State):
          system_message = "You are a helpful assistant that can use tools to answer questions. Once you have the result of a tool, provide a final answer without calling more tools."
          messages = [SystemMessage(content=system_message)] + state["messages"]
          return {"messages": [llm_with_tools.invoke(messages)]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chat_model_node)
      tool_node = ToolNode(tools=tools_list)
      graph_builder.add_node("tools", tool_node)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_conditional_edges("chatbot_node", tools_condition)
      graph_builder.add_edge("tools", "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(interrupt_before=["chatbot_node"], checkpointer=memory)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 35

We can see in the graph's representation that there is an interrupt before the execution of chatbot_node, so execution will be interrupted before the chatbot runs and we will have to make it continue ourselves

Now we ask it for a multiplication again

	
# Input
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}
config = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
    if 'chatbot_node' in event:
        event['chatbot_node']['messages'][-1].pretty_print()
    else:
        print(event)
Copy
	
{'__interrupt__': ()}

We can see that it hasn't done anything. If we look at the state

	
state = graph.get_state(config)
state.next
Copy
	
('chatbot_node',)

We see that the next node is the chatbot one. Also, if we look at its values, we see the message we sent it

	
state.values
Copy
	
{'messages': [HumanMessage(content='Multiply 2 and 3', additional_kwargs={}, response_metadata={}, id='08fd6084-ecd2-4156-ab24-00d2d5c26f00')]}

Now we proceed to modify the state, adding a new message

	
graph.update_state(
    config,
    {"messages": [HumanMessage(content="No, actually multiply 3 and 3!")]}
)
Copy
	
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1f027eb6-6c8b-6b6a-8001-bc0f8942566c'}}

We get the new state

	
new_state = graph.get_state(config)
new_state.next
Copy
	
('chatbot_node',)

The next node is still the chatbot one, but if we now look at the messages

	
new_state.values
Copy
	
{'messages': [HumanMessage(content='Multiply 2 and 3', additional_kwargs={}, response_metadata={}, id='08fd6084-ecd2-4156-ab24-00d2d5c26f00'),
HumanMessage(content='No, actually multiply 3 and 3!', additional_kwargs={}, response_metadata={}, id='e95394c2-e62e-47d2-b9b2-51eba40f3e22')]}

We see that the new one has been added. So we make execution continue

	
for event in graph.stream(None, config, stream_mode="values"):
event['messages'][-1].pretty_print()
Copy
	
================================ Human Message =================================
No, actually multiply 3 and 3!
================================== Ai Message ==================================
[{'text': "I'll multiply 3 and 3 for you.", 'type': 'text'{closing_brace}, {opening_brace}'id': 'toolu_01UABhLnEdg5ZqxVQTE5pGUx', 'input': {'a': 3, 'b': 3}, 'name': 'multiply', 'type': 'tool_use'{closing_brace}]
Tool Calls:
multiply (toolu_01UABhLnEdg5ZqxVQTE5pGUx)
Call ID: toolu_01UABhLnEdg5ZqxVQTE5pGUx
Args:
a: 3
b: 3
================================= Tool Message =================================
Name: multiply
9

It performed the multiplication of 3 by 3, which is the state modification we made, and not 2 by 3, which is what we asked for the first time

This can be useful when we have an agent and want to check that what it does is right, so we can step into the execution and modify the state
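Related to this, update_state also accepts an as_node argument, so an update can be applied as if a specific node had produced it, which determines which edges fire when execution resumes. A sketch over the same graph, with an illustrative message that is not from the run above:

from langchain_core.messages import AIMessage

# Sketch: attribute the update to "chatbot_node"; since the injected message
# contains no tool calls, tools_condition would route execution to END on resume.
graph.update_state(
    config,
    {"messages": [AIMessage(content="The result of multiplying 3 and 3 is 9.")]},
    as_node="chatbot_node",
)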

Dynamic breakpointslink image 51

So far we have created static breakpoints at graph compilation time, but we can create dynamic breakpoints using NodeInterrupt. This is useful because execution can be interrupted by logical rules introduced programmatically

These NodeInterrupts let us customize how the user will be notified of the interruption

from typing import Annotated
      from typing_extensions import TypedDict
      from langgraph.graph import StateGraph, START, END
      from langgraph.graph.message import add_messages
      from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
      from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
      from langgraph.checkpoint.memory import MemorySaver
      from langgraph.errors import NodeInterrupt
      from huggingface_hub import login
      from IPython.display import Image, display
      import os
      import dotenv
      
      dotenv.load_dotenv()
      HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
      
      memory_saver = MemorySaver()
      
      class State(TypedDict):
          messages: Annotated[list, add_messages]
      
      os.environ["LANGCHAIN_TRACING_V2"] = "false"    # Disable LangSmith tracing
      
      # Create the LLM model
      login(token=HUGGINGFACE_TOKEN)  # Login to HuggingFace to use the model
      MODEL = "Qwen/Qwen2.5-72B-Instruct"
      model = HuggingFaceEndpoint(
          repo_id=MODEL,
          task="text-generation",
          max_new_tokens=512,
          do_sample=False,
          repetition_penalty=1.03,
      )
      # Create the chat model
      llm = ChatHuggingFace(llm=model)
      
      # Nodes
      def chatbot_function(state: State):
          max_len = 15
          input_message = state["messages"][-1]
      
          # Check len message
          if len(input_message.content) > max_len:
              raise NodeInterrupt(f"Received input is longer than {max_len} characters --> {input_message}")
      
          # Invoke the LLM with the messages
          response = llm.invoke(state["messages"])
      
          # Return the LLM's response in the correct state format
          return {"messages": [response]}
      
      # Create graph builder
      graph_builder = StateGraph(State)
      
      # Add nodes
      graph_builder.add_node("chatbot_node", chatbot_function)
      
      # Connect nodes
      graph_builder.add_edge(START, "chatbot_node")
      graph_builder.add_edge("chatbot_node", END)
      
      # Compile the graph
      graph = graph_builder.compile(checkpointer=memory_saver)
      
      display(Image(graph.get_graph().draw_mermaid_png()))
      
image uv 36

As you can see, we have created an interruption for the case where the message is too long. Let's test it

	
initial_input = {opening_brace}"messages": HumanMessage(content="Hello, how are you? My name is Máximo")}
config = {opening_brace}"configurable": {opening_brace}"thread_id": "1"{closing_brace}{closing_brace}
# Run the graph until the first interruption
for event in graph.stream(initial_input, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
{'__interrupt__': (Interrupt(value="Received input is longer than 15 characters --> content='Hello, how are you? My name is Máximo' additional_kwargs={} response_metadata={} id='2bdc6d41-0cfe-4d3c-8748-ca7d46fd5a60'", resumable=False, ns=None),)}

Indeed, execution was interrupted and it gave us the error message we created

If we look at the node where it stopped

	
state = graph.get_state(config)
state.next
Copy
	
('chatbot_node',)

We see that it is stopped at the chatbot node. We can make it continue with the execution again, but it will give us the same error

	
for event in graph.stream(None, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
{'__interrupt__': (Interrupt(value="Received input is longer than 15 characters --> content='Hello, how are you? My name is Máximo' additional_kwargs={} response_metadata={} id='2bdc6d41-0cfe-4d3c-8748-ca7d46fd5a60'", resumable=False, ns=None),)}

So we have to modify the state

	
graph.update_state(
    config,
    {"messages": [HumanMessage(content="How are you?")]}
)
Copy
	
{'configurable': {'thread_id': '1',
'checkpoint_ns': '',
'checkpoint_id': '1f027f13-5827-6a18-8001-4209d5a866f0'}}

Let's look at the state and its values again

	
new_state = graph.get_state(config)
print(f"Next node: {new_state.next}")
print("Values:")
for value in new_state.values["messages"]:
    print(f"\t{value.content}")
Copy
	
Next node: ('chatbot_node',)
Values:
	Hello, how are you? My name is Máximo
	How are you?

The last message is shorter, so we try to resume the graph's execution

	
for event in graph.stream(None, config, stream_mode="updates"):
if 'chatbot_node' in event:
print(event['chatbot_node']['messages'][-1].pretty_print())
else:
print(event)
Copy
	
================================== Ai Message ==================================
Hello Máximo! I'm doing well, thank you for asking. How about you? How can I assist you today?

Customizing the statelink image 52

Note: We'll do this section using Sonnet 3.7 since, at the time of writing this post, it is the best model for agent use and the only one that understands when it has to call the tools and when it doesn't

So far, we have relied on a simple state with one entry, a list of messages. You can get quite far with this simple state, but if you want to define complex behavior without depending on the message list, you can add additional fields to the state.

Here we'll look at a new scenario, in which the chatbot uses the search tool to find specific information and forwards it to a human for review. We'll have the chatbot research the birthday of an entity. We will add name and birthday as state keys.

First we load the API KEY values

	
import os
import dotenv
dotenv.load_dotenv()
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")
Copy

We create the new state

	
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str
Copy

Adding this information to the state makes it easily accessible to other nodes in the graph (for example, a node that stores or processes the information), as well as to the graph's persistence layer.
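For instance, any other node can read these keys straight from the state. The summary_node below is a hypothetical sketch, not part of this post's graph:

from langchain_core.messages import AIMessage

# Hypothetical node: reads the custom state keys defined above.
def summary_node(state: State):
    name = state.get("name", "unknown")
    birthday = state.get("birthday", "unknown")
    return {"messages": [AIMessage(content=f"{name} was released on {birthday}.")]}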

Now we create the graph

	
from langgraph.graph import StateGraph, START, END
graph_builder = StateGraph(State)
Copy

We define the search tool

	
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
Copy

Now we create the human assistance tool. In this tool we will fill in the state keys inside our human_assistance tool. This allows a human to review the information before it is stored in the state. We'll use Command again, this time to issue a state update from inside our tool.

	
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langgraph.types import Command, interrupt

# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
@tool
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """
    Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
    After receiving the expert's response, you should provide an elaborated response
    to the user based on the information received, without calling this tool again.

    Args:
        name: The name of the entity being researched.
        birthday: The proposed birthday for the entity.

    Returns:
        The response from the human expert.
    """
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    # If the information is correct, update the state as-is.
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    # Otherwise, receive information from the human reviewer.
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"
    # This time we explicitly update the state with a ToolMessage inside
    # the tool.
    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    # We return a Command object in the tool to update our state.
    return Command(update=state_update)
Copy

We used ToolMessage, which passes the result of running a tool back to a model, and InjectedToolCallId, which marks the tool_call_id argument so it is not exposed to the model in the tool's schema
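To see what InjectedToolCallId changes in practice, you can inspect the schema the model actually receives. This is a sketch that assumes a recent langchain-core, where tool_call_schema omits injected arguments:

# Sketch: the model-facing schema should expose only name and birthday;
# tool_call_id stays hidden because it is injected at runtime.
print(human_assistance.tool_call_schema.model_json_schema()["properties"].keys())
# expected: dict_keys(['name', 'birthday'])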

We create a list of tools

	
tools_list = [search_tool, human_assistance]
Copy

Next, we create the LLM with bind_tools and add it to the graph

	
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_anthropic import ChatAnthropic

# Create the LLM
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)

# Define the chatbot function
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}

# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We add the tool node to the graph

	
from langgraph.prebuilt import ToolNode, tools_condition
tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We add the edge from the START node to the graph

	
graph_builder.add_edge(START, "chatbot_node")
Copy
	
<langgraph.graph.state.StateGraph at 0x120b4f380>

We create a MemorySaver checkpointer.

	
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
Copy

We compile the graph with the checkpointer

	
graph = graph_builder.compile(checkpointer=memory)
Copy

We display it graphically

from IPython.display import Image, display
      
      try:
          display(Image(graph.get_graph().draw_mermaid_png()))
      except Exception as e:
          print(f"Error al visualizar el grafo: {e}")
      
image uv 37

Let's ask our chatbot to look up the "birthday" of the LangGraph library.

We will steer the chatbot to the human_assistance tool once it has the required information. The name and birthday arguments are mandatory for the human_assistance tool, so they force the chatbot to generate proposals for these fields.

user_input = (
          "Can you look up when LangGraph was released? "
          "When you have the answer, use the human_assistance tool for review."
      )
      config = {"configurable": {"thread_id": "1"}}
      
      events = graph.stream(
          {"messages": [{"role": "user", "content": user_input}]},
          config,
          stream_mode="values",
      )
      for event in events:
          if "messages" in event:
              event["messages"][-1].pretty_print()
      
================================ Human Message =================================
      
      Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
      
================================== Ai Message ==================================
      
      [{'text': "I'll help you look up when LangGraph was released, and then I'll use the human_assistance tool for review as requested.\n\nFirst, let me search for information about LangGraph\'s release date:", 'type': 'text'}, {'id': 'toolu_011KHWFxYbFnUvGEF6MPt3dE', 'input': {'query': 'LangGraph release date when was LangGraph released'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
      Tool Calls:
        tavily_search_results_json (toolu_011KHWFxYbFnUvGEF6MPt3dE)
       Call ID: toolu_011KHWFxYbFnUvGEF6MPt3dE
        Args:
          query: LangGraph release date when was LangGraph released
      
================================= Tool Message =================================
      Name: tavily_search_results_json
      
      [{"title": "LangGraph Studio: The first agent IDE | by Bhavik Jikadara - Medium", "url": "https://bhavikjikadara.medium.com/langgraph-studio-the-first-agent-ide-468132628274", "content": "LangGraph, launched in January 2023, is a low-level orchestration framework designed for building controllable and complex agentic applications.", "score": 0.80405265}, {"title": "langgraph - PyPI", "url": "https://pypi.org/project/langgraph/", "content": "langgraph · PyPI\nSkip to main content Switch to mobile version\n\nSearch PyPI  Search\n\nHelp\nSponsors\nLog in\nRegister\n\nMenu\n\nHelp\nSponsors\nLog in\nRegister\n\nSearch PyPI  Search\nlanggraph 0.2.70\npip install langgraph Copy PIP instructions\nLatest versionReleased: Feb 6, 2025\nBuilding stateful, multi-actor applications with LLMs\nNavigation\n\nProject description\nRelease history\nDownload files [...] 0.2.20 Sep 13, 2024\n\n0.2.19 Sep 6, 2024\n\n0.2.18 Sep 6, 2024\n\n0.2.17 Sep 5, 2024\n\n0.2.16 Sep 1, 2024\n\n0.2.15 Aug 30, 2024\n\n0.2.14 Aug 24, 2024\n\n0.2.13 Aug 23, 2024\n\n0.2.12 Aug 22, 2024\n\n0.2.11 Aug 22, 2024\n\n0.2.10 Aug 21, 2024\n\n0.2.9 Aug 21, 2024\n\n0.2.8 Aug 21, 2024\n\n0.2.7 Aug 21, 2024\n\n0.2.7a0 pre-release Aug 21, 2024\n\n0.2.6 Aug 21, 2024\n\n0.2.5 Aug 21, 2024\n\n0.2.5a0 pre-release Aug 20, 2024\n\n0.2.4 Aug 15, 2024\n\n0.2.3 Aug 8, 2024\n\n0.2.2 Aug 7, 2024\n\n0.2.1 Aug 7, 2024\n\n0.2.0 Aug 7, 2024 [...] Download URL: langgraph-0.2.70.tar.gz\nUpload date: Feb 6, 2025\nSize: 129.7 kB\nTags: Source\nUploaded using Trusted Publishing? Yes\nUploaded via: twine/6.1.0 CPython/3.12.8", "score": 0.75659186}]
      
================================== Ai Message ==================================
      
      [{'text': 'Based on my search, I found that LangGraph was launched in January 2023. However, I noticed some inconsistencies in the information, as one source mentions it was launched in January 2023, while the PyPI page shows a version history starting from 2024.\n\nLet me request human assistance to verify this information:', 'type': 'text'}, {'id': 'toolu_019EopKn8bLi3ksvUVY2Mt5p', 'input': {'name': 'LangGraph', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
      Tool Calls:
        human_assistance (toolu_019EopKn8bLi3ksvUVY2Mt5p)
       Call ID: toolu_019EopKn8bLi3ksvUVY2Mt5p
        Args:
          name: LangGraph
          birthday: January 2023
      

Execution has paused because of the interrupt in the human_assistance tool. In this case the chatbot, using the search tool, determined that LangGraph's date is January 2023, but that is not the exact date (it is actually January 17, 2024), so we can enter it ourselves.

human_command = Command(
    resume={
        "name": "LangGraph",
        "birthday": "Jan 17, 2024",
    },
)
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

================================== Ai Message ==================================
[{'text': 'Based on my search, I found that LangGraph was launched in January 2023. However, I noticed some inconsistencies in the information, as one source mentions it was launched in January 2023, while the PyPI page shows a version history starting from 2024. Let me request human assistance to verify this information:', 'type': 'text'}, {'id': 'toolu_019EopKn8bLi3ksvUVY2Mt5p', 'input': {'name': 'LangGraph', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_019EopKn8bLi3ksvUVY2Mt5p)
Call ID: toolu_019EopKn8bLi3ksvUVY2Mt5p
Args:
name: LangGraph
birthday: January 2023
================================= Tool Message =================================
Name: human_assistance
Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================
Thank you for the expert review and correction! Based on the human expert's feedback, I can now provide you with the accurate information:
LangGraph was released on January 17, 2024, not January 2023 as one of the search results incorrectly stated.
This is an important correction, as it means LangGraph is a relatively recent framework in the LLM orchestration space, having been available for less than a year at this point. LangGraph is developed by LangChain and is designed for building stateful, multi-actor applications with LLMs.
snapshot = graph.get_state(config)
{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}

{'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}

Now the date is correct, thanks to the human intervention that modified the state values

I'll write out the full code again so it is easier to follow

import os
import dotenv
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from langchain_anthropic import ChatAnthropic

dotenv.load_dotenv()
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")
ANTHROPIC_TOKEN = os.getenv("ANTHROPIC_LANGGRAPH_API_KEY")

# State
class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str

# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)

# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
@tool
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """
    Request assistance from a human expert. Use this tool ONLY ONCE per conversation.
    After receiving the expert's response, you should provide an elaborated response
    to the user based on the information received, without calling this tool again.

    Args:
        name: The name to verify with the human expert.
        birthday: The date to verify with the human expert.

    Returns:
        The response from the human expert.
    """
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    # If the information is correct, update the state as-is.
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    # Otherwise, receive information from the human reviewer.
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"
    # This time we explicitly update the state with a ToolMessage inside
    # the tool.
    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    # We return a Command object in the tool to update our state.
    return Command(update=state_update)

tools_list = [search_tool, human_assistance]
tool_node = ToolNode(tools=tools_list)

# Create the LLM
llm = ChatAnthropic(model="claude-3-7-sonnet-20250219", api_key=ANTHROPIC_TOKEN)
llm_with_tools = llm.bind_tools(tools_list)

# Define the chatbot function
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}

# Graph
graph_builder = StateGraph(State)

# Nodes
graph_builder.add_node("tools", tool_node)
graph_builder.add_node("chatbot_node", chatbot_function)

# Edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")

# Checkpointer
memory = MemorySaver()

# Compile
graph = graph_builder.compile(checkpointer=memory)

# Visualize
from IPython.display import Image, display
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")
	
Error visualizing the graph: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`
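
If mermaid.ink cannot be reached, a network-free fallback is to print the Mermaid source and render it locally (for example, by pasting it into https://mermaid.live); a minimal sketch:

```python
# draw_mermaid() returns the graph definition as Mermaid text,
# without calling any external rendering API.
print(graph.get_graph().draw_mermaid())
```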

Let's ask our chatbot to look up the "birthday" of the LangGraph library.

user_input = (
    "Can you look up when LangGraph was released? "
    "When you have the answer, use the human_assistance tool for review."
)
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

================================ Human Message =================================
      
      Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
      
================================== Ai Message ==================================
      
      [{'text': "I'll look up when LangGraph was released and then get human verification of the information.", 'type': 'text'}, {'id': 'toolu_017SLLSEnFQZVdBpj85BKHyy', 'input': {'query': 'when was LangGraph released launch date'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
      Tool Calls:
        tavily_search_results_json (toolu_017SLLSEnFQZVdBpj85BKHyy)
       Call ID: toolu_017SLLSEnFQZVdBpj85BKHyy
        Args:
          query: when was LangGraph released launch date
      
================================= Tool Message =================================
      Name: tavily_search_results_json
      
      [{"title": "LangChain Introduces LangGraph Studio: The First Agent IDE for ...", "url": "https://www.marktechpost.com/2024/08/03/langchain-introduces-langgraph-studio-the-first-agent-ide-for-visualizing-interacting-with-and-debugging-complex-agentic-applications/", "content": "LangGraph, launched in January 2023, is a highly controllable, low-level orchestration framework for building agentic applications. Since its inception, it has undergone significant improvements, leading to a stable 0.1 release in June. LangGraph features a persistence layer enabling human-in-the-loop interactions and excels at building complex applications requiring domain-specific cognitive architecture.", "score": 0.83742094}, {"title": "LangGraph Studio: The first agent IDE | by Bhavik Jikadara - Medium", "url": "https://bhavikjikadara.medium.com/langgraph-studio-the-first-agent-ide-468132628274", "content": "LangGraph, launched in January 2023, is a low-level orchestration framework designed for building controllable and complex agentic applications. It’s beneficial for creating applications requiring highly domain-specific cognitive architecture and human-in-the-loop interactions. LangGraph is open source, available in Python and JavaScript, and integrates seamlessly with LangSmith, whether or not you use LangChain.\n\nLangGraph: A Comprehensive Guide for Beginners", "score": 0.79369855}]
      
================================== Ai Message ==================================
      
      [{'text': "Based on my search, I found that LangGraph was launched in January 2023. It's described as a low-level orchestration framework for building agentic applications. Since its release, it has seen significant improvements, including a stable 0.1 release in June (presumably 2024).\n\nLet me now get human verification of this information:", 'type': 'text'}, {'id': 'toolu_016h3391yFhtPDhQvwjNgs7W', 'input': {'name': 'Information Verification', 'birthday': 'January 2023'}, 'name': 'human_assistance', 'type': 'tool_use'}]
      Tool Calls:
        human_assistance (toolu_016h3391yFhtPDhQvwjNgs7W)
       Call ID: toolu_016h3391yFhtPDhQvwjNgs7W
        Args:
          name: Information Verification
          birthday: January 2023
      

Execution has paused because of the interrupt in the human_assistance tool. In this case the chatbot, using the search tool, determined that LangGraph's date is January 2023, but that is not the exact date (it is actually January 17, 2024), so we can enter it ourselves.

human_command = Command(
    resume={
        "name": "LangGraph",
        "birthday": "Jan 17, 2024",
    },
)
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

================================== Ai Message ==================================
[{'text': "Based on my search, I found that LangGraph was launched in January 2023. It's described as a low-level orchestration framework for building agentic applications. Since its release, it has seen significant improvements, including a stable 0.1 release in June (presumably 2024). Let me now get human verification of this information:", 'type': 'text'{closing_brace}, {opening_brace}'id': 'toolu_016h3391yFhtPDhQvwjNgs7W', 'input': {opening_brace}'name': 'Information Verification', 'birthday': 'January 2023'{closing_brace}, 'name': 'human_assistance', 'type': 'tool_use'{closing_brace}]
Tool Calls:
human_assistance (toolu_016h3391yFhtPDhQvwjNgs7W)
Call ID: toolu_016h3391yFhtPDhQvwjNgs7W
Args:
name: Information Verification
birthday: January 2023
================================= Tool Message =================================
Name: human_assistance
Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================
Thank you for the expert correction! I need to update my response with the accurate information.
LangGraph was actually released on January 17, 2024 - not January 2023 as I initially found in my search results. This is a significant correction, as it means LangGraph is a much more recent framework than the search results indicated.
The expert has provided the specific date (January 17, 2024) for LangGraph's release, making it a fairly new tool in the AI orchestration ecosystem. This timing aligns better with the mention of its stable 0.1 release in June 2024, as this would be about 5 months after its initial launch.
snapshot = graph.get_state(config)
{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}

{'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}

Now the date is correct, thanks to the human intervention that modified the state values

Manual state update

LangGraph provides a high degree of control over the application state. For example, at any point (even when interrupted) we can manually overwrite a state key using graph.update_state:

Let's update the state's name to LangGraph (library).

graph.update_state(config, {"name": "LangGraph (library)"})

{'configurable': {'thread_id': '1',
 'checkpoint_ns': '',
 'checkpoint_id': '1f010a5a-8a70-618e-8006-89107653db68'}}

If we now inspect the state with graph.get_state(config), we will see that name has been updated.

snapshot = graph.get_state(config)
{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}

{'name': 'LangGraph (library)', 'birthday': 'Jan 17, 2024'}

Manual state updates generate a trace in LangSmith. They can be used to control human-in-the-loop workflows, as shown in this guide.
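
update_state also accepts an as_node argument, which attributes the update to a given node so that execution resumes as if that node had just produced those values. A minimal sketch, reusing the config from above (the value shown is illustrative):

```python
# Apply the update as if "chatbot_node" had produced it; the graph will
# continue from the edges that leave that node.
graph.update_state(
    config,
    {"name": "LangGraph (library)"},
    as_node="chatbot_node",
)
```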

Checkpoints

In a typical chatbot workflow, the user interacts with the chatbot one or more times to accomplish a task. In the previous sections we saw how to add memory and a human in the loop so that we can inspect our graph state and control future responses.

But perhaps a user wants to start over from a previous response, or wants to branch off to explore a different outcome. This is useful in agent applications: when a flow fails, the agent can go back to an earlier checkpoint and try a different strategy.

LangGraph makes this possible through checkpoints
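
Before building the example step by step, here is the overall pattern as a minimal sketch (past_state and the index 2 are illustrative; it assumes a graph compiled with a checkpointer and an existing thread_id):

```python
config = {"configurable": {"thread_id": "1"}}

# get_state_history returns snapshots from newest to oldest; each one
# carries the config (checkpoint_id) needed to resume from that point.
history = list(graph.get_state_history(config))
past_state = history[2]  # pick some earlier checkpoint

# Streaming with None as input and the snapshot's config replays the
# graph from that checkpoint instead of starting a new run.
for event in graph.stream(None, past_state.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
```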

First we load the API KEY values

import os
import dotenv

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")

We create the new state

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

Now we create the graph

from langgraph.graph import StateGraph, START, END

graph_builder = StateGraph(State)

We define the search tool

from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults

wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)

We create a list of tools

tools_list = [search_tool]

Next, we create the LLM, bind the tools to it with bind_tools, and add it to the graph

from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)

# Define the chatbot function
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    return {"messages": [message]}

# Add the chatbot node
graph_builder.add_node("chatbot_node", chatbot_function)

<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We add the tool node to the graph

from langgraph.prebuilt import ToolNode, tools_condition

tool_node = ToolNode(tools=tools_list)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")

<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We connect the START node to the graph

graph_builder.add_edge(START, "chatbot_node")

<langgraph.graph.state.StateGraph at 0x10d8ce7b0>

We create a MemorySaver checkpointer.

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
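
MemorySaver keeps checkpoints in RAM only, so they disappear when the process ends. If you need them to survive restarts, a SQLite-backed checkpointer can be swapped in; a sketch assuming the langgraph-checkpoint-sqlite package is installed:

```python
import sqlite3

from langgraph.checkpoint.sqlite import SqliteSaver  # pip install langgraph-checkpoint-sqlite

# Drop-in replacement for MemorySaver that persists checkpoints to a file
conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(conn)
```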

We compile the graph with the checkpointer

graph = graph_builder.compile(checkpointer=memory)

We render it graphically

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error visualizing the graph: {e}")

Let's have our graph take a couple of steps. Each step will be saved in the state history.

We make the first call to the model

config = {"configurable": {"thread_id": "1"}}
user_input = (
    "I'm learning LangGraph. "
    "Could you do some research on it for me?"
)
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}, {opening_brace}"title": "langchain-ai/langgraph: Build resilient language agents as graphs.", "url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph — used by Replit, Uber, LinkedIn, GitLab and more — is a low-level orchestration framework for building controllable agents. While langchain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop to reliably handle complex tasks. ``` pip install -U langgraph ```", "score": 0.8884594}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}, {opening_brace}"title": "langchain-ai/langgraph: Build resilient language agents as graphs.", "url": "https://github.com/langchain-ai/langgraph", "content": "LangGraph — used by Replit, Uber, LinkedIn, GitLab and more — is a low-level orchestration framework for building controllable agents. While langchain provides integrations and composable components to streamline LLM application development, the LangGraph library enables agent orchestration — offering customizable architectures, long-term memory, and human-in-the-loop to reliably handle complex tasks. ``` pip install -U langgraph ```", "score": 0.8884594}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial and documentation
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-66-36)5. Documentation: The LangGraph documentation has been revamped, which should make it easier for learners like yourself to understand and use the tool. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-66-37) [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-28) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-29)1. Search for the official LangGraph documentation or website to learn more about its features and how to use it. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-30)2. Look for tutorials or guides specifically focused on building AI agents with LangGraph. [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex,", "score": 0.8775715}, {opening_brace}"title": "Tutorial 1-Getting Started With LangGraph- Building Stateful Multi AI ...", "url": "https://www.youtube.com/watch?v=gqvFmK7LpDo", "content": "and we will also use Lang Smith so let's go ahead and execute this lsmith I hope everybody knows what exactly is so till then I will also go ahead and show you the documentation page of Lang graph so this is what langra is all about right it has python it has it supports JavaScript and all but uh uh if I just go ahead and show you Lang graph tutorials right so here you can see this is the tutorial right and uh not this sorry uh let's see yes yes here you go right in the Lang graph page it", "score": 0.80405265}]
================================== Ai Message ==================================
LangGraph is a powerful library designed for building stateful, multi-agent applications using Large Language Models (LLMs). Here are some key points about LangGraph:
### Overview
- **Purpose**: LangGraph is specifically designed to create complex workflows and state machines to coordinate multiple AI agents or language model interactions. It is particularly useful for developing sophisticated multi-agent systems.
- **Framework**: It is built on top of LangChain, leveraging its components and adding graph-based coordination capabilities.
- **Features**:
- **Customizable Architectures**: Allows you to design and implement custom workflows and state machines.
- **Long-Term Memory**: Supports long-term memory for agents, enabling them to maintain context over time.
- **Human-in-the-Loop**: Facilitates human interaction in the workflow, making it easier to handle complex tasks that require human oversight.
### Getting Started
- **Installation**:
```bash
pip install -U langgraph
```
- **Documentation**: The LangGraph documentation has been revamped to make it easier for learners to understand and use the tool. You can find the official documentation [here](https://langchain-ai.github.io/langgraph/tutorials/introduction/).
### Use Cases
- **Multi-Agent Systems**: Ideal for building systems where multiple AI agents need to interact and coordinate their actions.
- **Complex Task Handling**: Suitable for tasks that require multiple steps and decision-making processes.
- **Custom Workflows**: Enables the creation of custom workflows tailored to specific use cases.
### Tutorials and Resources
- **Official Documentation**: The official LangGraph documentation is a comprehensive resource for learning about its features and usage.
- **Tutorials**: Look for tutorials and guides specifically focused on building AI agents with LangGraph. You can find a tutorial video [here](https://www.youtube.com/watch?v=gqvFmK7LpDo).
### Companies Using LangGraph
- **Replit, Uber, LinkedIn, GitLab, and more**: These companies are using LangGraph to build resilient and controllable language agents.
### Next Steps
1. **Review the Documentation**: Start by going through the official LangGraph documentation to get a deeper understanding of its features and capabilities.
2. **Follow Tutorials**: Watch tutorials and follow step-by-step guides to build your first multi-agent application.
3. **Experiment with Examples**: Try out the examples provided in the documentation to get hands-on experience with LangGraph.
If you have any specific questions or need further assistance, feel free to ask!

And now the second call

user_input = (
    "Ya that's helpful. Maybe I'll "
    "build an autonomous agent with it!"
)
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()

================================ Human Message =================================
Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial build autonomous agent
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "LangGraph Tutorial: Building LLM Agents with LangChain's ... - Zep", "url": "https://www.getzep.com/ai-agents/langgraph-tutorial", "content": "This article focuses on building agents with LangGraph rather than LangChain. It provides a tutorial for building LangGraph agents, beginning with a discussion of LangGraph and its components. These concepts are reinforced by building a LangGraph agent from scratch and managing conversation memory with LangGraph agents. Finally, we use Zep's long-term memory for egents to create an agent that remembers previous conversations and user facts. ‍ Summary of key LangGraph tutorial concepts [...] human intervention, and the ability to handle complex workflows with cycles and branches. Building a LangGraph agent | Creating a LangGraph agent is the best way to understand the core concepts of nodes, edges, and state. The LangGraph Python libraries are modular and provide the functionality to build a stateful graph by incrementally adding nodes and edges.Incorporating tools enables an agent to perform specific tasks and access", "score": 0.8338803}, {opening_brace}"title": "Build Autonomous AI Agents with ReAct and LangGraph Tools", "url": "https://www.youtube.com/watch?v=ZfjaIshGkmk", "content": "LangGraph Intro - Build Autonomous AI Agents with ReAct and LangGraph Tools GrabDuck! 4110 subscribers 18 likes 535 views 21 Jan 2025 In this video, LangGraph Intro: Build Autonomous AI Agents with ReAct and LangGraph Tools, we dive into creating a powerful agentic system where the LLM decides when to trigger tools and when to finalize results. You’ll see how to build a generic agent architecture using the ReAct principle, applying it to real-world examples like analyzing Tesla stock data. [...] reasoning like what they're doing so uh it's this way you're using tool and this is another thing from longchain core library and here you define the function and then you have to Define name description there are other parameters like for example you can provide very specific description of all the parameters like why you need them which one are those Etc but it's a bit over complicated for this tutorial I'm skipping it and uh interesting thing this one return direct is false and this is uh [...] Whether you’re wondering how to create AI agents, looking for a LangGraph tutorial, or eager to explore the power of LangChain agents, this video is packed with valuable insights to help you get started. Support the channel while you shop on Amazon! Use my affiliate link https://amzn.to/4hssSvT Every purchase via this Amazon link helps keep our content free for you! 🌟 Related Courses & Tutorials", "score": 0.8286204}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for building autonomous AI agents
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "How to Build AI Agents with LangGraph: A Step-by-Step Guide", "url": "https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832", "content": "By following these steps, you have successfully created an AI assistant using LangGraph that can calculate solar panel energy savings based on user inputs. This tutorial demonstrates the power of LangGraph in managing complex, multi-step processes and highlights how to leverage advanced AI tools to solve real-world challenges efficiently. Whether you’re developing AI agents for customer support, energy management, or other applications, LangGraph provides the flexibility, scalability, and [...] Step 7: Build the Graph Structure In this step, we construct the graph structure for the AI assistant using LangGraph, which controls how the assistant processes user input, triggers tools, and moves between stages. The graph defines nodes for the core actions (like invoking the assistant and tool) and edges that dictate the flow between these nodes. [...] Now that we have a solid understanding of what LangGraph is and how it enhances AI development, let’s dive into a practical example. In this scenario, we’ll build an AI agent designed to calculate potential energy savings for solar panels based on user input. This agent can be implemented as a lead generation tool on a solar panel seller’s website, where it interacts with potential customers, offering personalized savings estimates. By gathering key data such as monthly electricity costs, this", "score": 0.8576849}, {opening_brace}"title": "Building AI Agents with LangGraph: A Beginner's Guide - YouTube", "url": "https://www.youtube.com/watch?v=assrhPxNdSk", "content": "In this tutorial, we'll break down the fundamentals of building AI agents using LangGraph! Whether you're new to AI development or looking", "score": 0.834852}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial step-by-step
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "How to Build AI Agents with LangGraph: A Step-by-Step Guide", "url": "https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832", "content": "By following these steps, you have successfully created an AI assistant using LangGraph that can calculate solar panel energy savings based on user inputs. This tutorial demonstrates the power of LangGraph in managing complex, multi-step processes and highlights how to leverage advanced AI tools to solve real-world challenges efficiently. Whether you’re developing AI agents for customer support, energy management, or other applications, LangGraph provides the flexibility, scalability, and [...] Step 7: Build the Graph Structure In this step, we construct the graph structure for the AI assistant using LangGraph, which controls how the assistant processes user input, triggers tools, and moves between stages. The graph defines nodes for the core actions (like invoking the assistant and tool) and edges that dictate the flow between these nodes. [...] In this article, we’ll explore how LangGraph transforms AI development and provide a step-by-step guide on how to build your own AI agent using an example that computes energy savings for solar panels. This example will showcase how LangGraph’s unique features can create intelligent, adaptable, and real-world-ready AI systems. What is LangGraph?", "score": 0.86441374}, {opening_brace}"title": "What Is LangGraph and How to Use It? - DataCamp", "url": "https://www.datacamp.com/tutorial/langgraph-tutorial", "content": "Building a Simple LangGraph Application Here’s a step-by-step example of creating a basic chatbot application using LangGraph. Step 1: Define the StateGraph Define a StateGraph object to structure the chatbot as a state machine. The State is a class object defined with a single key messages of type List and uses the add_messages() function to append new messages rather than overwrite them. from typing import Annotated from typing_extensions import TypedDict [...] Getting Started With LangGraph Installation Basic Concepts Building a Simple LangGraph Application Step 1: Define the StateGraph Step 2: Initialize an LLM and add it as a Chatbot node Step 3: Set edges Step 5: Run the chatbot Advanced LangGraph Features Custom node types Edge types Error handling Real-World Applications of LangGraph Chatbots Autonomous agents Multi-Agent systems Workflow automation tools Recommendation systems Personalized learning environments Conclusion", "score": 0.82492816}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for beginners
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Introduction to LangGraph: A Beginner's Guide - Medium", "url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141", "content": "Conclusion LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph. Happy coding! [...] LangGraph is a powerful tool for building stateful, multi-actor applications with Large Language Models (LLMs). It extends the LangChain library, allowing you to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. In this article, we’ll introduce LangGraph, walk you through its basic concepts, and share some insights and common points of confusion for beginners. What is LangGraph?", "score": 0.8793233}, {opening_brace}"title": "LangGraph Tutorial: A Comprehensive Guide for Beginners", "url": "https://blog.futuresmart.ai/langgraph-tutorial-for-beginners", "content": "These examples highlight how LangGraph helps bridge the gap between AI capabilities and the complexities of real-world situations. Conclusion This concludes our LangGraph tutorial! As you've learned, LangGraph enables the creation of AI applications that go beyond simple input-output loops by offering a framework for building stateful, agent-driven systems. You've gained hands-on experience defining graphs, managing state, and incorporating tools. [...] LangGraph, a powerful library within the LangChain ecosystem, provides an elegant solution for building and managing multi-agent LLM applications. By representing workflows as cyclical graphs, LangGraph allows developers to orchestrate the interactions of multiple LLM agents, ensuring smooth communication and efficient execution of complex tasks. [...] LangGraph Tutorial: A Comprehensive Guide for Beginners FutureSmart AI Blog Follow FutureSmart AI Blog Follow LangGraph Tutorial: A Comprehensive Guide for Beginners +1 Rounak Show with 1 co-author ·Oct 1, 2024·12 min read Table of contents Introduction Understanding LangGraph Key Concepts Graph Structures State Management Getting Started with LangGraph Installation Creating a Basic Chatbot in LangGraph", "score": 0.8684817}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph tutorial for beginners
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "Introduction to LangGraph: A Beginner's Guide - Medium", "url": "https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141", "content": "Conclusion LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph. Happy coding! [...] LangGraph is a powerful tool for building stateful, multi-actor applications with Large Language Models (LLMs). It extends the LangChain library, allowing you to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. In this article, we’ll introduce LangGraph, walk you through its basic concepts, and share some insights and common points of confusion for beginners. What is LangGraph?", "score": 0.8793233}, {opening_brace}"title": "LangGraph Tutorial: A Comprehensive Guide for Beginners", "url": "https://blog.futuresmart.ai/langgraph-tutorial-for-beginners", "content": "These examples highlight how LangGraph helps bridge the gap between AI capabilities and the complexities of real-world situations. Conclusion This concludes our LangGraph tutorial! As you've learned, LangGraph enables the creation of AI applications that go beyond simple input-output loops by offering a framework for building stateful, agent-driven systems. You've gained hands-on experience defining graphs, managing state, and incorporating tools. [...] LangGraph, a powerful library within the LangChain ecosystem, provides an elegant solution for building and managing multi-agent LLM applications. By representing workflows as cyclical graphs, LangGraph allows developers to orchestrate the interactions of multiple LLM agents, ensuring smooth communication and efficient execution of complex tasks. [...] LangGraph Tutorial: A Comprehensive Guide for Beginners FutureSmart AI Blog Follow FutureSmart AI Blog Follow LangGraph Tutorial: A Comprehensive Guide for Beginners +1 Rounak Show with 1 co-author ·Oct 1, 2024·12 min read Table of contents Introduction Understanding LangGraph Key Concepts Graph Structures State Management Getting Started with LangGraph Installation Creating a Basic Chatbot in LangGraph", "score": 0.8684817}]
================================== Ai Message ==================================
Building an autonomous AI agent with LangGraph can be a rewarding and exciting project! Here's a step-by-step guide to get you started, based on the tutorials and resources available:
### Step 1: Understand the Basics of LangGraph
LangGraph is a library that extends the capabilities of LangChain, focusing on building stateful, multi-actor applications using Large Language Models (LLMs). It allows you to coordinate multiple LLM agents in complex workflows, using a graph-based approach.
### Step 2: Install LangGraph
First, you need to install the LangGraph library. You can do this using pip:
```bash
pip install -U langgraph
```
### Step 3: Define the Graph Structure
The core of LangGraph is the graph structure, which defines the nodes (actions or states) and edges (transitions between nodes).
#### Example: Solar Panel Energy Savings Calculator
Let's build a simple AI agent that calculates potential energy savings for solar panels based on user input.
1. **Define the StateGraph**:
- **Nodes**: These represent actions or states in your application.
- **Edges**: These define the transitions between nodes.
```python
from langgraph import StateGraph, State, Edge
# Define the nodes
start_node = State(key="start", action="greet_user")
input_node = State(key="input", action="get_user_input")
calculate_node = State(key="calculate", action="calculate_savings")
result_node = State(key="result", action="show_results")
# Define the edges
start_to_input = Edge(from_node=start_node, to_node=input_node)
input_to_calculate = Edge(from_node=input_node, to_node=calculate_node)
calculate_to_result = Edge(from_node=calculate_node, to_node=result_node)
# Create the graph
graph = StateGraph()
graph.add_state(start_node)
graph.add_state(input_node)
graph.add_state(calculate_node)
graph.add_state(result_node)
graph.add_edge(start_to_input)
graph.add_edge(input_to_calculate)
graph.add_edge(calculate_to_result)
```
### Step 4: Define the Actions
Each node in the graph has an associated action. These actions are Python functions that perform specific tasks.
```python
def greet_user(state, context):
    return {"message": "Hello! I can help you calculate energy savings for solar panels."}

def get_user_input(state, context):
    return {"message": "Please provide your monthly electricity cost in dollars."}

def calculate_savings(state, context):
    # Example calculation (you can use real data and more complex logic here)
    monthly_cost = float(input("Enter your monthly electricity cost: "))
    savings_per_kWh = 0.10  # Example savings rate
    annual_savings = monthly_cost * 12 * savings_per_kWh
    return {"savings": annual_savings}

def show_results(state, context):
    annual_savings = context.get("savings")
    return {"message": f"Your annual savings with solar panels could be ${annual_savings:.2f}."}
```
### Step 5: Run the Graph
Finally, you can run the graph to see how the agent processes user input and performs the calculations.
```python
# Initialize the graph and run it
context = {}
current_node = start_node
while current_node:
    action_result = current_node.action(current_node, context)
    print(action_result["message"])
    if "savings" in action_result:
        context["savings"] = action_result["savings"]
    current_node = graph.get_next_node(current_node, action_result)
```
### Step 6: Enhance with Advanced Features
Once you have the basic structure in place, you can enhance your agent with advanced features such as:
- **Long-term Memory**: Use external storage (e.g., Zep) to remember user conversations and preferences.
- **Conditional Edges**: Define conditions for transitions between nodes to handle different scenarios.
- **Human-in-the-Loop**: Allow human intervention for complex tasks or error handling.
### Additional Resources
- **Official Documentation**: [LangGraph Documentation](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
- **Comprehensive Guide**: [LangGraph Tutorial for Beginners](https://blog.futuresmart.ai/langgraph-tutorial-for-beginners)
- **Example Project**: [Building AI Agents with LangGraph](https://medium.com/@lorevanoudenhove/how-to-build-ai-agents-with-langgraph-a-step-by-step-guide-5d84d9c7e832)
### Conclusion
By following these steps, you can build a robust and flexible AI agent using LangGraph. Start with simple examples and gradually add more complex features to create powerful, stateful, and multi-actor applications. Happy coding!

Now that we have made two calls to the model, let's look at the state history.

```python
to_replay = None
for state in graph.get_state_history(config):
    print(f"Num Messages: {len(state.values['messages'])}, Next: {state.next}, checkpoint id = {state.config['configurable']['checkpoint_id']}")
    print("-" * 80)
    # get_state_history iterates newest-first, so the last assignment keeps
    # the state from when the first interaction was done
    if len(state.next) == 0:
        to_replay = state
```

Num Messages: 24, Next: (), checkpoint id = 1f027f2f-e5b4-6c84-8018-9fcb33b5f397
--------------------------------------------------------------------------------
Num Messages: 23, Next: ('chatbot_node',), checkpoint id = 1f027f2f-e414-6b0e-8017-3ad465b70767
--------------------------------------------------------------------------------
Num Messages: 22, Next: ('tools',), checkpoint id = 1f027f2f-d382-6692-8016-fcfaf9c9a9f7
--------------------------------------------------------------------------------
Num Messages: 21, Next: ('chatbot_node',), checkpoint id = 1f027f2f-d1cf-6930-8015-f64aa0e6f750
--------------------------------------------------------------------------------
Num Messages: 20, Next: ('tools',), checkpoint id = 1f027f2f-bca9-6164-8014-86452cb10d83
--------------------------------------------------------------------------------
Num Messages: 19, Next: ('chatbot_node',), checkpoint id = 1f027f2f-bac1-6d24-8013-b539f3e4cedb
--------------------------------------------------------------------------------
Num Messages: 18, Next: ('tools',), checkpoint id = 1f027f2f-aa0e-69fa-8012-4ca2d9109f4e
--------------------------------------------------------------------------------
Num Messages: 17, Next: ('chatbot_node',), checkpoint id = 1f027f2f-a861-62c4-8011-5707badab130
--------------------------------------------------------------------------------
Num Messages: 16, Next: ('tools',), checkpoint id = 1f027f2f-93cf-6112-8010-ee536e76cdf7
--------------------------------------------------------------------------------
Num Messages: 15, Next: ('chatbot_node',), checkpoint id = 1f027f2f-91f5-63fa-800f-6ff45b0ebf86
--------------------------------------------------------------------------------
Num Messages: 14, Next: ('tools',), checkpoint id = 1f027f2f-7e07-6190-800e-e0269b0cb0f4
--------------------------------------------------------------------------------
Num Messages: 13, Next: ('chatbot_node',), checkpoint id = 1f027f2f-7bf9-62a4-800d-bd2bf25381ac
--------------------------------------------------------------------------------
Num Messages: 12, Next: ('tools',), checkpoint id = 1f027f2f-639f-6172-800c-e54c8b1b1f4a
--------------------------------------------------------------------------------
Num Messages: 11, Next: ('chatbot_node',), checkpoint id = 1f027f2f-621b-6972-800b-184a824ce9cb
--------------------------------------------------------------------------------
Num Messages: 10, Next: ('tools',), checkpoint id = 1f027f2f-56df-66a8-800a-d56ee9317382
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f2f-5546-60d0-8009-41ee7c932b49
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f2f-5542-6ff2-8008-e2f4e8278c23
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f027f2c-8873-61d6-8007-8a1c60438002
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f027f2c-8504-663a-8006-517227b123b6
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('tools',), checkpoint id = 1f027f2c-75dc-6248-8005-e198dd299848
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f027f2c-7448-69d6-8004-e3c6d5c4c5a4
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f027f2c-627b-6f6e-8003-22208fac7c89
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f027f2c-6122-6190-8002-b745c42a724e
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f027f2c-4c4c-6720-8001-8a1c73b894c1
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f027f2c-4a91-6278-8000-56b65f6d77cd
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f027f2c-4a8d-6a1a-bfff-2f7cbde97290
--------------------------------------------------------------------------------

We have saved in to_replay the graph state from right after it gave us the first answer, just before the second message was introduced. We can go back to a past state and resume the flow from there.
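
Before replaying, we can fetch that snapshot explicitly with graph.get_state. A minimal sanity check; the expected values in the comments come from the history listing above:

```python
# Load the snapshot that to_replay points at
snapshot = graph.get_state(to_replay.config)
print(len(snapshot.values["messages"]))  # 8 -> the messages of the first run
print(snapshot.next)                     # () -> nothing left to execute at this checkpoint
```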

The checkpoint configuration contains the checkpoint_id, a time-ordered identifier for that point in the flow. We can print it to check that we are at the state we want to be in.

```python
print(to_replay.config)
```

{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f027f2c-8873-61d6-8007-8a1c60438002'}}

If we look at the list of states above, we can see that this ID matches the checkpoint taken right before the second message was introduced.
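
Since to_replay.config is a plain dictionary, you don't need to keep the Python object around. As a sketch, an equivalent config can be rebuilt by hand from a stored checkpoint_id (the manual_config name is ours, using the ID shown above):

```python
# Any config carrying the thread_id and a checkpoint_id addresses that exact snapshot
manual_config = {
    "configurable": {
        "thread_id": "1",
        "checkpoint_id": "1f027f2c-8873-61d6-8007-8a1c60438002",
    }
}
print(graph.get_state(manual_config).config)  # should match to_replay.config
```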

Passing this checkpoint_id to LangGraph loads the state at that point in the flow, so we create a new message and pass it to the graph.

```python
user_input = "Thanks"
# The `checkpoint_id` in `to_replay.config` corresponds to a state we've persisted to our checkpointer.
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    to_replay.config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

================================ Human Message =================================
Thanks
================================== Ai Message ==================================
You're welcome! If you have any more questions about LangGraph or any other topics, feel free to ask. Happy learning! 🚀

```python
for state in graph.get_state_history(config):
    print(f"Num Messages: {len(state.values['messages'])}, Next: {state.next}, checkpoint id = {state.config['configurable']['checkpoint_id']}")
    print("-" * 80)
```

Num Messages: 10, Next: (), checkpoint id = 1f027f43-71ae-67e0-800a-d84a557441fc
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f43-5b1f-6ad8-8009-34f409789bc4
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f43-5b1b-68a2-8008-fbbcbd1c175e
--------------------------------------------------------------------------------
Num Messages: 24, Next: (), checkpoint id = 1f027f2f-e5b4-6c84-8018-9fcb33b5f397
--------------------------------------------------------------------------------
Num Messages: 23, Next: ('chatbot_node',), checkpoint id = 1f027f2f-e414-6b0e-8017-3ad465b70767
--------------------------------------------------------------------------------
Num Messages: 22, Next: ('tools',), checkpoint id = 1f027f2f-d382-6692-8016-fcfaf9c9a9f7
--------------------------------------------------------------------------------
Num Messages: 21, Next: ('chatbot_node',), checkpoint id = 1f027f2f-d1cf-6930-8015-f64aa0e6f750
--------------------------------------------------------------------------------
Num Messages: 20, Next: ('tools',), checkpoint id = 1f027f2f-bca9-6164-8014-86452cb10d83
--------------------------------------------------------------------------------
Num Messages: 19, Next: ('chatbot_node',), checkpoint id = 1f027f2f-bac1-6d24-8013-b539f3e4cedb
--------------------------------------------------------------------------------
Num Messages: 18, Next: ('tools',), checkpoint id = 1f027f2f-aa0e-69fa-8012-4ca2d9109f4e
--------------------------------------------------------------------------------
Num Messages: 17, Next: ('chatbot_node',), checkpoint id = 1f027f2f-a861-62c4-8011-5707badab130
--------------------------------------------------------------------------------
Num Messages: 16, Next: ('tools',), checkpoint id = 1f027f2f-93cf-6112-8010-ee536e76cdf7
--------------------------------------------------------------------------------
Num Messages: 15, Next: ('chatbot_node',), checkpoint id = 1f027f2f-91f5-63fa-800f-6ff45b0ebf86
--------------------------------------------------------------------------------
Num Messages: 14, Next: ('tools',), checkpoint id = 1f027f2f-7e07-6190-800e-e0269b0cb0f4
--------------------------------------------------------------------------------
Num Messages: 13, Next: ('chatbot_node',), checkpoint id = 1f027f2f-7bf9-62a4-800d-bd2bf25381ac
--------------------------------------------------------------------------------
Num Messages: 12, Next: ('tools',), checkpoint id = 1f027f2f-639f-6172-800c-e54c8b1b1f4a
--------------------------------------------------------------------------------
Num Messages: 11, Next: ('chatbot_node',), checkpoint id = 1f027f2f-621b-6972-800b-184a824ce9cb
--------------------------------------------------------------------------------
Num Messages: 10, Next: ('tools',), checkpoint id = 1f027f2f-56df-66a8-800a-d56ee9317382
--------------------------------------------------------------------------------
Num Messages: 9, Next: ('chatbot_node',), checkpoint id = 1f027f2f-5546-60d0-8009-41ee7c932b49
--------------------------------------------------------------------------------
Num Messages: 8, Next: ('__start__',), checkpoint id = 1f027f2f-5542-6ff2-8008-e2f4e8278c23
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f027f2c-8873-61d6-8007-8a1c60438002
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f027f2c-8504-663a-8006-517227b123b6
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('tools',), checkpoint id = 1f027f2c-75dc-6248-8005-e198dd299848
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f027f2c-7448-69d6-8004-e3c6d5c4c5a4
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f027f2c-627b-6f6e-8003-22208fac7c89
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f027f2c-6122-6190-8002-b745c42a724e
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f027f2c-4c4c-6720-8001-8a1c73b894c1
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f027f2c-4a91-6278-8000-56b65f6d77cd
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f027f2c-4a8d-6a1a-bfff-2f7cbde97290
--------------------------------------------------------------------------------

In the history we can see that the graph first ran everything we did, but then branched off from an earlier checkpoint and re-ran the flow from there; both branches remain stored in the history.
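
The fork sits at the top of this listing. To inspect just the newest checkpoints, get_state_history accepts a limit keyword; a quick sketch:

```python
# Show only the three most recent checkpoints (the new branch created by the replay)
for state in graph.get_state_history(config, limit=3):
    print(state.config["configurable"]["checkpoint_id"], state.next)
```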

Let me rewrite the whole graph in one block.

```python
import os
import dotenv
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from huggingface_hub import login
from IPython.display import Image, display

os.environ["LANGCHAIN_TRACING_V2"] = "false"  # Disable LangSmith tracing

class State(TypedDict):
    messages: Annotated[list, add_messages]

dotenv.load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_LANGGRAPH")
TAVILY_API_KEY = os.getenv("TAVILY_LANGGRAPH_API_KEY")

# Tools
wrapper = TavilySearchAPIWrapper(tavily_api_key=TAVILY_API_KEY)
search_tool = TavilySearchResults(api_wrapper=wrapper, max_results=2)
tools_list = [search_tool]
tool_node = ToolNode(tools=tools_list)

# Create the LLM
login(token=HUGGINGFACE_TOKEN)
MODEL = "Qwen/Qwen2.5-72B-Instruct"
model = HuggingFaceEndpoint(
    repo_id=MODEL,
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Create the chat model
llm = ChatHuggingFace(llm=model)

# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools_list)

# Define the chatbot function
def chatbot_function(state: State):
    message = llm_with_tools.invoke(state["messages"])
    return {"messages": [message]}

# Create the graph
graph_builder = StateGraph(State)

# Add nodes
graph_builder.add_node("chatbot_node", chatbot_function)
graph_builder.add_node("tools", tool_node)

# Add edges
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_conditional_edges("chatbot_node", tools_condition)
graph_builder.add_edge("tools", "chatbot_node")

# Add checkpointer
memory = MemorySaver()

# Compile
graph = graph_builder.compile(checkpointer=memory)

# Visualize
try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception as e:
    print(f"Error rendering the graph: {e}")
```

Error rendering the graph: Failed to reach https://mermaid.ink/ API while trying to render your graph after 1 retries. To resolve this issue:
1. Check your internet connection and try again
2. Try with higher retry settings: `draw_mermaid_png(..., max_retries=5, retry_delay=2.0)`
3. Use the Pyppeteer rendering method which will render your graph locally in a browser: `draw_mermaid_png(..., draw_method=MermaidDrawMethod.PYPPETEER)`
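
The error message itself lists the workarounds: raise the retry settings or render locally. A minimal sketch of the local option, assuming pyppeteer is installed (`pip install pyppeteer`):

```python
from IPython.display import Image, display
from langchain_core.runnables.graph import MermaidDrawMethod

# Render the Mermaid diagram in a local headless browser instead of calling the mermaid.ink API
png_bytes = graph.get_graph().draw_mermaid_png(draw_method=MermaidDrawMethod.PYPPETEER)
display(Image(png_bytes))
```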

We make the first call to the model.

```python
config = {"configurable": {"thread_id": "1"}}
user_input = (
    "I'm learning LangGraph. "
    "Could you do some research on it for me?"
)
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

================================ Human Message =================================
I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "What is LangGraph? - IBM", "url": "https://www.ibm.com/think/topics/langgraph", "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. It provides a set of tools and libraries that enable users to create, run and optimize large language models (LLMs) in a scalable and efficient manner. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. [...] Agent systems: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles or video games. LLM applications: By using LangGraph’s capabilities, developers can build more sophisticated AI models that learn and improve over time. Norwegian Cruise Line uses LangGraph to compile, construct and refine guest-facing AI solutions. This capability allows for improved and personalized guest experiences. [...] By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without slowing down or sacrificing efficiency. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback. In the world of LLMs, this process is referred to as reflection.", "score": 0.9353998}, {opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}]
================================== Ai Message ==================================
Tool Calls:
tavily_search_results_json (0)
Call ID: 0
Args:
query: LangGraph
================================= Tool Message =================================
Name: tavily_search_results_json
[{opening_brace}"title": "What is LangGraph? - IBM", "url": "https://www.ibm.com/think/topics/langgraph", "content": "LangGraph, created by LangChain, is an open source AI agent framework designed to build, deploy and manage complex generative AI agent workflows. It provides a set of tools and libraries that enable users to create, run and optimize large language models (LLMs) in a scalable and efficient manner. At its core, LangGraph uses the power of graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow. [...] Agent systems: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles or video games. LLM applications: By using LangGraph’s capabilities, developers can build more sophisticated AI models that learn and improve over time. Norwegian Cruise Line uses LangGraph to compile, construct and refine guest-facing AI solutions. This capability allows for improved and personalized guest experiences. [...] By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without slowing down or sacrificing efficiency. LangGraph uses enhanced decision-making by modeling complex relationships between nodes, which means it uses AI agents to analyze their past actions and feedback. In the world of LLMs, this process is referred to as reflection.", "score": 0.9353998}, {opening_brace}"title": "LangGraph Quickstart - GitHub Pages", "url": "https://langchain-ai.github.io/langgraph/tutorials/introduction/", "content": "[](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-9-1)Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-6) LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows. [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-7) [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-21-8)2. Developer: [...] [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-19)LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation: [](https://langchain-ai.github.io/langgraph/tutorials/introduction/#__codelineno-48-20)", "score": 0.9328032}]
================================== Ai Message ==================================
LangGraph is an open-source AI agent framework developed by LangChain, designed to build, deploy, and manage complex generative AI agent workflows. Here are some key points about LangGraph:
### Overview
- **Purpose**: LangGraph is aimed at creating, running, and optimizing large language models (LLMs) in a scalable and efficient manner.
- **Graph-Based Architecture**: It uses graph-based architectures to model and manage the intricate relationships between various components of an AI agent workflow.
### Features
- **Agent Systems**: LangGraph provides a framework for building agent-based systems, which can be used in applications such as robotics, autonomous vehicles, or video games.
- **LLM Applications**: Developers can build more sophisticated AI models that learn and improve over time. For example, Norwegian Cruise Line uses LangGraph to compile, construct, and refine guest-facing AI solutions, enhancing personalized guest experiences.
- **Scalability**: By using a graph-based architecture, LangGraph enables users to scale artificial intelligence workflows without sacrificing efficiency.
- **Enhanced Decision-Making**: LangGraph uses AI agents to analyze their past actions and feedback, a process referred to as "reflection" in the context of LLMs.
### Developer Resources
- **Quickstart Guide**: The LangGraph Quickstart guide on GitHub provides a detailed introduction to building stateful multi-agent applications using language models. It covers tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions.
- **Built on LangChain**: LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. This makes it particularly useful for developing more complex, stateful, multi-actor applications with LLMs.
### Further Reading
- **What is LangGraph? - IBM**: [Link](https://www.ibm.com/think/topics/langgraph)
- **LangGraph Quickstart - GitHub Pages**: [Link](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
These resources should provide a solid foundation for understanding and getting started with LangGraph. If you have any specific questions or need further details, feel free to ask!

And now the second call.

```python
user_input = (
    "Ya that's helpful. Maybe I'll "
    "build an autonomous agent with it!"
)
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

================================ Human Message =================================
Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================
That sounds like an exciting project! Building an autonomous agent using LangGraph can be a rewarding experience. Here are some steps and tips to help you get started:
### 1. **Understand the Basics of LangGraph**
- **Read the Documentation**: Start with the official LangGraph documentation and quickstart guide. This will give you a solid understanding of the framework's capabilities and how to use its tools.
- **Quickstart Guide**: [LangGraph Quickstart - GitHub Pages](https://langchain-ai.github.io/langgraph/tutorials/introduction/)
### 2. **Set Up Your Development Environment**
- **Install LangChain and LangGraph**: Ensure you have the necessary dependencies installed. LangGraph is built on top of LangChain, so you'll need to set up both.
```bash
pip install langchain langgraph
```
### 3. **Define Your Agent's Objectives**
- **Identify the Use Case**: What specific tasks do you want your autonomous agent to perform? This could be anything from navigating a virtual environment, responding to user queries, or managing a robotic system.
- **Define the State and Actions**: Determine the states your agent can be in and the actions it can take. This will help you design the state machine and workflows.
### 4. **Design the Graph-Based Workflow**
- **Create Nodes and Edges**: In LangGraph, you'll define nodes (agents or components) and edges (interactions or transitions). Each node can represent a different part of your agent's functionality.
- **Define State Transitions**: Use the graph-based architecture to define how the agent transitions between different states based on actions and events.
### 5. **Implement the Agent**
- **Write the Code**: Start coding your agent using the LangGraph library. You can use the provided tools to create and manage the agent's workflows.
- **Example**: Here’s a simple example to get you started:
```python
from langgraph import AgentGraph, Node, Edge
# Define nodes
node1 = Node("Sensor", process=sensor_process)
node2 = Node("Decision", process=decision_process)
node3 = Node("Actuator", process=actuator_process)
# Define edges
edge1 = Edge(node1, node2)
edge2 = Edge(node2, node3)
# Create the agent graph
agent_graph = AgentGraph()
agent_graph.add_node(node1)
agent_graph.add_node(node2)
agent_graph.add_node(node3)
agent_graph.add_edge(edge1)
agent_graph.add_edge(edge2)
# Run the graph
agent_graph.run()
```
### 6. **Test and Iterate**
- **Run Simulations**: Test your agent in a simulated environment to see how it performs. Use this to identify and fix any issues.
- **Refine the Model**: Based on the test results, refine your agent's model and workflows. You can add more nodes, edges, or improve the decision-making processes.
### 7. **Deploy and Monitor**
- **Deploy the Agent**: Once you are satisfied with the performance, you can deploy your agent in the real world or a production environment.
- **Monitor and Maintain**: Continuously monitor the agent's performance and make adjustments as needed. Use feedback loops to improve the agent over time.
### 8. **Community and Support**
- **Join the Community**: Engage with the LangChain and LangGraph community. You can find support, share ideas, and get feedback from other developers.
- **GitHub**: [LangGraph GitHub](https://github.com/langchain-ai/langgraph)
- **Forums and Discussion Boards**: Check out forums and discussion boards related to LangGraph and LangChain.
### Additional Resources
- **Tutorials and Examples**: Look for tutorials and example projects to get more hands-on experience.
- **Research Papers and Articles**: Read research papers and articles to deepen your understanding of AI agent design and graph-based architectures.
Good luck with your project! If you have any specific questions or need further guidance, feel free to ask.

Let's look at the state history.

```python
to_replay = None
for state in graph.get_state_history(config):
    print(f"Num Messages: {len(state.values['messages'])}, Next: {state.next}, checkpoint id = {state.config['configurable']['checkpoint_id']}")
    print("-" * 80)
    # get_state_history iterates newest-first, so the last assignment keeps
    # the state from when the first interaction was done
    if len(state.next) == 0:
        to_replay = state
```

Num Messages: 8, Next: (), checkpoint id = 1f03263e-a96c-6446-8008-d2c11df0b6cb
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263d-7a35-6660-8007-a37d4b584c88
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263d-7a32-624e-8006-6509bbf32ebe
--------------------------------------------------------------------------------
Num Messages: 6, Next: (), checkpoint id = 1f03263d-7a1a-6f36-8005-f10b5d83f22c
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f03263c-c53f-6666-8004-c6d35868dd73
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f03263c-b14b-68f8-8003-28558fa38dbc
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f03263c-a66b-6276-8002-2dc89fca4d99
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f03263c-8c7c-68ec-8001-fb8a9aa300b0
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f03263c-6d06-68d2-8000-ced2e7b8538f
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f03263c-6cdb-63e4-bfff-c644b57cee28
--------------------------------------------------------------------------------

```python
print(to_replay.config)
```

{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1f03263d-7a1a-6f36-8005-f10b5d83f22c'}}

Passing this checkpoint_id to LangGraph loads the state at that point in the flow, so we create a new message and pass it to the graph.

```python
user_input = "Thanks"
# The `checkpoint_id` in `to_replay.config` corresponds to a state we've persisted to our checkpointer.
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    to_replay.config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

================================ Human Message =================================
Thanks
================================== Ai Message ==================================
You're welcome! If you have any more questions about LangGraph or any other topic, feel free to reach out. Happy learning! 😊

```python
for state in graph.get_state_history(config):
    print(f"Num Messages: {len(state.values['messages'])}, Next: {state.next}, checkpoint id = {state.config['configurable']['checkpoint_id']}")
    print("-" * 80)
```

Num Messages: 8, Next: (), checkpoint id = 1f03263f-fcb9-63a0-8008-e8c4a3fb44f9
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263f-eb3b-663c-8007-72da4d16bf64
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263f-eb36-6ac4-8006-a2333805d5d6
--------------------------------------------------------------------------------
Num Messages: 8, Next: (), checkpoint id = 1f03263e-a96c-6446-8008-d2c11df0b6cb
--------------------------------------------------------------------------------
Num Messages: 7, Next: ('chatbot_node',), checkpoint id = 1f03263d-7a35-6660-8007-a37d4b584c88
--------------------------------------------------------------------------------
Num Messages: 6, Next: ('__start__',), checkpoint id = 1f03263d-7a32-624e-8006-6509bbf32ebe
--------------------------------------------------------------------------------
Num Messages: 6, Next: (), checkpoint id = 1f03263d-7a1a-6f36-8005-f10b5d83f22c
--------------------------------------------------------------------------------
Num Messages: 5, Next: ('chatbot_node',), checkpoint id = 1f03263c-c53f-6666-8004-c6d35868dd73
--------------------------------------------------------------------------------
Num Messages: 4, Next: ('tools',), checkpoint id = 1f03263c-b14b-68f8-8003-28558fa38dbc
--------------------------------------------------------------------------------
Num Messages: 3, Next: ('chatbot_node',), checkpoint id = 1f03263c-a66b-6276-8002-2dc89fca4d99
--------------------------------------------------------------------------------
Num Messages: 2, Next: ('tools',), checkpoint id = 1f03263c-8c7c-68ec-8001-fb8a9aa300b0
--------------------------------------------------------------------------------
Num Messages: 1, Next: ('chatbot_node',), checkpoint id = 1f03263c-6d06-68d2-8000-ced2e7b8538f
--------------------------------------------------------------------------------
Num Messages: 0, Next: ('__start__',), checkpoint id = 1f03263c-6cdb-63e4-bfff-c644b57cee28
--------------------------------------------------------------------------------
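
Both branches remain addressable after the fork. As a closing sketch, we can still read the leaf of the old branch (the 8-message checkpoint from before the replay) even though the thread now points at the new branch:

```python
# Load the final state of the old branch by addressing its checkpoint directly
old_leaf_config = {
    "configurable": {
        "thread_id": "1",
        "checkpoint_id": "1f03263e-a96c-6446-8008-d2c11df0b6cb",
    }
}
old_snapshot = graph.get_state(old_leaf_config)
print(len(old_snapshot.values["messages"]))  # 8 -> the pre-replay conversation
```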
