Langchain Academy translated


Agent Memory

Review

Previously, we built an agent that can:

  • Act - let the model call specific tools
  • Observe - pass the tool output back to the model
  • Reason - let the model reason about the tool output to decide what to do next (e.g., call another tool or respond directly)
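The act-observe-reason loop above can be sketched in plain Python. This is a hypothetical mock for illustration only; the function and field names here (`react_loop`, `tool_call`) are not the langgraph API, which we build with a graph below.

```python
# Hypothetical sketch of the act -> observe -> reason loop (not the langgraph API).
def react_loop(model, tools, messages):
    while True:
        reply = model(messages)                 # reason: model decides the next action
        if not reply.get("tool_call"):          # no tool call -> final answer
            return reply["content"]
        name, args = reply["tool_call"]         # act: call the requested tool
        observation = tools[name](**args)
        messages.append(("tool", observation))  # observe: feed the output back

# Mock model: first requests add(3, 4), then answers using the observation.
def mock_model(messages):
    if not any(role == "tool" for role, _ in messages):
        return {"tool_call": ("add", {"a": 3, "b": 4})}
    return {"content": f"The result is {messages[-1][1]}"}

print(react_loop(mock_model, {"add": lambda a, b: a + b}, [("user", "Add 3 and 4.")]))
# The result is 7
```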


Goal

Now, we'll extend our agent by introducing memory.

In [ ]:
%%capture --no-stderr
%pip install --quiet -U langchain_openai langchain_core langgraph langgraph-prebuilt
In [1]:
import os, getpass

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")

We'll use LangSmith for tracing.

In [2]:
_set_env("LANGSMITH_API_KEY")
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "langchain-academy"

This follows on from what we did previously.

In [3]:
from langchain_openai import ChatOpenAI

def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

# This will be a tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

def divide(a: int, b: int) -> float:
    """Divide a and b.

    Args:
        a: first int
        b: second int
    """
    return a / b

tools = [add, multiply, divide]
llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools(tools)
In [4]:
from langgraph.graph import MessagesState
from langchain_core.messages import HumanMessage, SystemMessage

# System message
sys_msg = SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs.")

# Node
def assistant(state: MessagesState):
    return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}
In [5]:
from langgraph.graph import START, StateGraph
from langgraph.prebuilt import tools_condition, ToolNode
from IPython.display import Image, display

# Graph
builder = StateGraph(MessagesState)

# Define nodes: these do the work
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))

# Define edges: these determine how the control flow moves
builder.add_edge(START, "assistant")
builder.add_conditional_edges(
    "assistant",
    # If the latest message (result) from assistant is a tool call -> tools_condition routes to tools
    # If the latest message (result) from assistant is not a tool call -> tools_condition routes to END
    tools_condition,
)
builder.add_edge("tools", "assistant")
react_graph = builder.compile()

# Show
display(Image(react_graph.get_graph(xray=True).draw_mermaid_png()))

Memory

Let's run our agent, as before.

In [6]:
messages = [HumanMessage(content="Add 3 and 4.")]
messages = react_graph.invoke({"messages": messages})
for m in messages['messages']:
    m.pretty_print()
================================ Human Message =================================

Add 3 and 4.
================================== Ai Message ==================================
Tool Calls:
  add (call_zZ4JPASfUinchT8wOqg9hCZO)
 Call ID: call_zZ4JPASfUinchT8wOqg9hCZO
  Args:
    a: 3
    b: 4
================================= Tool Message =================================
Name: add

7
================================== Ai Message ==================================

The sum of 3 and 4 is 7.

Now, let's multiply by 2!

In [7]:
messages = [HumanMessage(content="Multiply that by 2.")]
messages = react_graph.invoke({"messages": messages})
for m in messages['messages']:
    m.pretty_print()
================================ Human Message =================================

Multiply that by 2.
================================== Ai Message ==================================
Tool Calls:
  multiply (call_prnkuG7OYQtbrtVQmH2d3Nl7)
 Call ID: call_prnkuG7OYQtbrtVQmH2d3Nl7
  Args:
    a: 2
    b: 2
================================= Tool Message =================================
Name: multiply

4
================================== Ai Message ==================================

The result of multiplying 2 by 2 is 4.

We don't retain memory of our initial chat!

This is because state is transient: it exists only within a single graph execution.

Of course, this limits our ability to have multi-turn conversations with interruptions.

We can use persistence to address this!

LangGraph can use a checkpointer to automatically save the graph state after each step.

This built-in persistence layer gives us memory, allowing LangGraph to pick up from the last state update.

One of the easiest checkpointers to use is MemorySaver, an in-memory key-value store for graph state.

All we need to do is compile the graph with a checkpointer, and our graph has memory!

In [8]:
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
react_graph_memory = builder.compile(checkpointer=memory)

When we use memory, we need to specify a thread_id.

This thread_id will store our collection of graph states.

Here is a cartoon:

  • The checkpointer writes the state at every step of the graph
  • These checkpoints are saved in a thread
  • We can access that thread in the future using the thread_id
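Conceptually, the checkpointer behaves like a key-value store keyed by thread_id. Here is a deliberately simplified mock of that idea (the real MemorySaver stores a checkpoint at every graph step and does much more; `ToyCheckpointer` is purely illustrative):

```python
# A deliberately simplified mock of thread-keyed checkpointing
# (the real MemorySaver checkpoints every graph step).
class ToyCheckpointer:
    def __init__(self):
        self._threads = {}  # thread_id -> last saved state

    def put(self, thread_id, state):
        self._threads[thread_id] = {"messages": list(state["messages"])}

    def get(self, thread_id):
        return self._threads.get(thread_id, {"messages": []})

saver = ToyCheckpointer()

# First run on thread "1" ends with these messages checkpointed:
saver.put("1", {"messages": ["Add 3 and 4.", "The sum of 3 and 4 is 7."]})

# A later run with the same thread_id resumes from the saved state:
state = saver.get("1")
state["messages"].append("Multiply that by 2.")
print(len(state["messages"]))  # 3
```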


In [9]:
# Specify a thread
config = {"configurable": {"thread_id": "1"}}

# Specify an input
messages = [HumanMessage(content="Add 3 and 4.")]

# Run
messages = react_graph_memory.invoke({"messages": messages}, config)
for m in messages['messages']:
    m.pretty_print()
================================ Human Message =================================

Add 3 and 4.
================================== Ai Message ==================================
Tool Calls:
  add (call_MSupVAgej4PShIZs7NXOE6En)
 Call ID: call_MSupVAgej4PShIZs7NXOE6En
  Args:
    a: 3
    b: 4
================================= Tool Message =================================
Name: add

7
================================== Ai Message ==================================

The sum of 3 and 4 is 7.

If we pass the same thread_id, we can proceed from the previously logged state checkpoint!

In this case, the above conversation is captured in the thread.

The HumanMessage we pass, "Multiply that by 2.", is appended to the conversation above.

So the model now knows that "that" refers to "The sum of 3 and 4 is 7."

In [10]:
messages = [HumanMessage(content="Multiply that by 2.")]
messages = react_graph_memory.invoke({"messages": messages}, config)
for m in messages['messages']:
    m.pretty_print()
================================ Human Message =================================

Add 3 and 4.
================================== Ai Message ==================================
Tool Calls:
  add (call_MSupVAgej4PShIZs7NXOE6En)
 Call ID: call_MSupVAgej4PShIZs7NXOE6En
  Args:
    a: 3
    b: 4
================================= Tool Message =================================
Name: add

7
================================== Ai Message ==================================

The sum of 3 and 4 is 7.
================================ Human Message =================================

Multiply that by 2.
================================== Ai Message ==================================
Tool Calls:
  multiply (call_fWN7lnSZZm82tAg7RGeuWusO)
 Call ID: call_fWN7lnSZZm82tAg7RGeuWusO
  Args:
    a: 7
    b: 2
================================= Tool Message =================================
Name: multiply

14
================================== Ai Message ==================================

The result of multiplying 7 by 2 is 14.

LangGraph Studio

⚠️ Disclaimer

Since these videos were filmed, we have updated Studio so that it can be run locally and opened in your browser. This is now the preferred way to run Studio (rather than the Desktop App shown in the videos). See the documentation and how-to guides for the local development server. To start the local development server, run the following command in your terminal from the module-1/studio/ directory of this module:

langgraph dev