
Open in Colab | Open in LangChain Academy

Breakpoints¶

Review¶

For human-in-the-loop, we often want to see our graph outputs as the graph is running.

We laid the foundations for this with streaming.

Goals¶

Now, let's talk about the motivations for human-in-the-loop:

(1) Approval - we can interrupt our agent, surface its state to a user, and allow the user to approve an action

(2) Debugging - we can rewind the graph state to reproduce or avoid issues

(3) Editing - we can modify the state directly

LangGraph offers several ways to get or update agent state, supporting various human-in-the-loop workflows.

First, we'll introduce breakpoints, which provide a simple way to stop the graph at specific steps.

We'll show how this enables user approval.

In [ ]:
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_openai langgraph_sdk langgraph-prebuilt
In [ ]:
import os, getpass

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")

Breakpoints for human approval¶

Let's reconsider the simple agent that we worked with in Module 1.

Let's assume that we are concerned about tool use: we want a human to approve the agent's use of any of its tools.

All we need to do is compile the graph with interrupt_before=["tools"], where tools is our tool-calling node.

This means that execution will be interrupted before the tools node, which executes the tool call, and will wait for human approval.

In [4]:
from langchain_openai import ChatOpenAI

def multiply(a: int, b: int) -> int:
    """Multiply a and b.

    Args:
        a: first int
        b: second int
    """
    return a * b

# This will be a tool
def add(a: int, b: int) -> int:
    """Adds a and b.

    Args:
        a: first int
        b: second int
    """
    return a + b

def divide(a: int, b: int) -> float:
    """Divide a by b.

    Args:
        a: first int
        b: second int
    """
    return a / b

tools = [add, multiply, divide]
llm = ChatOpenAI(model="gpt-4o")
llm_with_tools = llm.bind_tools(tools)
In [5]:
from IPython.display import Image, display

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState
from langgraph.graph import START, StateGraph
from langgraph.prebuilt import tools_condition, ToolNode

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

# System message
sys_msg = SystemMessage(content="You are a helpful assistant tasked with performing arithmetic on a set of inputs.")

# Node
def assistant(state: MessagesState):
    return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}

# Graph
builder = StateGraph(MessagesState)

# Define nodes: these do the work
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode(tools))

# Define edges: these determine the control flow
builder.add_edge(START, "assistant")
builder.add_conditional_edges(
    "assistant",
    # If the latest message (result) from assistant is a tool call -> tools_condition routes to tools
    # If the latest message (result) from assistant is not a tool call -> tools_condition routes to END
    tools_condition,
)
builder.add_edge("tools", "assistant")

memory = MemorySaver()
graph = builder.compile(interrupt_before=["tools"], checkpointer=memory)

# Show
display(Image(graph.get_graph(xray=True).draw_mermaid_png()))
(Graph diagram: START → assistant; conditional edge to tools or END; tools → assistant.)
In [6]:
# Input
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}

# Thread
thread = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="values"):
    event['messages'][-1].pretty_print()
================================ Human Message =================================

Multiply 2 and 3
================================== Ai Message ==================================
Tool Calls:
  multiply (call_oFkGpnO8CuwW9A1rk49nqBpY)
 Call ID: call_oFkGpnO8CuwW9A1rk49nqBpY
  Args:
    a: 2
    b: 3

We can get the state and look at the next node to call.

This is a nice way to see that the graph has been interrupted.

In [7]:
state = graph.get_state(thread)
state.next
Out[7]:
('tools',)
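Beyond state.next, the snapshot returned by get_state also carries the checkpointed values. A minimal sketch (assuming values mirrors our MessagesState) that surfaces the pending tool call before we decide whether to resume:

In [ ]:
# Inspect the pending tool call captured at the breakpoint.
# The last message is the AIMessage whose tool_calls routed us to "tools".
pending = state.values["messages"][-1]
for call in pending.tool_calls:
    print(f"Pending tool: {call['name']} with args {call['args']}")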

Now, we'll introduce a nice trick.

When we invoke the graph with None, it will just continue from the last state checkpoint!

(Figure: breakpoints.jpg)

For clarity, LangGraph will re-emit the current state, which contains the AIMessage with the tool call.

And then it will proceed to execute the following steps in the graph, which start with the tools node.

We see that the tools node runs with this tool call, and the result is passed back to the chat model for its final answer.

In [8]:
for event in graph.stream(None, thread, stream_mode="values"):
    event['messages'][-1].pretty_print()
================================== Ai Message ==================================
Tool Calls:
  multiply (call_oFkGpnO8CuwW9A1rk49nqBpY)
 Call ID: call_oFkGpnO8CuwW9A1rk49nqBpY
  Args:
    a: 2
    b: 3
================================= Tool Message =================================
Name: multiply

6
================================== Ai Message ==================================

The result of multiplying 2 and 3 is 6.
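interrupt_before also has a counterpart at compile time: interrupt_after, which pauses once a node has finished rather than before it runs. A small sketch (reusing the builder and memory from above) that would let a human review each tool result before the assistant summarizes it:

In [ ]:
# Sketch: pause *after* the tools node instead of before it,
# so the tool result can be inspected before the final answer.
graph_after = builder.compile(interrupt_after=["tools"], checkpointer=memory)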

Now, let's bring these together with a specific user-approval step that accepts user input.

In [9]:
# Input
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}

# Thread
thread = {"configurable": {"thread_id": "2"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="values"):
    event['messages'][-1].pretty_print()

# Get user feedback
user_approval = input("Do you want to call the tool? (yes/no): ")

# Check approval
if user_approval.lower() == "yes":
    
    # If approved, continue the graph execution
    for event in graph.stream(None, thread, stream_mode="values"):
        event['messages'][-1].pretty_print()
        
else:
    print("Operation cancelled by user.")
================================ Human Message =================================

Multiply 2 and 3
================================== Ai Message ==================================
Tool Calls:
  multiply (call_tpHvTmsHSjSpYnymzdx553SU)
 Call ID: call_tpHvTmsHSjSpYnymzdx553SU
  Args:
    a: 2
    b: 3
================================== Ai Message ==================================
Tool Calls:
  multiply (call_tpHvTmsHSjSpYnymzdx553SU)
 Call ID: call_tpHvTmsHSjSpYnymzdx553SU
  Args:
    a: 2
    b: 3
================================= Tool Message =================================
Name: multiply

6
================================== Ai Message ==================================

The result of multiplying 2 and 3 is 6.
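If the user rejects the call, the thread above simply stops, parked before the tools node. One possible extension (a sketch, not part of this lesson; graph.update_state with as_node is covered when we edit graph state) is to answer the pending tool call manually, written as if the tools node had run, so the assistant can still respond:

In [ ]:
from langchain_core.messages import ToolMessage

# Sketch: on rejection, satisfy the pending tool call with a manual
# ToolMessage so the thread doesn't stay interrupted forever.
last_ai = graph.get_state(thread).values["messages"][-1]
rejections = [
    ToolMessage(content="Tool call denied by the user.", tool_call_id=c["id"])
    for c in last_ai.tool_calls
]
graph.update_state(thread, {"messages": rejections}, as_node="tools")

# Resume; the assistant now sees the denial instead of a tool result.
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()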

Breakpoints with the LangGraph API¶

⚠️ Disclaimer

Since the filming of these videos, we've updated Studio so that it can be run locally and opened in your browser. This is now the preferred way to run Studio (rather than the Desktop App shown in the videos). See documentation here on the local development server and here on running Studio locally. To start the local development server, run the following command in your terminal from the /studio directory of this module:

langgraph dev

You should see the following output:

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

Open your browser and navigate to the Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024.

The LangGraph API supports breakpoints.

In [1]:
if 'google.colab' in str(get_ipython()):
    raise Exception("Unfortunately LangGraph Studio is currently not supported on Google Colab")
In [2]:
# This is the URL of the local development server
from langgraph_sdk import get_client
client = get_client(url="http://127.0.0.1:2024")

As shown above, we can add interrupt_before=["node"] when compiling the graph that is running in Studio.

However, with the API, you can also pass interrupt_before directly to the stream method.

In [10]:
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}
thread = await client.threads.create()
async for chunk in client.runs.stream(
    thread["thread_id"],
    assistant_id="agent",
    input=initial_input,
    stream_mode="values",
    interrupt_before=["tools"],
):
    print(f"Receiving new event of type: {chunk.event}...")
    messages = chunk.data.get('messages', [])
    if messages:
        print(messages[-1])
    print("-" * 50)
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")} thread = await client.threads.create() async for chunk in client.runs.stream( thread["thread_id"], assistant_id="agent", input=initial_input, stream_mode="values", interrupt_before=["tools"], ): print(f"Receiving new event of type: {chunk.event}...") messages = chunk.data.get('messages', []) if messages: print(messages[-1]) print("-" * 50)
Receiving new event of type: metadata...
--------------------------------------------------
Receiving new event of type: values...
{'content': 'Multiply 2 and 3', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'human', 'name': None, 'id': '2a3b1e7a-f6d9-44c2-a4b4-b7f67aa3691c', 'example': False}
--------------------------------------------------
Receiving new event of type: values...
{'content': '', 'additional_kwargs': {'tool_calls': [{'id': 'call_ElnkVOf1H80dlwZLqO0PQTwS', 'function': {'arguments': '{"a":2,"b":3}', 'name': 'multiply'}, 'type': 'function'}], 'refusal': None}, 'response_metadata': {'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_eb9dce56a8', 'finish_reason': 'tool_calls', 'logprobs': None}, 'type': 'ai', 'name': None, 'id': 'run-89ee14dc-5f46-4dd9-91d9-e922c4a23572-0', 'example': False, 'tool_calls': [{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'call_ElnkVOf1H80dlwZLqO0PQTwS', 'type': 'tool_call'}], 'invalid_tool_calls': [], 'usage_metadata': {'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}}
--------------------------------------------------

Now, we can proceed from the breakpoint just as we did before: simply pass the thread_id and None as the input!

In [11]:
async for chunk in client.runs.stream(
    thread["thread_id"],
    "agent",
    input=None,
    stream_mode="values",
    interrupt_before=["tools"],
):
    print(f"Receiving new event of type: {chunk.event}...")
    messages = chunk.data.get('messages', [])
    if messages:
        print(messages[-1])
    print("-" * 50)
Receiving new event of type: metadata...
--------------------------------------------------
Receiving new event of type: values...
{'content': '', 'additional_kwargs': {'tool_calls': [{'id': 'call_ElnkVOf1H80dlwZLqO0PQTwS', 'function': {'arguments': '{"a":2,"b":3}', 'name': 'multiply'}, 'type': 'function'}], 'refusal': None}, 'response_metadata': {'token_usage': {'completion_tokens': 18, 'prompt_tokens': 134, 'total_tokens': 152, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_eb9dce56a8', 'finish_reason': 'tool_calls', 'logprobs': None}, 'type': 'ai', 'name': None, 'id': 'run-89ee14dc-5f46-4dd9-91d9-e922c4a23572-0', 'example': False, 'tool_calls': [{'name': 'multiply', 'args': {'a': 2, 'b': 3}, 'id': 'call_ElnkVOf1H80dlwZLqO0PQTwS', 'type': 'tool_call'}], 'invalid_tool_calls': [], 'usage_metadata': {'input_tokens': 134, 'output_tokens': 18, 'total_tokens': 152, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}}
--------------------------------------------------
Receiving new event of type: values...
{'content': '6', 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'tool', 'name': 'multiply', 'id': '5331919f-a26b-4d75-bf33-6dfaea2be1f7', 'tool_call_id': 'call_ElnkVOf1H80dlwZLqO0PQTwS', 'artifact': None, 'status': 'success'}
--------------------------------------------------
Receiving new event of type: values...
{'content': 'The result of multiplying 2 and 3 is 6.', 'additional_kwargs': {'refusal': None}, 'response_metadata': {'token_usage': {'completion_tokens': 15, 'prompt_tokens': 159, 'total_tokens': 174, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_eb9dce56a8', 'finish_reason': 'stop', 'logprobs': None}, 'type': 'ai', 'name': None, 'id': 'run-06b901ad-0760-4986-9d3f-a566e0d52efd-0', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': {'input_tokens': 159, 'output_tokens': 15, 'total_tokens': 174, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}}
--------------------------------------------------
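Putting the API pieces together, the same approval flow we ran locally also works over the SDK: stream until the interrupt before tools, collect the user's decision, then resume the thread with input=None. A sketch under the same assumptions as above (local dev server running, graph registered as "agent"):

In [ ]:
# Sketch: user approval over the LangGraph SDK.
initial_input = {"messages": HumanMessage(content="Multiply 2 and 3")}
thread = await client.threads.create()

# Run until the interrupt before the tools node.
async for chunk in client.runs.stream(
    thread["thread_id"],
    "agent",
    input=initial_input,
    stream_mode="values",
    interrupt_before=["tools"],
):
    messages = chunk.data.get("messages", [])
    if messages:
        print(messages[-1])

user_approval = input("Do you want to call the tool? (yes/no): ")
if user_approval.lower() == "yes":
    # Resume from the checkpoint; no new input is needed.
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",
        input=None,
        stream_mode="values",
    ):
        messages = chunk.data.get("messages", [])
        if messages:
            print(messages[-1])
else:
    print("Operation cancelled by user.")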