
Open in Colab | Open in LangChain Academy

Map-Reduce¶

Review¶

We're building a multi-agent research assistant that ties together all of the modules from this course.

To build this multi-agent assistant, we've been introducing a few LangGraph controllability topics.

We just covered parallelization and sub-graphs.

Goals¶

Now, we're going to cover map-reduce.

In [ ]:
%%capture --no-stderr
%pip install -U langchain_openai langgraph
In [ ]:
import os, getpass

def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")

_set_env("OPENAI_API_KEY")

We will use LangSmith for tracing.

In [ ]:
_set_env("LANGSMITH_API_KEY")
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_PROJECT"] = "langchain-academy"

Problem¶

Map-reduce operations are essential for efficient task decomposition and parallel processing.

The operation has two phases:

(1) Map - Break a task into smaller sub-tasks, processing each sub-task in parallel.

(2) Reduce - Aggregate the results across all of the completed, parallelized sub-tasks.

Let's design a system that will do two things:

(1) Map - Create a set of jokes about a topic.

(2) Reduce - Pick the best joke from the list.

We'll use an LLM to do the joke generation and selection.
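Before building this with LangGraph, the shape of the pipeline can be sketched in plain Python. The three helper functions below are hypothetical stand-ins for the LLM calls we define next, not real implementations:

```python
# LLM-free sketch of the map-reduce shape (stand-in functions, not real LLM calls).

def split_into_subjects(topic: str) -> list[str]:
    # Stand-in for the LLM call that proposes sub-topics.
    return [f"{topic} sub-topic {i}" for i in range(3)]

def write_joke(subject: str) -> str:
    # Stand-in for one parallel "map" task.
    return f"A placeholder joke about {subject}"

def pick_best(jokes: list[str]) -> str:
    # Stand-in "reduce": collapse the parallel results to one (here: the longest).
    return max(jokes, key=len)

subjects = split_into_subjects("animals")
jokes = [write_joke(s) for s in subjects]  # map phase (each task is independent)
best = pick_best(jokes)                    # reduce phase
```

In LangGraph, the map phase will run as parallel node invocations and the reduce phase as a downstream node that sees all accumulated results.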

In [3]:
from langchain_openai import ChatOpenAI

# Prompts we will use
subjects_prompt = """Generate a list of 3 sub-topics that are all related to this overall topic: {topic}."""
joke_prompt = """Generate a joke about {subject}"""
best_joke_prompt = """Below are a bunch of jokes about {topic}. Select the best one! Return the ID of the best one, starting 0 as the ID for the first joke. Jokes: \n\n  {jokes}"""

# LLM
model = ChatOpenAI(model="gpt-4o", temperature=0)

State¶

Parallelizing joke generation¶

First, let's define the entry point of the graph that will:

  • Take a user input topic
  • Produce a list of joke subjects from it
  • Send each joke subject to our joke generation node above

Our state has a jokes key, which will accumulate jokes from parallelized joke generation.

In [4]:
import operator
from typing import Annotated
from typing_extensions import TypedDict
from pydantic import BaseModel

class Subjects(BaseModel):
    subjects: list[str]

class BestJoke(BaseModel):
    id: int
    
class OverallState(TypedDict):
    topic: str
    subjects: list
    jokes: Annotated[list, operator.add]
    best_selected_joke: str

Generate subjects for jokes.

In [5]:
def generate_topics(state: OverallState):
    prompt = subjects_prompt.format(topic=state["topic"])
    response = model.with_structured_output(Subjects).invoke(prompt)
    return {"subjects": response.subjects}

Here is the magic: we use Send to create a joke for each subject.

This is very useful! It can automatically parallelize joke generation for any number of subjects.

  • generate_joke: the name of the node in the graph
  • {"subject": s}: the state to send

Send allows you to pass any state that you want to generate_joke! It does not have to align with OverallState.

In this case, generate_joke is using its own internal state, and we can populate this via Send.

In [6]:
from langgraph.types import Send
def continue_to_jokes(state: OverallState):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

Joke generation (map)¶

Now, we just define a node that will create our jokes, generate_joke!

We write them back out to the jokes key in OverallState!

This key has a reducer that will combine lists.
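As a minimal illustration of what that reducer does: operator.add on lists is concatenation, so each parallel branch's one-element jokes list is appended to the accumulated list in OverallState (the joke strings below are placeholders):

```python
import operator

# Each parallel generate_joke branch returns {"jokes": [<one joke>]};
# the operator.add reducer concatenates those lists into OverallState["jokes"].
merged = []
for branch_result in (["joke about mammals"],
                      ["joke about reptiles"],
                      ["joke about birds"]):
    merged = operator.add(merged, branch_result)  # same as merged + branch_result

# merged now holds all three jokes in a single flat list
```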

In [7]:
class JokeState(TypedDict):
    subject: str

class Joke(BaseModel):
    joke: str

def generate_joke(state: JokeState):
    prompt = joke_prompt.format(subject=state["subject"])
    response = model.with_structured_output(Joke).invoke(prompt)
    return {"jokes": [response.joke]}

Best joke selection (reduce)¶

Now, we add logic to pick the best joke.

In [8]:
def best_joke(state: OverallState):
    jokes = "\n\n".join(state["jokes"])
    prompt = best_joke_prompt.format(topic=state["topic"], jokes=jokes)
    response = model.with_structured_output(BestJoke).invoke(prompt)
    return {"best_selected_joke": state["jokes"][response.id]}

Compile¶

In [9]:
from IPython.display import Image
from langgraph.graph import END, StateGraph, START

# Construct the graph: here we put everything together to construct our graph
graph = StateGraph(OverallState)
graph.add_node("generate_topics", generate_topics)
graph.add_node("generate_joke", generate_joke)
graph.add_node("best_joke", best_joke)
graph.add_edge(START, "generate_topics")
graph.add_conditional_edges("generate_topics", continue_to_jokes, ["generate_joke"])
graph.add_edge("generate_joke", "best_joke")
graph.add_edge("best_joke", END)

# Compile the graph
app = graph.compile()
Image(app.get_graph().draw_mermaid_png())
Out[9]:
(rendered graph diagram)
In [10]:
# Call the graph: here we call it to generate a list of jokes
for s in app.stream({"topic": "animals"}):
    print(s)
{'generate_topics': {'subjects': ['mammals', 'reptiles', 'birds']}}
{'generate_joke': {'jokes': ["Why don't mammals ever get lost? Because they always follow their 'instincts'!"]}}
{'generate_joke': {'jokes': ["Why don't alligators like fast food? Because they can't catch it!"]}}
{'generate_joke': {'jokes': ["Why do birds fly south for the winter? Because it's too far to walk!"]}}
{'best_joke': {'best_selected_joke': "Why don't alligators like fast food? Because they can't catch it!"}}

Studio¶

⚠️ Disclaimer

Since filming these videos, we've updated Studio so that it can be run locally and opened in your browser. This is now the preferred way to run Studio (rather than the Desktop App shown in the video). See the documentation here on the local development server and here on running Studio locally. To start the local development server, run the following command in your terminal in the /studio directory in this module:

langgraph dev

You should see the following output:

- 🚀 API: http://127.0.0.1:2024
- 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
- 📚 API Docs: http://127.0.0.1:2024/docs

Open your browser and navigate to the Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024.

Let's load the above graph in the Studio UI, which uses module-4/studio/map_reduce.py, set in module-4/studio/langgraph.json.

