Function Calling NVIDIA Agent
This notebook shows you how to use an NVIDIA agent with function calling capabilities.
Initial Setup
Let's start by importing some simple building blocks.
The main things we need are:
- the NVIDIA NIM endpoint (using our own llama_index LLM class)
- a container to store the conversation history
- a definition for tools that our agent can use
In [ ]:
%pip install --upgrade --quiet llama-index-llms-nvidia llama-index-embeddings-nvidia
In [ ]:
import getpass
import os

# del os.environ['NVIDIA_API_KEY'] ## delete key and reset
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
    nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
    assert nvapi_key.startswith(
        "nvapi-"
    ), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key
Valid NVIDIA_API_KEY already in environment. Delete to reset
In [ ]:
from llama_index.llms.nvidia import NVIDIA
from llama_index.core.tools import FunctionTool
from llama_index.embeddings.nvidia import NVIDIAEmbedding
Let's define some very simple calculator tools for our agent.
In [ ]:
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b


def add(a: int, b: int) -> int:
    """Add two integers and return the result integer"""
    return a + b
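If you prefer explicit tool objects, the FunctionTool class imported above can wrap these callables; FunctionAgent below also accepts the bare functions directly. A minimal sketch of the explicit wrapping:
In [ ]:
multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)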
Here we initialize a simple NVIDIA agent with the calculator functions.
In [ ]:
llm = NVIDIA("meta/llama-3.1-70b-instruct")
In [ ]:
from llama_index.core.agent.workflow import FunctionAgent

agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
)
Chat
In [ ]:
response = await agent.run("What is (121 * 3) + 42?")
print(str(response))
In [ ]:
# inspect the tool calls made by the agent
print(response.tool_calls)
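Each entry records which tool the agent invoked and with which arguments. A minimal sketch of unpacking them, assuming each entry exposes tool_name and tool_kwargs attributes:
In [ ]:
for tool_call in response.tool_calls:
    print(tool_call.tool_name, tool_call.tool_kwargs)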
Managing Context/Memory
By default, .run() is stateless. To maintain state, you can pass in a Context object.
In [ ]:
from llama_index.core.agent.workflow import Context
ctx = Context(agent)
response = await agent.run("Hello, my name is John Doe.", ctx=ctx)
print(str(response))
response = await agent.run("What is my name?", ctx=ctx)
print(str(response))
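The context can also be serialized and restored between runs; a hedged sketch, assuming the JsonSerializer helper and the Context.to_dict/Context.from_dict round-trip from llama_index.core.workflow:
In [ ]:
from llama_index.core.workflow import JsonSerializer

# persist the conversation state, then rebuild it for a later run
ctx_dict = ctx.to_dict(serializer=JsonSerializer())
restored_ctx = Context.from_dict(agent, ctx_dict, serializer=JsonSerializer())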
Agent with Personality
You can specify a system prompt to give the agent additional instructions or a persona.
In [ ]:
agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
    system_prompt="Talk like a pirate in every response.",
)
In [ ]:
response = await agent.run("Hi")
print(response)
In [ ]:
response = await agent.run("Tell me a story")
print(response)
NVIDIA Agent with a RAG/Query Engine Tool
In [ ]:
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
In [ ]:
from llama_index.core.tools import QueryEngineTool
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

embed_model = NVIDIAEmbedding(model="NV-Embed-QA", truncate="END")

# load data
uber_docs = SimpleDirectoryReader(
    input_files=["./data/10k/uber_2021.pdf"]
).load_data()

# build index
uber_index = VectorStoreIndex.from_documents(
    uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=llm)

query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=uber_engine,
    name="uber_10k",
    description=(
        "Provides information about Uber financials for year 2021. "
        "Use a detailed plain text question as input to the tool."
    ),
)
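Before handing the tool to an agent, you can query the engine directly to sanity-check retrieval; a minimal sketch with a hypothetical question:
In [ ]:
# hypothetical sanity check; any detailed plain-text question works
print(uber_engine.query("What was Uber's revenue for 2021?"))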
In [ ]:
agent = FunctionAgent(tools=[query_engine_tool], llm=llm)
In [ ]:
response = await agent.run(
    "Tell me both the risk factors and tailwinds for Uber? Do two parallel tool calls."
)
print(str(response))
ReAct Agent
In [ ]:
from llama_index.core.agent.workflow import ReActAgent
In [ ]:
agent = ReActAgent(tools=[multiply, add], llm=llm)
Using the stream_events() method, we can stream the response as it is generated, letting us watch the agent's reasoning as it works.
The final response will contain only the final answer.
In [ ]:
from llama_index.core.agent.workflow import AgentStream

handler = agent.run("What is 20+(2*4)? Calculate step by step")

async for ev in handler.stream_events():
    if isinstance(ev, AgentStream):
        print(ev.delta, end="", flush=True)

response = await handler
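The same event stream also carries tool activity; a hedged sketch, assuming the ToolCallResult event type exported alongside AgentStream in the same module:
In [ ]:
from llama_index.core.agent.workflow import AgentStream, ToolCallResult

handler = agent.run("What is 20+(2*4)?")
async for ev in handler.stream_events():
    if isinstance(ev, ToolCallResult):
        # report each tool invocation and its output as it completes
        print(f"\nCalled {ev.tool_name} with {ev.tool_kwargs} => {ev.tool_output}")
    elif isinstance(ev, AgentStream):
        print(ev.delta, end="", flush=True)
response = await handler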
In [ ]:
print(str(response))
In [ ]:
print(response.tool_calls)