Function Calling AWS Bedrock Converse Agent¶
This notebook shows you how to use our AWS Bedrock Converse agent, powered by function calling capabilities.
Initial Setup¶
Let's start by importing some basic building blocks.
The main things we need are:
- AWS credentials with access to Bedrock and the Claude Haiku LLM
- a place to keep conversation history
- a definition of tools that our agent can use
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index
%pip install llama-index-llms-bedrock-converse
%pip install llama-index-embeddings-huggingface
Let's define some very simple calculator tools for our agent.
In [ ]:
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer"""
    return a * b


def add(a: int, b: int) -> int:
    """Add two integers and return the resulting integer"""
    return a + b
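These plain Python functions can be passed to the agent directly and are wrapped as tools under the hood. If you want explicit control over a tool's name and description, one option (a minimal sketch using FunctionTool, the standard llama-index wrapper) is:

from llama_index.core.tools import FunctionTool

# Wrap the callables explicitly to control the tool name and description.
multiply_tool = FunctionTool.from_defaults(fn=multiply, name="multiply")
add_tool = FunctionTool.from_defaults(fn=add, name="add")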
Make sure you set your AWS credentials, either via a profile_name or with the keys below; a sketch using a named profile follows the next cell.
In [ ]:
from llama_index.llms.bedrock_converse import BedrockConverse
llm = BedrockConverse(
model="anthropic.claude-3-haiku-20240307-v1:0",
# NOTE replace with your own AWS credentials
aws_access_key_id="AWS Access Key ID to use",
aws_secret_access_key="AWS Secret Access Key to use",
aws_session_token="AWS Session Token to use",
region_name="AWS Region to use, eg. us-east-1",
)
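Alternatively, if you keep credentials in ~/.aws/credentials, you can authenticate with a named profile instead of inline keys. A minimal sketch, assuming a profile called "default" (substitute your own):

from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name="default",  # assumed profile name; replace with yours
    region_name="us-east-1",
)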
Initialize AWS Bedrock Converse Agent¶
Here we initialize a simple AWS Bedrock Converse agent with the calculator functions.
In [ ]:
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
tools=[multiply, add],
llm=llm,
)
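Each agent.run call is independent by default. To give the agent a place to keep conversation history across turns, one option (a minimal sketch using the workflow Context API from recent llama-index releases; the questions are illustrative) is to thread a Context object through the runs:

from llama_index.core.workflow import Context

# A Context carries the agent's state, including chat history, across runs.
ctx = Context(agent)
response = await agent.run("My name is Ana", ctx=ctx)
response = await agent.run("What is my name?", ctx=ctx)  # remembers "Ana"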
Chat¶
In [ ]:
response = await agent.run("What is (121 + 2) * 5?")
print(str(response))
In [ ]:
# inspect the tool calls made while answering
print(response.tool_calls)
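To see which tool was used with which arguments, you can iterate over the recorded calls. A minimal sketch, assuming each entry exposes tool_name and tool_kwargs attributes (as in recent llama-index releases):

# Print each tool invocation the agent made while answering.
for tool_call in response.tool_calls:
    print(f"{tool_call.tool_name} called with {tool_call.tool_kwargs}")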
AWS Bedrock Converse Agent over RAG Pipeline¶
Build an AWS Bedrock Converse agent over a simple 10-K filing (Uber's 2021 annual report). We use HuggingFace embeddings with the BAAI/bge-small-en-v1.5 model to build a RAG pipeline, and pass it to the AWS Bedrock Converse agent as a tool.
In [ ]:
!mkdir -p 'data/10k/'
!curl -o 'data/10k/uber_2021.pdf' 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf'
In [ ]:
from llama_index.core.tools import QueryEngineTool
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.bedrock_converse import BedrockConverse
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
query_llm = BedrockConverse(
model="anthropic.claude-3-haiku-20240307-v1:0",
# NOTE replace with your own AWS credentials
aws_access_key_id="AWS Access Key ID to use",
aws_secret_access_key="AWS Secret Access Key to use",
aws_session_token="AWS Session Token to use",
region_name="AWS Region to use, eg. us-east-1",
)
# load data
uber_docs = SimpleDirectoryReader(
input_files=["./data/10k/uber_2021.pdf"]
).load_data()
# build index
uber_index = VectorStoreIndex.from_documents(
uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=query_llm)
query_engine_tool = QueryEngineTool.from_defaults(
query_engine=uber_engine,
name="uber_10k",
description=(
"Provides information about Uber financials for year 2021. "
"Use a detailed plain text question as input to the tool."
),
)
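Before handing the query engine to the agent, it can be worth a quick sanity check that retrieval works on its own. A minimal sketch (the question is illustrative):

# Query the RAG pipeline directly, bypassing the agent.
direct_response = uber_engine.query("What was Uber's total revenue for 2021?")
print(str(direct_response))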
In [ ]:
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
tools=[query_engine_tool],
llm=llm,
)
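Tools can also be mixed freely; for example, a sketch combining the RAG tool with the calculator functions defined earlier in this notebook:

# The agent can take both the query engine tool and plain Python functions.
agent_with_all_tools = FunctionAgent(
    tools=[query_engine_tool, multiply, add],
    llm=llm,
)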
In [ ]:
response = await agent.run(
"Tell me both the risk factors and tailwinds for Uber? Do two parallel tool calls."
)
In [ ]:
print(str(response))
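For long-running RAG queries you may prefer to stream the answer token by token. A minimal sketch, assuming the AgentStream event from the workflow streaming API in recent llama-index releases (the question is illustrative):

from llama_index.core.agent.workflow import AgentStream

handler = agent.run("Summarize Uber's 2021 revenue growth.")
async for event in handler.stream_events():
    if isinstance(event, AgentStream):
        print(event.delta, end="", flush=True)
response = await handler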