Maritalk
Introduction
MariTalk is an assistant developed by the Brazilian company Maritaca AI. It is based on language models that have been specially trained to understand Portuguese well.
This notebook demonstrates how to use MariTalk with LlamaIndex through two examples:
- Getting pet name suggestions with the chat method;
- Classifying movie reviews as negative/positive with few-shot examples via the complete method.
Installation
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex.
In [ ]:
!pip install llama-index
!pip install llama-index-llms-maritalk
# Note: asyncio ships with the Python standard library and does not need to be installed
API Key
You will need an API key, which can be obtained from chat.maritaca.ai (in the "Chaves da API" section).
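The key can either be passed directly to the Maritalk constructor or exposed through the MARITALK_API_KEY environment variable, which the client falls back to when no key is given (see the comments in the code below). A minimal sketch of the environment-variable route, assuming you prefer to set it from Python rather than from your shell:
import os

# "<your_maritalk_api_key>" is a placeholder; replace it with the value
# from chat.maritaca.ai. Setting this variable lets you omit api_key
# when constructing Maritalk.
os.environ["MARITALK_API_KEY"] = "<your_maritalk_api_key>"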
Example 1 - Pet Name Suggestions with Chat
In [ ]:
from llama_index.core.llms import ChatMessage
from llama_index.llms.maritalk import Maritalk

import asyncio

# To customize your API key, do this;
# otherwise it will look up MARITALK_API_KEY from your env variable
llm = Maritalk(api_key="<your_maritalk_api_key>", model="sabia-2-medium")

# Call chat with a list of messages
messages = [
    ChatMessage(
        role="system",
        content="You are an assistant specialized in suggesting pet names. Given the animal, you must suggest 4 names.",
    ),
    ChatMessage(role="user", content="I have a dog."),
]

# Sync chat
response = llm.chat(messages)
print(response)


# Async chat
async def get_dog_name(llm, messages):
    response = await llm.achat(messages)
    print(response)


asyncio.run(get_dog_name(llm, messages))
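A note on the asyncio.run(...) calls in this notebook: asyncio.run assumes no event loop is already running, so inside Jupyter or Colab (which run their own loop) it may raise a RuntimeError. One common workaround, assuming the nest_asyncio package is installed, is to patch the running loop first:
import nest_asyncio

# Allow asyncio.run() to be called even though the notebook's
# event loop is already running.
nest_asyncio.apply()

asyncio.run(get_dog_name(llm, messages))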
Stream Generation
For tasks involving the generation of long text, such as writing an extensive essay or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, instead of waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is long. We offer two approaches for this: one synchronous and one asynchronous.
In [ ]:
# Sync streaming chat
response = llm.stream_chat(messages)
for chunk in response:
    print(chunk.delta, end="", flush=True)


# Async streaming chat
async def get_dog_name_streaming(llm, messages):
    async for chunk in await llm.astream_chat(messages):
        print(chunk.delta, end="", flush=True)


asyncio.run(get_dog_name_streaming(llm, messages))
Example 2 - Few-shot Examples with Complete
We recommend using the llm.complete() method when working with few-shot examples.
In [ ]:
prompt = """Classifique a resenha de filme como "positiva" ou "negativa".
Resenha: Gostei muito do filme, é o melhor do ano!
Classe: positiva
Resenha: O filme deixa muito a desejar.
Classe: negativa
Resenha: Apesar de longo, valeu o ingresso..
Classe:"""
# Sync complete
response = llm.complete(prompt)
print(response)
# Async complete
async def classify_review(llm, prompt):
response = await llm.acomplete(prompt)
print(response)
asyncio.run(classify_review(llm, prompt))
prompt = """Classifique a resenha de filme como "positiva" ou "negativa".
Resenha: Gostei muito do filme, é o melhor do ano!
Classe: positiva
Resenha: O filme deixa muito a desejar.
Classe: negativa
Resenha: Apesar de longo, valeu o ingresso..
Classe:"""
# Sync complete
response = llm.complete(prompt)
print(response)
# Async complete
async def classify_review(llm, prompt):
response = await llm.acomplete(prompt)
print(response)
asyncio.run(classify_review(llm, prompt))
In [ ]:
# Sync streaming complete
response = llm.stream_complete(prompt)
for chunk in response:
    print(chunk.delta, end="", flush=True)


# Async streaming complete
async def classify_review_streaming(llm, prompt):
    async for chunk in await llm.astream_complete(prompt):
        print(chunk.delta, end="", flush=True)


asyncio.run(classify_review_streaming(llm, prompt))