Perplexity¶
Perplexity's Sonar API provides a solution that combines real-time, web-grounded search with advanced reasoning and deep research capabilities.
Use cases:
- When your application needs timely, relevant data directly from the web, such as dynamic content updates or current-event tracking.
- For products that must support complex user queries with integrated reasoning and deep research, such as digital assistants or advanced search engines.
Before getting started, make sure llama_index is installed.
%pip install llama-index-llms-perplexity
!pip install llama-index
Initial Setup¶
As of April 12, 2025, the Perplexity LLM class in LlamaIndex supports the following models:
| Model | Context Length | Model Type |
|---|---|---|
| sonar-deep-research | 128k | Chat Completion |
| sonar-reasoning-pro | 128k | Chat Completion |
| sonar-reasoning | 128k | Chat Completion |
| sonar-pro | 200k | Chat Completion |
| sonar | 128k | Chat Completion |
| r1-1776 | 128k | Chat Completion |
- sonar-pro has a maximum output limit of 8k tokens.
- The reasoning models emit Chain of Thought (CoT) responses.
- r1-1776 is an offline chat model that does not use the Perplexity search subsystem.
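These context limits matter when sizing prompts. As a rough illustration (the token counts below are copied from the table on this page, not queried from any official API), you can keep them in a lookup table and check whether a prompt fits a given model's window:

```python
# Context windows (in tokens) for the models listed in the table above.
# Illustrative values from this page; check Perplexity's docs for current limits.
CONTEXT_WINDOWS = {
    "sonar-deep-research": 128_000,
    "sonar-reasoning-pro": 128_000,
    "sonar-reasoning": 128_000,
    "sonar-pro": 200_000,
    "sonar": 128_000,
    "r1-1776": 128_000,
}


def fits_context(model: str, prompt_tokens: int) -> bool:
    """Return True if a prompt of the given size fits the model's window."""
    return prompt_tokens <= CONTEXT_WINDOWS[model]


print(fits_context("sonar-pro", 150_000))  # True: within sonar-pro's 200k window
print(fits_context("sonar", 150_000))  # False: exceeds sonar's 128k window
```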
import getpass
import os

if "PPLX_API_KEY" not in os.environ:
    os.environ["PPLX_API_KEY"] = getpass.getpass(
        "Enter your Perplexity API key: "
    )
import os

from llama_index.llms.perplexity import Perplexity

PPLX_API_KEY = os.environ.get("PPLX_API_KEY")
llm = Perplexity(api_key=PPLX_API_KEY, model="sonar-pro", temperature=0.2)
# Import the ChatMessage class from the llama_index library.
from llama_index.core.llms import ChatMessage
# Create a list of dictionaries where each dictionary represents a chat message.
# Each dictionary contains a 'role' key (e.g., system or user) and a 'content' key with the corresponding message.
messages_dict = [
{"role": "system", "content": "Be precise and concise."},
{
"role": "user",
"content": "Tell me the latest news about the US Stock Market.",
},
]
# Convert each dictionary in the list to a ChatMessage object using unpacking (**msg) in a list comprehension.
messages = [ChatMessage(**msg) for msg in messages_dict]
# Print the list of ChatMessage objects to verify the conversion.
print(messages)
[ChatMessage(role=<MessageRole.SYSTEM: 'system'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Be precise and concise.')]), ChatMessage(role=<MessageRole.USER: 'user'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Tell me the latest news about the US Stock Market.')])]
Chat¶
response = llm.chat(messages)
print(response)
assistant: The latest update on the U.S. stock market indicates a strong performance recently. A significant 10% rally occurred on Wednesday, which contributed substantially to market gains. Additionally, the market closed strongly on Friday, with a 2% increase, ending near the intraday high. This reflects robust momentum, particularly in mega and large-cap growth stocks[1].
Async Chat¶
For asynchronous conversation handling, use the achat method to send the messages and await the response:
# Asynchronously send the list of chat messages to the LLM using the 'achat' method.
# This method returns a ChatResponse object containing the model's answer.
response = await llm.achat(messages)
print(response)
assistant: The U.S. stock market has recently experienced significant gains. A major rally on Wednesday resulted in a 10% surge, contributing substantially to the market's overall upside. Additionally, the market closed strongly on Friday, with a 2% increase, ending near the intraday high. This performance highlights robust momentum, particularly in mega-cap and large-cap growth stocks[1].
Streaming Chat¶
When you need to receive the response token by token in real time, use the stream_chat method:
# Call the stream_chat method on the LLM instance, which returns a generator or iterable
# for streaming the chat response one delta (token or chunk) at a time.
response = llm.stream_chat(messages)

# Iterate over each streaming response chunk.
for r in response:
    # Print the delta (the new chunk of generated text) without adding a newline.
    print(r.delta, end="")
The latest news about the U.S. stock market indicates a strong performance recently. The New York Stock Exchange (NYSE) experienced a significant rally, with a 10% surge on Wednesday, followed by a 2% gain on Friday. This upward momentum brought the market near its intraday high, driven by strength in mega-cap and large-cap growth stocks[1].
Async Streaming Chat¶
Likewise, for asynchronous streaming, the astream_chat method lets you process response deltas asynchronously:
# Asynchronously call the astream_chat method on the LLM instance,
# which returns an asynchronous generator that yields response chunks.
resp = await llm.astream_chat(messages)

# Asynchronously iterate over each response chunk from the generator.
# For each chunk (delta), print the chunk's text content.
async for delta in resp:
    print(delta.delta, end="")
The latest updates on the U.S. stock market indicate significant positive momentum. The New York Stock Exchange (NYSE) experienced a strong rally, with a notable 10% surge on Wednesday. This was followed by a 2% gain on Friday, closing near the intraday high. The market's performance has been driven by mega and large-cap growth stocks, contributing to the overall upside[1].
Tool Calling¶
Perplexity models can easily be wrapped as LlamaIndex tools, so they can be invoked as part of data-processing or conversational workflows. The tool uses Perplexity's real-time generative search and is configured by default with the latest model ("sonar-pro") and the enable_search_classifier parameter enabled.
The following example shows how to define and register the tool:
from llama_index.core.tools import FunctionTool
from llama_index.llms.perplexity import Perplexity
from llama_index.core.llms import ChatMessage
def query_perplexity(query: str) -> str:
"""
Queries the Perplexity API via the LlamaIndex integration.
This function instantiates a Perplexity LLM with updated default settings
(using model "sonar-pro" and enabling search classifier so that the API can
intelligently decide if a search is needed), wraps the query into a ChatMessage,
and returns the generated response content.
"""
pplx_api_key = (
"your-perplexity-api-key" # Replace with your actual API key
)
llm = Perplexity(
api_key=pplx_api_key,
model="sonar-pro",
temperature=0.7,
enable_search_classifier=True, # This will determine if the search component is necessary in this particular context
)
messages = [ChatMessage(role="user", content=query)]
response = llm.chat(messages)
return response.message.content
# Create the tool from the query_perplexity function
query_perplexity_tool = FunctionTool.from_defaults(fn=query_perplexity)
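Conceptually, FunctionTool.from_defaults wraps a plain Python function together with metadata (a name and description pulled from its identifier and docstring) so that an agent can decide when to call it. The following is a minimal pure-Python sketch of that pattern (not LlamaIndex's actual implementation), using a stand-in function so it runs without an API key:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class MiniTool:
    """Toy stand-in for a FunctionTool-style wrapper (illustrative only)."""

    fn: Callable[..., str]
    name: str
    description: str

    @classmethod
    def from_fn(cls, fn: Callable[..., str]) -> "MiniTool":
        # Derive metadata from the function itself, as tool wrappers typically do.
        return cls(fn=fn, name=fn.__name__, description=(fn.__doc__ or "").strip())

    def call(self, *args, **kwargs) -> str:
        return self.fn(*args, **kwargs)


def echo_query(query: str) -> str:
    """Stand-in for a live Perplexity call; echoes the query."""
    return f"answer to: {query}"


tool = MiniTool.from_fn(echo_query)
print(tool.name)  # echo_query
print(tool.call("hello"))  # answer to: hello
```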