If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]:
%pip install llama-index llama-index-memory-mem0
In [ ]:
import os

os.environ["MEM0_API_KEY"] = "m0-..."
Using from_client (for the Mem0 Platform API):
In [ ]:
from llama_index.memory.mem0 import Mem0Memory

context = {"user_id": "test_users_1"}
memory_from_client = Mem0Memory.from_client(
    context=context,
    api_key="m0-...",
    search_msg_limit=4,  # Default is 5
)
The Mem0 context identifies the user, agent, or conversation in Mem0. At least one of its fields must be passed to the Mem0Memory constructor.

search_msg_limit is optional and defaults to 5. It sets how many messages from the chat history are used to retrieve relevant memories from Mem0. Raising it widens the context available for retrieval, but also slows retrieval down and can surface irrelevant results.
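For illustration, the context can scope memories to a user, an agent, or a single session. A minimal sketch; the user_id key is what this notebook uses, while the other field names are assumptions to verify against the Mem0 docs:

# Scope memories to a user (this is what the notebook uses below)
context = {"user_id": "test_users_1"}

# Hypothetical alternatives -- verify the exact keys in the Mem0 docs:
# context = {"agent_id": "support_agent_1"}  # scope to an agent
# context = {"run_id": "session_42"}         # scope to a run/session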
Using from_config (for Mem0 OSS)
In [ ]:
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "test_9",
            "host": "localhost",
            "port": 6333,
            "embedding_model_dims": 1536,  # Change this according to your local model's dimensions
        },
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o",
            "temperature": 0.2,
            "max_tokens": 1500,
        },
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "version": "v1.1",
}

memory_from_config = Mem0Memory.from_config(
    context=context,
    config=config,
    search_msg_limit=4,  # Default is 5
)
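Note that embedding_model_dims must match the output size of whichever embedding model you configure. As a quick sketch, switching to OpenAI's text-embedding-3-large (which produces 3072-dimensional vectors) would mean changing both values together; adjust accordingly for your own local model:

# Sketch: keep the vector store dimensions in sync with the embedder.
# text-embedding-3-large outputs 3072-dim vectors, so the Qdrant
# collection must be sized accordingly.
config["embedder"]["config"]["model"] = "text-embedding-3-large"
config["vector_store"]["config"]["embedding_model_dims"] = 3072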
Initialize the LLM
In [ ]:
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o", api_key="sk-...")
Mem0 for Function Calling Agents

Use Mem0 as memory for FunctionCallingAgents.
Initialize the tools
In [ ]:
def call_fn(name: str):
    """Call the provided name.

    Args:
        name: str (Name of the person)
    """
    print(f"Calling... {name}")


def email_fn(name: str):
    """Email the provided name.

    Args:
        name: str (Name of the person)
    """
    print(f"Emailing... {name}")
In [ ]:
from llama_index.core.agent.workflow import FunctionAgent

agent = FunctionAgent(
    tools=[email_fn, call_fn],
    llm=llm,
)
In [ ]:
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))
/Users/loganmarkewich/Library/Caches/pypoetry/virtualenvs/llama-index-caVs7DDe-py3.10/lib/python3.10/site-packages/mem0/client/main.py:33: DeprecationWarning: output_format='v1.0' is deprecated therefore setting it to 'v1.1' by default. Check out the docs for more information: https://docs.mem0.ai/platform/quickstart#4-1-create-memories
  return func(*args, **kwargs)
Hello Mayank! How can I assist you today?
In [ ]:
response = await agent.run(
    "My preferred way of communication would be Email.",
    memory=memory_from_client,
)
print(str(response))
Got it, Mayank! Your preferred way of communication is Email. If there's anything specific you need, feel free to let me know!
In [ ]:
response = await agent.run(
    "Send me an update of your product.", memory=memory_from_client
)
print(str(response))
Emailing... Mayank
Emailing... Mayank
Calling... Mayank
Emailing... Mayank
I've sent you an update on our product via email. If you have any other questions or need further assistance, feel free to ask!
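To double-check what the agent actually persisted, you can query Mem0 directly. A minimal sketch using the mem0 package's MemoryClient; the exact response shape may differ by API version, so treat the parsing below as an assumption:

from mem0 import MemoryClient

# MemoryClient picks up MEM0_API_KEY from the environment (set above).
client = MemoryClient()

# Fetch everything stored for this user. Depending on the API version,
# the response is either a list of memories or a dict with a "results" key.
memories = client.get_all(user_id="test_users_1")
if isinstance(memories, dict):
    memories = memories.get("results", [])
for m in memories:
    print(m.get("memory"))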
Mem0 for ReAct Agents

Use Mem0 as memory for ReActAgent.
In [ ]:
from llama_index.core.agent.workflow import ReActAgent

agent = ReActAgent(
    tools=[call_fn, email_fn],
    llm=llm,
)
In [ ]:
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))
In [ ]:
response = await agent.run(
    "My preferred way of communication would be Email.",
    memory=memory_from_client,
)
print(str(response))
In [ ]:
response = await agent.run(
    "Send me an update of your product.", memory=memory_from_client
)
print(str(response))
In [ ]:
response = await agent.run(
    "First call me and then communicate me requirements.",
    memory=memory_from_client,
)
print(str(response))