Text Embeddings Inference¶
This notebook demonstrates how to configure the TextEmbeddingsInference embedding model.
The first step is to deploy the embedding server. For detailed instructions, see the official Text Embeddings Inference repository. For deployment on Habana Gaudi/Gaudi 2, see the tei-gaudi repository.
Once the server is running, the code below connects to it and submits texts for embedding.
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
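As a concrete starting point, the server is commonly launched via the project's Docker image. The image tag, port mapping, and volume path below are illustrative assumptions; check the Text Embeddings Inference repository for the current release and recommended flags.

```shell
# Sketch: launch a TEI server for BAAI/bge-large-en-v1.5 on localhost:8080 (GPU).
# Image tag and paths are illustrative -- consult the TEI repo for current values.
model=BAAI/bge-large-en-v1.5
docker run --gpus all -p 8080:80 \
    -v $PWD/tei-data:/data \
    ghcr.io/huggingface/text-embeddings-inference:latest \
    --model-id $model
```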
%pip install llama-index-embeddings-text-embeddings-inference
!pip install llama-index
from llama_index.embeddings.text_embeddings_inference import (
    TextEmbeddingsInference,
)

embed_model = TextEmbeddingsInference(
    model_name="BAAI/bge-large-en-v1.5",  # required for formatting inference text
    timeout=60,  # timeout in seconds
    embed_batch_size=10,  # batch size for embedding
    # base_url defaults to http://127.0.0.1:8080; set it if your server runs elsewhere
)
embeddings = embed_model.get_text_embedding("Hello World!")
print(len(embeddings))
print(embeddings[:5])
1024 [0.010597229, 0.05895996, 0.022445679, -0.012046814, -0.03164673]
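Vectors like the one above are typically compared with cosine similarity. The helper below is a minimal pure-Python sketch (independent of the embedding server); in practice you would pass it two vectors returned by `get_text_embedding`.

```python
import math


def cosine_similarity(a, b):
    # Dot product divided by the product of the two vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Identical directions score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```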
embeddings = await embed_model.aget_text_embedding("Hello World!")
print(len(embeddings))
print(embeddings[:5])
1024 [0.010597229, 0.05895996, 0.022445679, -0.012046814, -0.03164673]