Advanced Text-to-SQL Workflows¶
In this guide we show you how to set up a text-to-SQL workflow over your data with our workflow syntax.
This gives you the flexibility to enhance text-to-SQL with additional techniques. We show the following techniques in the sections below:
- Query-Time Table Retrieval: dynamically retrieve relevant tables in the text-to-SQL prompt.
- Query-Time Sample Row Retrieval: embed/index each row, and dynamically retrieve example rows for each table in the text-to-SQL prompt.
Our out-of-the-box workflows include NLSQLTableQueryEngine and SQLTableRetrieverQueryEngine (if you want a text-to-SQL guide that uses these modules directly, take a look here). This guide implements an advanced version of those modules, giving you maximum flexibility to adapt them to your own setting.
NOTE: Any text-to-SQL application should be aware that executing arbitrary SQL queries can be a security risk. It is recommended to take precautions as needed, such as using restricted roles, read-only databases, sandboxing, etc.
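For example, here is a minimal sketch of one such precaution (not part of the original guide): opening the SQLite file that this guide creates later (wiki_table_questions.db) in read-only mode via SQLAlchemy's URI syntax, so that generated SQL cannot modify data.

# Hedged sketch: a read-only engine for the database built later in this guide.
from sqlalchemy import create_engine, text

readonly_engine = create_engine(
    "sqlite:///file:wiki_table_questions.db?mode=ro&uri=true"
)
with readonly_engine.connect() as conn:
    # SELECT statements work; any write statement raises an OperationalError.
    print(conn.execute(text("SELECT 1")).fetchone())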
Load and Ingest Data¶
Load Data¶
We use the WikiTableQuestions dataset (Pasupat and Liang 2015) as our test dataset.
We go through all the csv files in a given directory and store each one in a sqlite database (we will later build an object index over each table schema).
%pip install llama-index-llms-openai
!wget "https://github.com/ppasupat/WikiTableQuestions/releases/download/v1.0.2/WikiTableQuestions-1.0.2-compact.zip" -O data.zip
!unzip data.zip
import pandas as pd
from pathlib import Path
data_dir = Path("./WikiTableQuestions/csv/200-csv")
csv_files = sorted([f for f in data_dir.glob("*.csv")])
dfs = []
for csv_file in csv_files:
    print(f"processing file: {csv_file}")
    try:
        df = pd.read_csv(csv_file)
        dfs.append(df)
    except Exception as e:
        print(f"Error parsing {csv_file}: {str(e)}")
Extract Table Name and Summary from Each Table¶
Here we use gpt-4o-mini to extract a table name (with underscores) and a summary from each table with our Pydantic program.
tableinfo_dir = "WikiTableQuestions_TableInfo"
!mkdir {tableinfo_dir}
mkdir: WikiTableQuestions_TableInfo: File exists
from llama_index.core.prompts import ChatPromptTemplate
from llama_index.core.bridge.pydantic import BaseModel, Field
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
class TableInfo(BaseModel):
    """Information regarding a structured table."""

    table_name: str = Field(
        ..., description="table name (must be underscores and NO spaces)"
    )
    table_summary: str = Field(
        ..., description="short, concise summary/caption of the table"
    )
prompt_str = """\
Give me a summary of the table with the following JSON format.
- The table name must be unique to the table and describe it while being concise.
- Do NOT output a generic table name (e.g. table, my_table).
Do NOT make the table name one of the following: {exclude_table_name_list}
Table:
{table_str}
Summary: """
prompt_tmpl = ChatPromptTemplate(
    message_templates=[ChatMessage.from_str(prompt_str, role="user")]
)
llm = OpenAI(model="gpt-4o-mini")
import json
def _get_tableinfo_with_index(idx: int) -> str:
    results_gen = Path(tableinfo_dir).glob(f"{idx}_*")
    results_list = list(results_gen)
    if len(results_list) == 0:
        return None
    elif len(results_list) == 1:
        path = results_list[0]
        return TableInfo.parse_file(path)
    else:
        raise ValueError(
            f"More than one file matching index: {list(results_gen)}"
        )
table_names = set()
table_infos = []
for idx, df in enumerate(dfs):
    table_info = _get_tableinfo_with_index(idx)
    if table_info:
        table_infos.append(table_info)
    else:
        while True:
            df_str = df.head(10).to_csv()
            table_info = llm.structured_predict(
                TableInfo,
                prompt_tmpl,
                table_str=df_str,
                exclude_table_name_list=str(list(table_names)),
            )
            table_name = table_info.table_name
            print(f"Processed table: {table_name}")
            if table_name not in table_names:
                table_names.add(table_name)
                break
            else:
                # try again
                print(f"Table name {table_name} already exists, trying again.")
                pass

        out_file = f"{tableinfo_dir}/{idx}_{table_name}.json"
        json.dump(table_info.dict(), open(out_file, "w"))
        table_infos.append(table_info)
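As a quick sanity check (illustrative, not part of the original notebook), you can reload one of the persisted files with the helper defined above:

# Reload the TableInfo persisted for the first table, if present.
loaded_info = _get_tableinfo_with_index(0)
if loaded_info is not None:
    print(loaded_info.table_name, "-", loaded_info.table_summary)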
Put Data in SQL Database¶
We use sqlalchemy, a popular SQL database toolkit, to load all the tables.
# put data into sqlite db
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Integer,
)
import re


# Function to create a sanitized column name
def sanitize_column_name(col_name):
    # Remove special characters and replace spaces with underscores
    return re.sub(r"\W+", "_", col_name)


# Function to create a table from a DataFrame using SQLAlchemy
def create_table_from_dataframe(
    df: pd.DataFrame, table_name: str, engine, metadata_obj
):
    # Sanitize column names
    sanitized_columns = {col: sanitize_column_name(col) for col in df.columns}
    df = df.rename(columns=sanitized_columns)

    # Dynamically create columns based on DataFrame columns and data types
    columns = [
        Column(col, String if dtype == "object" else Integer)
        for col, dtype in zip(df.columns, df.dtypes)
    ]

    # Create a table with the defined columns
    table = Table(table_name, metadata_obj, *columns)

    # Create the table in the database
    metadata_obj.create_all(engine)

    # Insert data from DataFrame into the table
    with engine.connect() as conn:
        for _, row in df.iterrows():
            insert_stmt = table.insert().values(**row.to_dict())
            conn.execute(insert_stmt)
        conn.commit()
# engine = create_engine("sqlite:///:memory:")
engine = create_engine("sqlite:///wiki_table_questions.db")
metadata_obj = MetaData()
for idx, df in enumerate(dfs):
    tableinfo = _get_tableinfo_with_index(idx)
    print(f"Creating table: {tableinfo.table_name}")
    create_table_from_dataframe(df, tableinfo.table_name, engine, metadata_obj)
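Optionally, here is an illustrative check (not in the original notebook) to confirm the tables landed in SQLite, querying sqlite_master directly:

from sqlalchemy import text

# List the table names that were just created in wiki_table_questions.db.
with engine.connect() as conn:
    table_rows = conn.execute(
        text("SELECT name FROM sqlite_master WHERE type='table'")
    ).fetchall()
print([r[0] for r in table_rows])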
# # setup Arize Phoenix for logging/observability
# import phoenix as px
# import llama_index.core
# px.launch_app()
# llama_index.core.set_global_handler("arize_phoenix")
Object Index, Retriever, SQLDatabase¶
from llama_index.core.objects import (
    SQLTableNodeMapping,
    ObjectIndex,
    SQLTableSchema,
)
from llama_index.core import SQLDatabase, VectorStoreIndex

sql_database = SQLDatabase(engine)
table_node_mapping = SQLTableNodeMapping(sql_database)
table_schema_objs = [
    SQLTableSchema(table_name=t.table_name, context_str=t.table_summary)
    for t in table_infos
]  # add a SQLTableSchema for each table

obj_index = ObjectIndex.from_objects(
    table_schema_objs,
    table_node_mapping,
    VectorStoreIndex,
)
obj_retriever = obj_index.as_retriever(similarity_top_k=3)
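To see what the retriever returns, here is an illustrative call (the question is hypothetical; any natural-language query works):

# Retrieve the top-3 most relevant table schemas for a sample question.
example_schemas = obj_retriever.retrieve("Who was signed to Bad Boy Records?")
for schema in example_schemas:
    print(schema.table_name, "->", schema.context_str)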
SQLRetriever + Table Parser¶
from llama_index.core.retrievers import SQLRetriever
from typing import List
sql_retriever = SQLRetriever(sql_database)
def get_table_context_str(table_schema_objs: List[SQLTableSchema]):
    """Get table context string."""
    context_strs = []
    for table_schema_obj in table_schema_objs:
        table_info = sql_database.get_single_table_info(
            table_schema_obj.table_name
        )
        if table_schema_obj.context_str:
            table_opt_context = " The table description is: "
            table_opt_context += table_schema_obj.context_str
            table_info += table_opt_context

        context_strs.append(table_info)
    return "\n\n".join(context_strs)
Text-to-SQL Prompt + Output Parser¶
from llama_index.core.prompts.default_prompts import DEFAULT_TEXT_TO_SQL_PROMPT
from llama_index.core import PromptTemplate
from llama_index.core.llms import ChatResponse
def parse_response_to_sql(chat_response: ChatResponse) -> str:
    """Parse response to SQL."""
    response = chat_response.message.content
    sql_query_start = response.find("SQLQuery:")
    if sql_query_start != -1:
        response = response[sql_query_start:]
        # TODO: move to removeprefix after Python 3.9+
        if response.startswith("SQLQuery:"):
            response = response[len("SQLQuery:") :]
    sql_result_start = response.find("SQLResult:")
    if sql_result_start != -1:
        response = response[:sql_result_start]
    return response.strip().strip("```").strip()


text2sql_prompt = DEFAULT_TEXT_TO_SQL_PROMPT.partial_format(
    dialect=engine.dialect.name
)
print(text2sql_prompt.template)
Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. You can order the results by a relevant column to return the most interesting examples in the database.

Never query for all the columns from a specific table, only ask for a few relevant columns given the question.

Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Pay attention to which column is in which table. Also, qualify column names with the table name when needed. You are required to use the following format, each taking one line:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

Only use tables listed below.
{schema}

Question: {query_str}
SQLQuery:
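To make the parser's behavior concrete, here is a small illustrative test on a mocked ChatResponse (the SQL string is hypothetical); it should strip the "SQLQuery:" prefix and drop everything from "SQLResult:" onward.

# Mock an LLM response in the format requested by the prompt above.
mock_response = ChatResponse(
    message=ChatMessage(
        role="assistant",
        content=(
            "SQLQuery: SELECT Act FROM bad_boy_artists_album_release_summary\n"
            "SQLResult: ..."
        ),
    )
)
print(parse_response_to_sql(mock_response))
# -> SELECT Act FROM bad_boy_artists_album_release_summary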
Response Synthesis Prompt¶
response_synthesis_prompt_str = (
    "Given an input question, synthesize a response from the query results.\n"
    "Query: {query_str}\n"
    "SQL: {sql_query}\n"
    "SQL Response: {context_str}\n"
    "Response: "
)
response_synthesis_prompt = PromptTemplate(
    response_synthesis_prompt_str,
)
# llm = OpenAI(model="gpt-3.5-turbo")
llm = OpenAI(model="gpt-4o-mini")
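For illustration (all values below are hypothetical), you can format the synthesis prompt directly to inspect the final string the LLM will see:

# Fill in the template with placeholder values.
print(
    response_synthesis_prompt.format(
        query_str="Which act released the most albums?",
        sql_query="SELECT Act FROM bad_boy_artists_album_release_summary",
        context_str="[('The Notorious B.I.G',)]",
    )
)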
Define Workflow¶
Now that the components are in place, let's define the full workflow!
from llama_index.core.workflow import (
    Workflow,
    StartEvent,
    StopEvent,
    step,
    Context,
    Event,
)


class TableRetrieveEvent(Event):
    """Result of running table retrieval."""

    table_context_str: str
    query: str


class TextToSQLEvent(Event):
    """Text-to-SQL event."""

    sql: str
    query: str


class TextToSQLWorkflow1(Workflow):
    """Text-to-SQL Workflow that does query-time table retrieval."""

    def __init__(
        self,
        obj_retriever,
        text2sql_prompt,
        sql_retriever,
        response_synthesis_prompt,
        llm,
        *args,
        **kwargs
    ) -> None:
        """Init params."""
        super().__init__(*args, **kwargs)
        self.obj_retriever = obj_retriever
        self.text2sql_prompt = text2sql_prompt
        self.sql_retriever = sql_retriever
        self.response_synthesis_prompt = response_synthesis_prompt
        self.llm = llm

    @step
    def retrieve_tables(
        self, ctx: Context, ev: StartEvent
    ) -> TableRetrieveEvent:
        """Retrieve tables."""
        table_schema_objs = self.obj_retriever.retrieve(ev.query)
        table_context_str = get_table_context_str(table_schema_objs)
        return TableRetrieveEvent(
            table_context_str=table_context_str, query=ev.query
        )

    @step
    def generate_sql(
        self, ctx: Context, ev: TableRetrieveEvent
    ) -> TextToSQLEvent:
        """Generate SQL statement."""
        fmt_messages = self.text2sql_prompt.format_messages(
            query_str=ev.query, schema=ev.table_context_str
        )
        chat_response = self.llm.chat(fmt_messages)
        sql = parse_response_to_sql(chat_response)
        return TextToSQLEvent(sql=sql, query=ev.query)

    @step
    def generate_response(self, ctx: Context, ev: TextToSQLEvent) -> StopEvent:
        """Run SQL retrieval and generate response."""
        retrieved_rows = self.sql_retriever.retrieve(ev.sql)
        fmt_messages = self.response_synthesis_prompt.format_messages(
            sql_query=ev.sql,
            context_str=str(retrieved_rows),
            query_str=ev.query,
        )
        # use self.llm for consistency with the other steps
        chat_response = self.llm.chat(fmt_messages)
        return StopEvent(result=chat_response)
Visualize Workflow¶
A really nice property of workflows is that you can both visualize the execution graph and trace the most recent execution.
from llama_index.utils.workflow import draw_all_possible_flows
draw_all_possible_flows(
    TextToSQLWorkflow1, filename="text_to_sql_table_retrieval.html"
)
text_to_sql_table_retrieval.html
from IPython.display import display, HTML
# Read the contents of the HTML file
with open("text_to_sql_table_retrieval.html", "r") as file:
    html_content = file.read()
# Display the HTML content
display(HTML(html_content))
Run Some Queries!¶
Now we're ready to run some queries across the full workflow.
workflow = TextToSQLWorkflow1(
    obj_retriever,
    text2sql_prompt,
    sql_retriever,
    response_synthesis_prompt,
    llm,
    verbose=True,
)

response = await workflow.run(
    query="What was the year that The Notorious B.I.G was signed to Bad Boy?"
)
print(str(response))
Running step retrieve_tables
Step retrieve_tables produced event TableRetrieveEvent
Running step generate_sql
Step generate_sql produced event TextToSQLEvent
Running step generate_response
Step generate_response produced event StopEvent
assistant: The Notorious B.I.G was signed to Bad Boy Records in 1993.

VERBOSE: True
> Table Info: Table 'bad_boy_artists_album_release_summary' has columns: Act (VARCHAR), Year_signed (INTEGER), _Albums_released_under_Bad_Boy (VARCHAR), . The table description is: A summary of artists signed to Bad Boy Records along with the year they were signed and the number of albums they released.
Here are some relevant example rows (values in the same order as columns above)
('The Notorious B.I.G', 1993, '5')

> Table Info: Table 'filmography_of_diane_drummond' has columns: Year (INTEGER), Title (VARCHAR), Role (VARCHAR), Notes (VARCHAR), . The table description is: A list of film and television roles played by Diane Drummond from 1995 to 2001.
Here are some relevant example rows (values in the same order as columns above)
(2013, 'L.A. Slasher', 'The Actress', None)

> Table Info: Table 'progressive_rock_album_chart_positions' has columns: Year (INTEGER), Title (VARCHAR), Chart_Positions_UK (VARCHAR), Chart_Positions_US (VARCHAR), Chart_Positions_NL (VARCHAR), Comments (VARCHAR), . The table description is: Chart positions of progressive rock albums in the UK, US, and NL from 1969 to 1981.
Here are some relevant example rows (values in the same order as columns above)
(1977, 'Novella', '–', '46', '–', '1977 (January in US, August in UK, as the band moved to the Warner Bros Music Group)')
response = await workflow.run(
    query="Who won best director in the 1972 academy awards"
)
print(str(response))
Running step retrieve_tables
Step retrieve_tables produced event TableRetrieveEvent
Running step generate_sql
Step generate_sql produced event TextToSQLEvent
Running step generate_response
Step generate_response produced event StopEvent
assistant: William Friedkin won the Best Director award at the 1972 Academy Awards.
response = await workflow.run(query="What was the term of Pasquale Preziosa?")
print(str(response))
Running step retrieve_tables
Step retrieve_tables produced event TableRetrieveEvent
Running step generate_sql
Step generate_sql produced event TextToSQLEvent
Running step generate_response
Step generate_response produced event StopEvent
assistant: Pasquale Preziosa has been serving since 25 February 2013 and is currently in office as the incumbent.
Advanced Capability 2: Text-to-SQL with Query-Time Row Retrieval (Along with Table Retrieval)¶
One problem in the previous example is that if the user asks about "The Notorious BIG" but the artist is stored in the database as "The Notorious B.I.G", the generated SELECT statement will likely not return any matches.
We can alleviate this problem by fetching a small number of example rows per table. A naive option would be to just take the first k rows; instead, we embed, index, and retrieve rows given the user query, so that the text-to-SQL LLM gets the most contextually relevant information for generating SQL.
Now let's extend our workflow.
Index Each Table¶
We embed/index the rows of each table, producing one index per table.
from llama_index.core import VectorStoreIndex, load_index_from_storage
from sqlalchemy import text
from llama_index.core.schema import TextNode
from llama_index.core import StorageContext
import os
from pathlib import Path
from typing import Dict
def index_all_tables(
    sql_database: SQLDatabase, table_index_dir: str = "table_index_dir"
) -> Dict[str, VectorStoreIndex]:
    """Index all tables."""
    if not Path(table_index_dir).exists():
        os.makedirs(table_index_dir)

    vector_index_dict = {}
    engine = sql_database.engine
    for table_name in sql_database.get_usable_table_names():
        print(f"Indexing rows in table: {table_name}")
        if not os.path.exists(f"{table_index_dir}/{table_name}"):
            # get all rows from table
            with engine.connect() as conn:
                cursor = conn.execute(text(f'SELECT * FROM "{table_name}"'))
                result = cursor.fetchall()
                row_tups = []
                for row in result:
                    row_tups.append(tuple(row))

            # index each row, put into vector store index
            nodes = [TextNode(text=str(t)) for t in row_tups]

            # put into vector store index (use OpenAIEmbeddings by default)
            index = VectorStoreIndex(nodes)

            # save index
            index.set_index_id("vector_index")
            index.storage_context.persist(f"{table_index_dir}/{table_name}")
        else:
            # rebuild storage context
            storage_context = StorageContext.from_defaults(
                persist_dir=f"{table_index_dir}/{table_name}"
            )
            # load index
            index = load_index_from_storage(
                storage_context, index_id="vector_index"
            )
        vector_index_dict[table_name] = index
    return vector_index_dict


vector_index_dict = index_all_tables(sql_database)
Indexing rows in table: academy_awards_and_nominations_1972
Indexing rows in table: annual_traffic_accident_deaths
Indexing rows in table: bad_boy_artists_album_release_summary
Indexing rows in table: bbc_radio_services_cost_comparison_2012_2013
Indexing rows in table: binary_encoding_probabilities
Indexing rows in table: boxing_match_results_summary
Indexing rows in table: cancer_related_genes_and_functions
Indexing rows in table: diane_drummond_awards_nominations
Indexing rows in table: diane_drummond_oscar_nominations_and_wins
Indexing rows in table: diane_drummond_single_chart_performance
Indexing rows in table: euro_2020_group_stage_results
Indexing rows in table: experiment_drop_events_timeline
Indexing rows in table: filmography_of_diane_drummond
Indexing rows in table: grammy_awards_summary_for_wilco
Indexing rows in table: historical_college_football_records
Indexing rows in table: italian_ministers_term_dates
Indexing rows in table: kodachrome_film_types_and_dates
Indexing rows in table: missing_persons_case_summary
Indexing rows in table: monthly_climate_statistics
Indexing rows in table: monthly_climate_statistics_summary
Indexing rows in table: monthly_weather_statistics
Indexing rows in table: multilingual_greetings_and_phrases
Indexing rows in table: municipalities_merger_summary
Indexing rows in table: new_mexico_government_officials
Indexing rows in table: norwegian_club_performance_summary
Indexing rows in table: ohio_private_schools_summary
Indexing rows in table: progressive_rock_album_chart_positions
Indexing rows in table: regional_airports_usage_summary
Indexing rows in table: south_dakota_radio_stations
Indexing rows in table: triple_crown_winners_summary
Indexing rows in table: uk_ministers_and_titles_history
Indexing rows in table: voter_registration_status_by_party
Indexing rows in table: voter_registration_summary_by_party
Indexing rows in table: yamato_district_population_density
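To verify the fuzzy matching this enables (an illustrative check; the table name and row come from the verbose output shown later), you can query one table's index directly:

# Retrieve the row most similar to a query that omits the punctuation
# stored in the database ('The Notorious B.I.G').
row_retriever = vector_index_dict[
    "bad_boy_artists_album_release_summary"
].as_retriever(similarity_top_k=1)
for node in row_retriever.retrieve("The Notorious BIG"):
    print(node.get_content())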
Define Expanded Table Parsing¶
We expand the capability of our table parsing so that it not only returns the relevant table schemas, but also returns relevant rows for each table schema.
It now takes in not just the table_schema_objs (the output of the table retriever), but also the original query_str, which is then used for vector retrieval of relevant rows.
from llama_index.core.retrievers import SQLRetriever
from typing import List
sql_retriever = SQLRetriever(sql_database)
def get_table_context_and_rows_str(
    query_str: str,
    table_schema_objs: List[SQLTableSchema],
    verbose: bool = False,
):
    """Get table context string."""
    context_strs = []
    for table_schema_obj in table_schema_objs:
        # first append table info + additional context
        table_info = sql_database.get_single_table_info(
            table_schema_obj.table_name
        )
        if table_schema_obj.context_str:
            table_opt_context = " The table description is: "
            table_opt_context += table_schema_obj.context_str
            table_info += table_opt_context

        # also lookup vector index to return relevant table rows
        vector_retriever = vector_index_dict[
            table_schema_obj.table_name
        ].as_retriever(similarity_top_k=2)
        relevant_nodes = vector_retriever.retrieve(query_str)
        if len(relevant_nodes) > 0:
            table_row_context = "\nHere are some relevant example rows (values in the same order as columns above)\n"
            for node in relevant_nodes:
                table_row_context += str(node.get_content()) + "\n"
            table_info += table_row_context

        if verbose:
            print(f"> Table Info: {table_info}")

        context_strs.append(table_info)
    return "\n\n".join(context_strs)
Define Expanded Workflow¶
We reuse the workflow from Section 1, but swap in the expanded table parsing defined above.
Since it is easy to subclass and extend an existing workflow with more advanced, customized steps, here we define a new workflow that overrides the existing retrieve_tables step to also return relevant rows.
from llama_index.core.workflow import (
    Workflow,
    StartEvent,
    StopEvent,
    step,
    Context,
    Event,
)


class TextToSQLWorkflow2(TextToSQLWorkflow1):
    """Text-to-SQL Workflow that does query-time row AND table retrieval."""

    @step
    def retrieve_tables(
        self, ctx: Context, ev: StartEvent
    ) -> TableRetrieveEvent:
        """Retrieve tables."""
        table_schema_objs = self.obj_retriever.retrieve(ev.query)
        table_context_str = get_table_context_and_rows_str(
            ev.query, table_schema_objs, verbose=self._verbose
        )
        return TableRetrieveEvent(
            table_context_str=table_context_str, query=ev.query
        )
Since the overall sequence of steps is the same, the graph should look the same.
from llama_index.utils.workflow import draw_all_possible_flows
draw_all_possible_flows(
    TextToSQLWorkflow2, filename="text_to_sql_table_retrieval.html"
)
text_to_sql_table_retrieval.html
Run Some Queries¶
We can now ask about relevant entries even if they don't exactly match the entries in the database.
workflow2 = TextToSQLWorkflow2(
    obj_retriever,
    text2sql_prompt,
    sql_retriever,
    response_synthesis_prompt,
    llm,
    verbose=True,
)

response = await workflow2.run(
    query="What was the year that The Notorious BIG was signed to Bad Boy?"
)
print(str(response))
Running step retrieve_tables
VERBOSE: True
> Table Info: Table 'bad_boy_artists_album_release_summary' has columns: Act (VARCHAR), Year_signed (INTEGER), _Albums_released_under_Bad_Boy (VARCHAR), . The table description is: A summary of artists signed to Bad Boy Records along with the year they were signed and the number of albums they released.
Here are some relevant example rows (values in the same order as columns above)
('The Notorious B.I.G', 1993, '5')

> Table Info: Table 'filmography_of_diane_drummond' has columns: Year (INTEGER), Title (VARCHAR), Role (VARCHAR), Notes (VARCHAR), . The table description is: A list of film and television roles played by Diane Drummond from 1995 to 2001.
Here are some relevant example rows (values in the same order as columns above)
(2013, 'L.A. Slasher', 'The Actress', None)

> Table Info: Table 'progressive_rock_album_chart_positions' has columns: Year (INTEGER), Title (VARCHAR), Chart_Positions_UK (VARCHAR), Chart_Positions_US (VARCHAR), Chart_Positions_NL (VARCHAR), Comments (VARCHAR), . The table description is: Chart positions of progressive rock albums in the UK, US, and NL from 1969 to 1981.
Here are some relevant example rows (values in the same order as columns above)
(1977, 'Novella', '–', '46', '–', '1977 (January in US, August in UK, as the band moved to the Warner Bros Music Group)')

Step retrieve_tables produced event TableRetrieveEvent
Running step generate_sql
Step generate_sql produced event TextToSQLEvent
Running step generate_response
Step generate_response produced event StopEvent
assistant: The Notorious B.I.G. was signed to Bad Boy Records in 1993.