Neo4j
Neo4j is a graph database management system developed by Neo4j, Inc.
The data elements Neo4j stores are nodes, edges connecting them, and attributes of nodes and edges. Described by its developers as an ACID-compliant transactional database with native graph storage and processing, Neo4j is available in a non-open-source "community edition" licensed with a modification of the GNU General Public License, with online backup and high availability extensions licensed under a closed-source commercial license. Neo also licenses Neo4j with these extensions under closed-source commercial terms.
This notebook shows how to use LLMs to provide a natural language interface to a graph database that you can query with the Cypher query language.
Cypher is a declarative graph query language that allows for expressive and efficient data querying in a property graph.
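For example, the following Cypher pattern (a minimal illustration only, assuming a graph with Actor and Movie nodes like the one seeded later in this notebook) returns the names of all actors connected to a movie:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: "Top Gun"})
RETURN a.name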
Setup
You will need a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or run a Docker container.
You can run a local Docker container by executing the following script:
docker run \
--name neo4j \
-p 7474:7474 -p 7687:7687 \
-d \
-e NEO4J_AUTH=neo4j/password \
-e NEO4J_PLUGINS=\[\"apoc\"\] \
neo4j:latest
If you are using the Docker container, you need to wait a couple of seconds for the database to start.
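If you prefer not to guess how long to wait, a small polling loop like the sketch below can block until the database accepts Bolt connections. This is an optional helper, not part of the original setup; it assumes the official neo4j Python driver is installed (pip install neo4j) and reuses the credentials from the Docker command above.
# Optional helper (assumes the `neo4j` Python driver is installed): poll until Neo4j accepts Bolt connections.
import time

from neo4j import GraphDatabase
from neo4j.exceptions import ServiceUnavailable


def wait_for_neo4j(uri="bolt://localhost:7687", auth=("neo4j", "password"), timeout=60):
    deadline = time.time() + timeout
    while True:
        try:
            with GraphDatabase.driver(uri, auth=auth) as driver:
                driver.verify_connectivity()  # raises if the server is not reachable yet
            return
        except ServiceUnavailable:
            if time.time() > deadline:
                raise
            time.sleep(1)


wait_for_neo4j()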
from langchain_neo4j import GraphCypherQAChain, Neo4jGraph
from langchain_openai import ChatOpenAI
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
We default to OpenAI models in this guide.
import getpass
import os
if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
Seeding the database
Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times.
graph.query(
"""
MERGE (m:Movie {name:"Top Gun", runtime: 120})
WITH m
UNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)
[]
Refresh graph schema information
If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.
graph.refresh_schema()
print(graph.schema)
Node properties:
Movie {runtime: INTEGER, name: STRING}
Actor {name: STRING}
Relationship properties:
The relationships:
(:Actor)-[:ACTED_IN]->(:Movie)
Enhanced schema information
Choosing the enhanced schema version enables the system to automatically scan for example values within the database and calculate some distribution metrics. For example, if a node property has fewer than 10 distinct values, we return all possible values in the schema. Otherwise, only a single example value per node and relationship property is returned.
enhanced_graph = Neo4jGraph(
url="bolt://localhost:7687",
username="neo4j",
password="password",
enhanced_schema=True,
)
print(enhanced_graph.schema)
Node properties:
- **Movie**
- `runtime`: INTEGER Min: 120, Max: 120
- `name`: STRING Available options: ['Top Gun']
- **Actor**
- `name`: STRING Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']
Relationship properties:
The relationships:
(:Actor)-[:ACTED_IN]->(:Movie)
Querying the graph
We can now use the graph Cypher QA chain to ask questions of the graph.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, allow_dangerous_requests=True
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}
Limit the number of results
You can limit the number of results from the Cypher QA Chain using the top_k parameter. The default is 10.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    top_k=2,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': 'Tom Cruise, Val Kilmer played in Top Gun.'}
Return intermediate results
You can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameter.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    return_intermediate_steps=True,
    allow_dangerous_requests=True,
)
result = chain.invoke({"query": "Who played in Top Gun?"})
print(f"Intermediate steps: {result['intermediate_steps']}")
print(f"Final answer: {result['result']}")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\nWHERE m.name = 'Top Gun'\nRETURN a.name"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]
Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.
Return direct results
You can return direct results from the Cypher QA Chain using the return_direct parameter.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    return_direct=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': [{'a.name': 'Tom Cruise'},
{'a.name': 'Val Kilmer'},
{'a.name': 'Anthony Edwards'},
{'a.name': 'Meg Ryan'}]}
Add examples in the Cypher generation prompt
You can define the Cypher statements you want the LLM to generate for particular questions.
from langchain_core.prompts.prompt import PromptTemplate
CYPHER_GENERATION_TEMPLATE = """Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Schema:
{schema}
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
Examples: Here are a few examples of generated Cypher statements for particular questions:
# How many people played in Top Gun?
MATCH (m:Movie {{name:"Top Gun"}})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors
The question is:
{question}"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    verbose=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "How many people played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (m:Movie {name:"Top Gun"})<-[:ACTED_IN]-()
RETURN count(*) AS numberOfActors
Full Context:
[{'numberOfActors': 4}]
> Finished chain.
{'query': 'How many people played in Top Gun?',
'result': 'There were 4 actors in Top Gun.'}
Use separate LLMs for Cypher and answer generation
You can define separate LLMs for Cypher generation and answer generation using the cypher_llm and qa_llm parameters.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}
Ignore specified node and relationship types
You can use the include_types or exclude_types parameters to ignore parts of the graph schema when generating Cypher statements.
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    exclude_types=["Movie"],
    allow_dangerous_requests=True,
)
# Inspect graph schema
print(chain.graph_schema)
Node properties are the following:
Actor {name: STRING}
Relationship properties are the following:
The relationships are the following:
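Conversely, include_types keeps only the listed node and relationship types in the schema passed to the LLM. A sketch of the mirrored configuration (assuming the same graph and models as above) could look like this:
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo-16k"),
    verbose=True,
    include_types=["Actor", "ACTED_IN"],
    allow_dangerous_requests=True,
)
# Inspect graph schema
print(chain.graph_schema)
Note that include_types and exclude_types are meant to be used one at a time, not together on the same chain.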
Validate generated Cypher statements
You can use the validate_cypher parameter to validate and correct relationship directions in generated Cypher statements.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    validate_cypher=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}
Provide context from database results as tool/function output
You can use the use_function_response parameter to pass context from database results to the LLM as tool/function output. This method improves the accuracy and relevance of responses, as the LLM follows the provided context more closely.
You will need to use an LLM with native function-calling support to use this feature.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    use_function_response=True,
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': 'The main actors in Top Gun are Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan.'}
You can customize the system message by providing function_response_system to instruct the model on how to generate answers.
Note that qa_prompt has no effect when use_function_response is used.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),
    graph=graph,
    verbose=True,
    use_function_response=True,
    function_response_system="Respond as a pirate!",
    allow_dangerous_requests=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)
WHERE m.name = 'Top Gun'
RETURN a.name
Full Context:
[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]
> Finished chain.
{'query': 'Who played in Top Gun?',
'result': "Arrr matey! In the film Top Gun, ye be seein' Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan sailin' the high seas of the sky! Aye, they be a fine crew of actors, they be!"}