TextGen
GitHub: oobabooga/text-generation-webui is a Gradio web UI for running large language models such as LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
This example goes over how to use LangChain to interact with LLM models through the text-generation-webui API integration.
Please make sure you have text-generation-webui configured and an LLM installed. Installation via the one-click installer appropriate for your OS is recommended.
Once text-generation-webui is installed and confirmed working through the web interface, enable the api option either on the web model configuration tab or by adding the runtime argument --api to your start command.
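For example, if you launch the server script directly (the exact launch command depends on how you installed; the one-click installers use their own start scripts):

```bash
# Start text-generation-webui with the API enabled.
# The blocking API listens on port 5000 and the streaming
# (websocket) API on port 5005 by default.
python server.py --api
```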
Set model_url and run the example
```python
from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import TextGen
from langchain_core.prompts import PromptTemplate

# Default port for the text-generation-webui blocking API
model_url = "http://localhost:5000"

set_debug(True)  # log the full prompt and response for each call

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
llm = TextGen(model_url=model_url)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
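Note that LLMChain and its run method are deprecated in newer LangChain releases; assuming a recent langchain-core, an equivalent chain can be composed with the pipe operator:

```python
# Equivalent chain using LCEL pipe syntax: the prompt output
# feeds directly into the LLM.
chain = prompt | llm
print(chain.invoke({"question": question}))
```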
Streaming Version
You should install websocket-client to use this feature.
```bash
pip install websocket-client
```
```python
from langchain.chains import LLMChain
from langchain.globals import set_debug
from langchain_community.llms import TextGen
from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_core.prompts import PromptTemplate

# The streaming API is served over a websocket on a separate port (5005 by default)
model_url = "ws://localhost:5005"

set_debug(True)

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
llm = TextGen(
    model_url=model_url,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],  # print each token as it arrives
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
Alternatively, you can consume the tokens directly with the stream method:

```python
llm = TextGen(model_url=model_url, streaming=True)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'", stop=["'", "\n"]):
    print(chunk, end="", flush=True)
```
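Streaming callbacks are not limited to printing to stdout. As a minimal sketch, a custom handler can collect tokens as they arrive (TokenCollector is a hypothetical name, not part of LangChain):

```python
from langchain_core.callbacks import BaseCallbackHandler

class TokenCollector(BaseCallbackHandler):
    """Hypothetical handler: accumulate streamed tokens in a list."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per generated token when streaming=True
        self.tokens.append(token)

collector = TokenCollector()
llm = TextGen(model_url=model_url, streaming=True, callbacks=[collector])
llm.invoke("Say hello.")
print("".join(collector.tokens))
```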
Related
- LLM conceptual guide
- LLM how-to guides