Friendli

Friendli enhances AI application performance and delivers cost savings with scalable, efficient deployment options, optimized for high-demand AI workloads.

This tutorial walks you through integrating Friendli with LangChain.

Setup

Ensure that langchain_community and friendli-client are installed.

pip install -U langchain-community friendli-client

Sign in to Friendli Suite to create a Personal Access Token, and set it as the FRIENDLI_TOKEN environment variable.

import getpass
import os
os.environ["FRIENDLI_TOKEN"] = getpass.getpass("Friendli Personal Access Token: ")

You can initialize a Friendli LLM by selecting the model you want to use. The default model is mixtral-8x7b-instruct-v0-1. You can check the available models at docs.friendli.ai.

from langchain_community.llms.friendli import Friendli
llm = Friendli(model="mixtral-8x7b-instruct-v0-1", max_tokens=100, temperature=0)
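If you prefer not to rely on an environment variable, the token can also be passed directly when constructing the LLM. A minimal sketch, assuming the constructor accepts a friendli_token parameter (check the API reference if your version differs):

from getpass import getpass
from langchain_community.llms.friendli import Friendli

# Assumed alternative: supply the token explicitly instead of via FRIENDLI_TOKEN.
llm = Friendli(
    model="mixtral-8x7b-instruct-v0-1",
    friendli_token=getpass("Friendli Personal Access Token: "),
    max_tokens=100,
    temperature=0,
)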

Usage

Friendli supports all the methods of LLM, including the async APIs.

You can use the invoke, batch, generate, and stream functionality.

llm.invoke("Tell me a joke.")
'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'
llm.batch(["Tell me a joke.", "Tell me a joke."])
['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"',
'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"']
llm.generate(["Tell me a joke.", "Tell me a joke."])
LLMResult(generations=[[Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')], [Generation(text='Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"')]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('a2009600-baae-4f5a-9f69-23b2bc916e4c')), RunInfo(run_id=UUID('acaf0838-242c-4255-85aa-8a62b675d046'))])
for chunk in llm.stream("Tell me a joke."):
    print(chunk, end="", flush=True)
Username checks out.
User 1: I'm not sure if you're being sarcastic or not, but I'll take it as a compliment.
User 0: I'm not being sarcastic. I'm just saying that your username is very fitting.
User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out"
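Since Friendli implements LangChain's standard Runnable interface, it also composes with other components such as prompt templates. A minimal sketch (the template and topic below are illustrative, not part of the original tutorial):

from langchain_core.prompts import PromptTemplate

# Illustrative LCEL chain: a prompt template piped into the Friendli LLM.
prompt = PromptTemplate.from_template("Tell me a joke about {topic}.")
chain = prompt | llm

# The composed chain exposes the same invoke/batch/stream methods as the bare LLM.
print(chain.invoke({"topic": "usernames"}))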

You can also use all the functionality of the async APIs: ainvoke, abatch, agenerate, and astream.

await llm.ainvoke("Tell me a joke.")
'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"'
await llm.abatch(["Tell me a joke.", "Tell me a joke."])
['Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"',
'Username checks out.\nUser 1: I\'m not sure if you\'re being sarcastic or not, but I\'ll take it as a compliment.\nUser 0: I\'m not being sarcastic. I\'m just saying that your username is very fitting.\nUser 1: Oh, I thought you were saying that I\'m a "dumbass" because I\'m a "dumbass" who "checks out"']
await llm.agenerate(["讲个笑话。", "讲个笑话。"])
LLMResult(generations=[[Generation(text="用户名检查通过。\n用户1:我不确定你是认真的还是开玩笑,但我会把它当作是赞美。\n用户0:我是认真的。我不确定你是认真的还是开玩笑。\n用户1:我是认真的。我不确定你是认真的还是开玩笑。\n用户0:我是认真的。我不确定")], [Generation(text="用户名检查通过。\n用户1:我不确定你是认真的还是开玩笑,但我会把它当作是赞美。\n用户0:我是认真的。我不确定你是认真的还是开玩笑。\n用户1:我是认真的。我不确定你是认真的还是开玩笑。\n用户0:我是认真的。我不确定")]], llm_output={'model': 'mixtral-8x7b-instruct-v0-1'}, run=[RunInfo(run_id=UUID('46144905-7350-4531-a4db-22e6a827c6e3')), RunInfo(run_id=UUID('e2b06c30-ffff-48cf-b792-be91f2144aa6'))])
async for chunk in llm.astream("Tell me a joke."):
    print(chunk, end="", flush=True)
Username checks out.
User 1: I'm not sure if you're being sarcastic or serious, but I'll take it as a compliment.
User 0: I'm not being sarcastic. I'm just saying that your username is very fitting.
User 1: Oh, I thought you were saying that I'm a "dumbass" because I'm a "dumbass" who "checks out".
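Because the async methods are coroutines, independent calls can also be scheduled concurrently with asyncio. A minimal sketch, assuming it runs as a standalone script rather than inside a notebook's event loop:

import asyncio

async def main() -> None:
    # Run two independent completions concurrently rather than one after another.
    jokes = await asyncio.gather(
        llm.ainvoke("Tell me a joke."),
        llm.ainvoke("Tell me a joke."),
    )
    for joke in jokes:
        print(joke)

asyncio.run(main())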
