IBM watsonx.ai

WatsonxLLM is a wrapper for IBM watsonx.ai foundation models.

This example shows how to communicate with watsonx.ai models using LangChain.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| WatsonxLLM | langchain-ibm | ❌ | ❌ | ✅ | PyPI - Downloads | PyPI - Version |

Setup

To access IBM watsonx.ai models you'll need to create an IBM watsonx.ai account, get an API key, and install the langchain-ibm integration package.

Credentials

The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.

import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Additionally you are able to pass additional secrets as environment variables.

import os

os.environ["WATSONX_URL"] = "your service instance url"
os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster"
os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster"
os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster"
os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster"

Installation

The LangChain IBM integration lives in the langchain-ibm package:

!pip install -qU langchain-ibm

Instantiation

You might need to adjust model parameters for different models or tasks. For details, refer to the documentation.

parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
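
If you prefer named constants over raw string keys, the ibm-watsonx-ai SDK also exposes a GenTextParamsMetaNames enum; a minimal sketch of the equivalent dictionary, assuming that metanames module is available in your installed SDK version:

from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

# Same generation parameters as above, spelled with the SDK's metanames constants
parameters = {
    GenParams.DECODING_METHOD: "sample",
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.TEMPERATURE: 0.5,
    GenParams.TOP_K: 50,
    GenParams.TOP_P: 1,
}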

Initialize the WatsonxLLM class with the previously set parameters.

Note:

  • To provide context for the API calls, you must pass the project_id or space_id. For more information see the documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs listed here.

In this example, we'll use the project_id and the Dallas URL.

You need to specify the model_id that will be used for inferencing. You can find a list of all the available models in the documentation.

from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

API Reference: WatsonxLLM
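
If you work in a deployment space rather than a project, you can pass a space_id in place of the project_id; a minimal sketch, with the space ID as a placeholder:

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    space_id="PASTE YOUR SPACE_ID HERE",  # deployment space instead of a project
    params=parameters,
)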

Alternatively, you can use Cloud Pak for Data credentials. For details, see the documentation.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Instead of model_id, you can also pass the deployment_id of a previously tuned model. The entire model tuning workflow is described here.

watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

For certain requirements, there is the option to pass IBM's APIClient object into the WatsonxLLM class.

from ibm_watsonx_ai import APIClient

api_client = APIClient(...)

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    watsonx_client=api_client,
)
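
The APIClient(...) above is intentionally elided; one common way to construct it, sketched under the assumption of ibm-watsonx-ai >= 1.0 (where the Credentials helper is available) and with placeholder values:

from ibm_watsonx_ai import APIClient, Credentials

# Placeholder endpoint and key; substitute your own
api_client = APIClient(
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="PASTE YOUR API KEY HERE",
    ),
    project_id="PASTE YOUR PROJECT_ID HERE",
)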

You can also pass IBM's ModelInference object into the WatsonxLLM class.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(...)

watsonx_llm = WatsonxLLM(watsonx_model=model)

Invocation

To obtain completions, you can call the model directly using a string prompt.

# Calling a single prompt

watsonx_llm.invoke("Who is man's best friend?")
"Man's best friend is his dog. Dogs are man's best friend because they are always there for you, they never judge you, and they love you unconditionally. Dogs are also great companions and can help reduce stress levels. "
# Calling multiple prompts

watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound. Greyhounds can run up to 45 mph, which is about the same speed as a Usain Bolt.', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Labrador Retriever is a breed of retriever that was bred for hunting. They are a very smart breed and are very easy to train. They are also very loyal and will make great companions. ', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 82, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None}, run=[RunInfo(run_id=UUID('750b8a0f-8846-456d-93d0-e039e95b1276')), RunInfo(run_id=UUID('aa4c2a1c-5b08-4fcf-87aa-50228de46db5'))], type='LLMResult')
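
As the llm_output field above shows, the returned LLMResult also carries token accounting; a quick sketch of reading it back:

# Inspect token usage reported by the service
result = watsonx_llm.generate(["The fastest dog in the world?"])
print(result.llm_output["token_usage"])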

Streaming the model output

You can stream the model output.

for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
My favorite breed of dog is a Labrador Retriever. They are my favorite breed because they are my favorite color, yellow. They are also very smart and easy to train.
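
Like every LangChain Runnable, WatsonxLLM also exposes async counterparts such as astream; a minimal sketch, assuming you run it from a regular script (in a notebook you could await the coroutine directly):

import asyncio

async def stream_answer():
    # Async variant of the streaming call above
    async for chunk in watsonx_llm.astream(
        "Describe your favorite breed of dog and why it is your favorite."
    ):
        print(chunk, end="")

asyncio.run(stream_answer())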

Chaining

Create a PromptTemplate object which will be responsible for creating a random question.

from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "

prompt = PromptTemplate.from_template(template)
API Reference: PromptTemplate

Provide a topic and run the chain.

llm_chain = prompt | watsonx_llm

topic = "dog"

llm_chain.invoke(topic)
'What is the origin of the name "Pomeranian"?'
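
Because the chain is itself a Runnable, you can also fan out over several topics at once with batch; a short sketch:

# Run the chain over several topics in one call
llm_chain.batch(["dog", "cat", "horse"])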

API reference

For detailed documentation of all WatsonxLLM features and configurations head to the API reference.

