langchain.chains.structured_output.base.create_openai_fn_runnable
- langchain.chains.structured_output.base.create_openai_fn_runnable(functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]], llm: Runnable, prompt: Optional[BasePromptTemplate] = None, *, enforce_single_function_usage: bool = True, output_parser: Optional[Union[BaseOutputParser, BaseGenerationOutputParser]] = None, **llm_kwargs: Any) → Runnable
[Deprecated] Create a runnable sequence that uses OpenAI functions.
- Parameters:
    functions: A sequence of either dictionaries, pydantic.BaseModel classes, or Python functions. If dictionaries are passed in, they are assumed to already be valid OpenAI functions. If only a single function is passed in, the model will be forced to use that function. pydantic.BaseModels and Python functions should have docstrings describing what the function does. For best results, pydantic.BaseModels should have descriptions of the parameters, and Python functions should have Google Python style argument descriptions in their docstrings. Additionally, Python functions should only use primitive types (str, int, float, bool) or pydantic.BaseModels for arguments. (An illustrative sketch of passing a plain Python function follows this list.)
    llm: The language model to use, assumed to support the OpenAI function-calling API.
    prompt: A BasePromptTemplate to pass to the model.
    enforce_single_function_usage: Only used if a single function is passed in. If True, the model will be forced to use the given function. If False, the model will be given the option of using the given function or not.
    output_parser: A BaseLLMOutputParser to use for parsing the model output. By default this will be inferred from the function types. If pydantic.BaseModels are passed in, the OutputParser will try to parse the output using those. Otherwise the model output will simply be parsed as JSON. If multiple functions are passed in and they are not pydantic.BaseModels, the chain output will include both the name of the function that was returned and the arguments to pass to the function.
    **llm_kwargs: Additional named arguments to pass to the language model.
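As a minimal sketch of the plain-function case described above (the record_dog function and the model choice are illustrative assumptions, not part of this reference):

    from langchain.chains.structured_output import create_openai_fn_runnable
    from langchain_openai import ChatOpenAI

    def record_dog(name: str, color: str, fav_food: str) -> None:
        """Record some identifying information about a dog.

        Args:
            name: The dog's name.
            color: The dog's color.
            fav_food: The dog's favorite food.
        """

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    # A single non-pydantic function with the default
    # enforce_single_function_usage=True forces the model to call record_dog;
    # per the output_parser notes above, the result is parsed as JSON arguments.
    runnable = create_openai_fn_runnable([record_dog], llm)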
- Returns:
    A runnable sequence that, when run, will pass the given functions to the model.
- Example:
    from typing import Optional

    from langchain.chains.structured_output import create_openai_fn_runnable
    from langchain_openai import ChatOpenAI
    from langchain_core.pydantic_v1 import BaseModel, Field

    class RecordPerson(BaseModel):
        '''Record some identifying information about a person.'''
        name: str = Field(..., description="The person's name")
        age: int = Field(..., description="The person's age")
        fav_food: Optional[str] = Field(None, description="The person's favorite food")

    class RecordDog(BaseModel):
        '''Record some identifying information about a dog.'''
        name: str = Field(..., description="The dog's name")
        color: str = Field(..., description="The dog's color")
        fav_food: Optional[str] = Field(None, description="The dog's favorite food")

    llm = ChatOpenAI(model="gpt-4", temperature=0)
    structured_llm = create_openai_fn_runnable([RecordPerson, RecordDog], llm)
    structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
    # -> RecordDog(name="Harry", color="brown", fav_food="chicken")
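When the functions are supplied as raw dictionaries rather than pydantic models, the output is simply parsed as JSON, per the output_parser notes above. A hedged sketch (the schema and the returned value shown are invented for illustration):

    from langchain.chains.structured_output import create_openai_fn_runnable
    from langchain_openai import ChatOpenAI

    # A raw OpenAI function definition; dictionaries are assumed to already
    # be valid OpenAI functions, so no conversion is performed.
    record_dog = {
        "name": "RecordDog",
        "description": "Record some identifying information about a dog.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "description": "The dog's name"},
                "color": {"type": "string", "description": "The dog's color"},
            },
            "required": ["name", "color"],
        },
    }

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    structured_llm = create_openai_fn_runnable([record_dog], llm)
    structured_llm.invoke("Harry was a chubby brown beagle who loved chicken")
    # -> e.g. {"name": "Harry", "color": "brown"} (parsed JSON arguments)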
Notes
Deprecated since version 0.1.14: LangChain has introduced a method called with_structured_output that is available on ChatModels capable of tool calling. You can read more about the method here: https://python.langchain.com/docs/modules/model_io/chat/structured_output/ Please follow our extraction use case documentation for more guidelines on how to do information extraction with LLMs: https://python.langchain.com/docs/use_cases/extraction/. If you notice other issues, please provide feedback here: https://github.com/langchain-ai/langchain/discussions/18154

Use

    from langchain_core.pydantic_v1 import BaseModel, Field
    from langchain_anthropic import ChatAnthropic

    class Joke(BaseModel):
        setup: str = Field(description="The setup of the joke")
        punchline: str = Field(description="The punchline to the joke")

    # Or any other chat model that supports tools.
    # Please reference the documentation of structured_output
    # to see an up-to-date list of which models support
    # with_structured_output.
    model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0)
    structured_llm = model.with_structured_output(Joke)
    structured_llm.invoke("Tell me a joke about cats. Make sure to call the Joke function.")

instead.
- Parameters
functions (Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]]) –
llm (Runnable) –
prompt (Optional[BasePromptTemplate]) –
enforce_single_function_usage (bool) –
output_parser (Optional[Union[BaseOutputParser, BaseGenerationOutputParser]]) –
llm_kwargs (Any) –
- Return type
    Runnable