# LangChain Decorators ✨

Disclaimer: `LangChain decorators` is not created by the LangChain team and is not supported by it.

LangChain decorators is a layer on top of LangChain that provides syntactic sugar 🍭 for writing custom langchain prompts and chains.

For feedback, issues, or contributions - please raise an issue here: ju-bezdek/langchain-decorators

Main principles and benefits:

  • a more pythonic way of writing code
  • write multiline prompts that won't break your code flow with indentation
  • make use of IDE built-in support for hinting, type checking, and documentation popups to quickly peek at a function and see the prompt, the parameters it consumes, etc.
  • leverage all the power of the 🦜🔗 LangChain ecosystem
  • support for optional parameters
  • easily share parameters between prompts by binding them to one class

Here is a simple example of code written with LangChain Decorators ✨:


``` python
from langchain_decorators import llm_prompt

@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers")->str:
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    return

# run it naturally
write_me_short_post(topic="starwars")
# or
write_me_short_post(topic="starwars", platform="reddit")
```

## Quick start

### Installation

```bash
pip install langchain_decorators
```

### Examples

A good way to start is to review the examples, e.g. the Colab notebook linked in the More examples section below.

## Defining other parameters

Here we are just marking a function as a prompt with the `llm_prompt` decorator, effectively turning it into an LLMChain, instead of running it manually.

A standard LLMChain takes many more init parameters than just inputs_variables and a prompt... this implementation detail is hidden by the decorator. Here is how it works:

  1. Using global settings:

``` python
# define global settings for all prompts (if not set - chatGPT is the current default)
from langchain_openai import ChatOpenAI
from langchain_decorators import GlobalSettings

GlobalSettings.define_settings(
    default_llm=ChatOpenAI(temperature=0.0),  # this is the default... you can change it here globally
    default_streaming_llm=ChatOpenAI(temperature=0.0, streaming=True),  # this is the default... will be used for streaming
)
```
  2. Using predefined prompt types:

``` python
# You can change the default prompt types
from langchain_openai import ChatOpenAI
from langchain_decorators import llm_prompt, PromptTypes, PromptTypeSettings

PromptTypes.AGENT_REASONING.llm = ChatOpenAI()

# Or you can just define your own ones:
class MyCustomPromptTypes(PromptTypes):
    GPT4 = PromptTypeSettings(llm=ChatOpenAI(model="gpt-4"))

@llm_prompt(prompt_type=MyCustomPromptTypes.GPT4)
def write_a_complicated_code(app_idea:str)->str:
    ...
```

  3. Defining the settings directly in the decorator:

``` python
from langchain_openai import OpenAI

@llm_prompt(
    llm=OpenAI(temperature=0.7),
    stop_tokens=["\nObservation"],
    ...
)
def creative_writer(book_title:str)->str:
    ...
```

API Reference: OpenAI

## Passing a memory and/or callbacks

To pass any of these, just declare them in the function (or use kwargs to pass anything):


``` python
from langchain.memory import SimpleMemory

@llm_prompt()
async def write_me_short_post(topic:str, platform:str="twitter", memory:SimpleMemory = None):
    """
    {history_key}
    Write me a short header for my post about {topic} for {platform} platform.
    (Max 15 words)
    """
    pass

await write_me_short_post(topic="old movies")
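```

Callbacks are not shown above; a minimal sketch of passing one might look like this, assuming extra kwargs such as `callbacks` are forwarded to the underlying chain (the handler class itself is standard LangChain):

``` python
# a hedged sketch - assumes the decorator forwards extra kwargs like `callbacks`
# to the underlying chain; StdOutCallbackHandler is a standard LangChain handler
from langchain.callbacks import StdOutCallbackHandler

await write_me_short_post(topic="old movies", callbacks=[StdOutCallbackHandler()])
```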

## Simplified streaming

If we want to leverage streaming:

  • we need to define the prompt as an async function
  • turn on streaming in the decorator, or define a PromptType with streaming enabled
  • capture the stream using StreamingContext

This way we just mark which prompt should be streamed, without needing to tinker with which LLM to use or to pass around and wire up a streaming handler into a particular part of our chain... just turn streaming on/off on the prompt/prompt type...

Streaming will happen only if we call the prompt inside a streaming context... there we can define a simple function to handle the stream.

``` python
# this code example is complete and should run as it is (in an async context, e.g. a notebook)

from langchain_decorators import StreamingContext, llm_prompt

# this will mark the prompt for streaming (useful if we want to stream just some prompts in our app... but don't want to pass and distribute the callback handlers)
# note that only async functions can be streamed (you will get an error if it's not)
@llm_prompt(capture_stream=True)
async def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    """
    pass


# just an arbitrary function to demonstrate the streaming... will be some websockets code in the real world
tokens=[]
def capture_stream_func(new_token:str):
    tokens.append(new_token)

# if we want to capture the stream, we need to wrap the execution into StreamingContext...
# this will allow us to capture the stream even if the prompt call is hidden inside a higher level method
# only the prompts marked with capture_stream will be captured here
with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
    result = await write_me_short_post(topic="old movies")
    print("Stream finished ... we can distinguish tokens thanks to alternating colors")


print("\nWe've captured",len(tokens),"tokens🎉\n")
print("Here is the result:")
print(result)
```
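
Note that the top-level `await` above only works where an event loop is already running, such as a Jupyter notebook. In a plain script you would wrap the calls in an asyncio entrypoint; a minimal sketch, reusing the definitions above:

``` python
import asyncio

async def main():
    # the StreamingContext wraps the awaited prompt call, exactly as above
    with StreamingContext(stream_to_stdout=True, callback=capture_stream_func):
        result = await write_me_short_post(topic="old movies")
    print(result)

asyncio.run(main())
```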

## Prompt declarations

By default, the prompt is the whole function docstring, unless you mark your prompt explicitly.

### Documenting your prompt

We can specify which part of the docstring is the prompt definition by marking it as a code block with the `<prompt>` language tag:

```` python
@llm_prompt
def write_me_short_post(topic:str, platform:str="twitter", audience:str = "developers"):
    """
    Here is a good way to write a prompt as part of a function docstring, with additional documentation for devs.

    It needs to be a code block, marked as a `<prompt>` language
    ```<prompt>
    Write me a short header for my post about {topic} for {platform} platform.
    It should be for {audience} audience.
    (Max 15 words)
    ```
    """
    return
````

Now only the code block above will be used as the prompt; the rest of the docstring serves as a description for developers. (It also has the nice benefit that an IDE like VS Code will display the prompt properly, not trying to parse it as Markdown, so newlines render correctly.)


## Chat messages prompt

For chat models it is very useful to define the prompt as a set of message templates... here is how to do it:

```` python
@llm_prompt
def simulate_conversation(human_input:str, agent_role:str="a pirate"):
    """
    ## System message
     - note the `:system` suffix inside the <prompt:_role_> tag

    ```<prompt:system>
    You are a {agent_role} hacker. You must act like one.
    You reply always in code, using python or javascript code block...
    for example:

    ... do not reply with anything else.. just with code - respecting your role.
    ```

    ## Human message
    (we are using the real roles enforced by the LLM - GPT supports system, assistant, user)
    ```<prompt:user>
    Hello, who are you
    ```
    a reply:

    ```<prompt:assistant>
    \``` python <<- escaping the inner code block with \ so that it stays part of the prompt
    def hello():
        print("Argh... hello you pesky pirate")
    \```
    ```

    we can also add some history using a placeholder
    ```<prompt:placeholder>
    {history}
    ```
    ```<prompt:user>
    {human_input}
    ```

    Now only the code blocks above will be used as the prompt; the rest of the docstring serves as a description for developers.
    (It also has the nice benefit that an IDE like VS Code will display the prompt properly, not trying to parse it as Markdown.)
    """
    pass
````

The roles here are model-native roles (assistant, user, system for chatGPT).
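
A hypothetical usage sketch (assuming, as in the memory section above, that extra values such as `history` can be passed through kwargs; an empty list stands in for a fresh conversation):

``` python
# hypothetical call - `history` fills the {history} placeholder, empty for a new chat
reply = simulate_conversation(human_input="Hello, who are you?", history=[])
print(reply)
```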



## Optional sections

- you can define whole sections of your prompt that should be optional
- if any input in the section is missing, the whole section won't be rendered

The syntax for this is as follows:

``` python
@llm_prompt
def prompt_with_optional_partials():
    """
    this text will be rendered always, but

    {? anything inside this block will be rendered only if all the {value}s parameters are not empty (None | "") ?}

    you can also place it in between the words
    this too will be rendered{? , but
    this block will be rendered only if {this_value} and {this_value}
    is not empty?} !
    """
```

## Output parsers

  • the llm_prompt decorator natively tries to detect the best output parser based on the output type (if not set, it returns the raw string)
  • list, dict and pydantic outputs are also supported natively (automatically)
``` python
# this code example is complete and should run as it is

from langchain_decorators import llm_prompt

@llm_prompt
def write_name_suggestions(company_business:str, count:int)->list:
    """ Write me {count} good name suggestions for company that {company_business}
    """
    pass

write_name_suggestions(company_business="sells cookies", count=5)
```

### More complex structures

For dict / pydantic outputs you need to specify the formatting instructions... this can be tedious, which is why you can let the output parser generate the instructions for you based on the (pydantic) model.

``` python
from langchain_decorators import llm_prompt
from pydantic import BaseModel, Field


class TheOutputStructureWeExpect(BaseModel):
    name:str = Field(description="The name of the company")
    headline:str = Field(description="The description of the company (for landing page)")
    employees:list[str] = Field(description="5-8 fake employee names with their positions")

@llm_prompt()
def fake_company_generator(company_business:str)->TheOutputStructureWeExpect:
    """ Generate a fake company that {company_business}
    {FORMAT_INSTRUCTIONS}
    """
    return

company = fake_company_generator(company_business="sells cookies")

# print the result nicely formatted
print("Company name: ",company.name)
print("company headline: ",company.headline)
print("company employees: ",company.employees)
```

## Binding the prompt to an object

```` python
from pydantic import BaseModel
from langchain_decorators import llm_prompt

class AssistantPersonality(BaseModel):
    assistant_name:str
    assistant_role:str
    field:str

    @property
    def a_property(self):
        return "whatever"

    def hello_world(self, function_kwarg:str=None):
        """
        We can reference any {field} or {a_property} inside our prompt... and combine it with {function_kwarg} in the method
        """

    @llm_prompt
    def introduce_your_self(self)->str:
        """
        ```<prompt:system>
        You are an assistant named {assistant_name}.
        Your role is to act as {assistant_role}
        ```
        ```<prompt:user>
        Introduce your self (in less than 20 words)
        ```
        """


personality = AssistantPersonality(assistant_name="John", assistant_role="a pirate")

print(personality.introduce_your_self(personality))
````



## More examples

- these and a few more examples are also available in the [colab notebook here](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=N4cf__D0E2Yk)
- including the [ReAct Agent re-implementation](https://colab.research.google.com/drive/1no-8WfeP6JaLD9yUtkPgym6x0G9ZYZOG#scrollTo=3bID5fryE2Yp) using purely langchain decorators
