langchain_text_splitters.character.CharacterTextSplitter¶
- class langchain_text_splitters.character.CharacterTextSplitter(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any)[source]¶
Splitting text that looks at characters.
Create a new TextSplitter.
Methods
__init__([separator, is_separator_regex])
    Create a new TextSplitter.
atransform_documents(documents, **kwargs)
    Asynchronously transform a list of documents.
create_documents(texts[, metadatas])
    Create documents from a list of texts.
from_huggingface_tokenizer(tokenizer, **kwargs)
    Text splitter that uses a HuggingFace tokenizer to count length.
from_tiktoken_encoder([encoding_name, ...])
    Text splitter that uses a tiktoken encoder to count length.
split_documents(documents)
    Split documents.
split_text(text)
    Split the incoming text and return chunks.
transform_documents(documents, **kwargs)
    Transform a sequence of documents by splitting them.
- Parameters
separator (str) –
is_separator_regex (bool) –
kwargs (Any) –
- Return type
None
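A minimal usage sketch; the sample text is illustrative, and chunk_size and chunk_overlap are inherited from the TextSplitter base class:

from langchain_text_splitters import CharacterTextSplitter

# Split on blank lines (the default separator); sizes here are illustrative.
splitter = CharacterTextSplitter(
    separator="\n\n",
    is_separator_regex=False,
    chunk_size=20,    # maximum characters per chunk
    chunk_overlap=0,  # characters shared between adjacent chunks
)
chunks = splitter.split_text("First paragraph.\n\nSecond paragraph.")
# Each paragraph fits in a chunk on its own but not together, so two chunks
# come back: ['First paragraph.', 'Second paragraph.']
print(chunks)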
- __init__(separator: str = '\n\n', is_separator_regex: bool = False, **kwargs: Any) None [source]¶
Create a new TextSplitter.
- Parameters
separator (str) –
is_separator_regex (bool) –
kwargs (Any) –
- Return type
None
- async atransform_documents(documents: Sequence[Document], **kwargs: Any) Sequence[Document] ¶
Asynchronously transform a list of documents.
- Parameters
documents (Sequence[Document]) – A sequence of Documents to be transformed.
- Returns
A list of transformed Documents.
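A short sketch of the async path, assuming an asyncio entry point; the "source" metadata key is a hypothetical example, and each chunk keeps the metadata of the Document it was split from:

import asyncio

from langchain_core.documents import Document
from langchain_text_splitters import CharacterTextSplitter

async def main() -> None:
    splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0)
    docs = [
        Document(
            page_content="First paragraph.\n\nSecond paragraph.",
            metadata={"source": "example.txt"},  # hypothetical metadata
        )
    ]
    # Splits every document in the sequence, carrying metadata onto each chunk.
    split_docs = await splitter.atransform_documents(docs)
    for doc in split_docs:
        print(doc.metadata, doc.page_content)

asyncio.run(main())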
- create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) List[Document] ¶
Create documents from a list of texts.
- Parameters
texts (List[str]) –
metadatas (Optional[List[dict]]) –
- Return type
List[Document]
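An illustrative sketch: the i-th metadata dict is attached to every chunk produced from the i-th text (the "source" key is a hypothetical example):

from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0)
docs = splitter.create_documents(
    texts=["First paragraph.\n\nSecond paragraph."],
    metadatas=[{"source": "example.txt"}],  # hypothetical metadata
)
for doc in docs:
    print(doc.metadata, doc.page_content)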
- classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) TextSplitter ¶
Text splitter that uses a HuggingFace tokenizer to count length.
- Parameters
tokenizer (Any) –
kwargs (Any) –
- Return type
TextSplitter
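A sketch assuming the transformers package is installed; GPT2TokenizerFast is one tokenizer that works here, and with it chunk_size is measured in tokens rather than characters:

from transformers import GPT2TokenizerFast

from langchain_text_splitters import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer,
    chunk_size=100,   # counted in tokens via the tokenizer
    chunk_overlap=0,
)
chunks = splitter.split_text("First paragraph.\n\nSecond paragraph.")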
- classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: Optional[str] = None, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) TS ¶
Text splitter that uses a tiktoken encoder to count length.
- Parameters
encoding_name (str) –
model_name (Optional[str]) –
allowed_special (Union[Literal['all'], AbstractSet[str]]) –
disallowed_special (Union[Literal['all'], Collection[str]]) –
kwargs (Any) –
- Return type
TS
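A sketch assuming the tiktoken package is installed. Passing model_name selects that model's encoding; otherwise encoding_name (default 'gpt2') is used directly:

from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",  # or pass model_name="gpt-4" instead
    chunk_size=100,   # counted in tokens under the tiktoken encoding
    chunk_overlap=0,
)
chunks = splitter.split_text("First paragraph.\n\nSecond paragraph.")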
Examples using CharacterTextSplitter¶
Linked example notebooks cover integrations including SurrealDB, SingleStoreDB, Meilisearch, Psychic, and OpenAI embeddings (langchain_community.embeddings.openai.OpenAIEmbeddings).