langchain_text_splitters.base.Tokenizer

class langchain_text_splitters.base.Tokenizer(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]])[source]

Tokenizer data class.

Attributes

chunk_overlap

Overlap in tokens between chunks

tokens_per_chunk

Maximum number of tokens per chunk

decode

Function to decode a list of token ids to a string

encode

Function to encode a string to a list of token ids
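
The encode and decode callables only need to be inverse mappings between a string and a list of integer token ids. A minimal, dependency-free sketch (illustrative only, not part of the generated reference) that uses Unicode code points as token ids:

from langchain_text_splitters.base import Tokenizer

# Illustrative only: each character's Unicode code point serves as its token id.
# Any pair of functions mapping str -> List[int] and List[int] -> str will do.
char_tokenizer = Tokenizer(
    chunk_overlap=5,        # tokens shared between consecutive chunks
    tokens_per_chunk=50,    # maximum number of tokens per chunk
    decode=lambda ids: "".join(chr(i) for i in ids),
    encode=lambda text: [ord(c) for c in text],
)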

Methods

__init__(chunk_overlap, tokens_per_chunk, ...)

Parameters
  • chunk_overlap (int) –

  • tokens_per_chunk (int) –

  • decode (Callable[[List[int]], str]) –

  • encode (Callable[[str], List[int]]) –

Return type

None

__init__(chunk_overlap: int, tokens_per_chunk: int, decode: Callable[[List[int]], str], encode: Callable[[str], List[int]]) → None
Parameters
  • chunk_overlap (int) – Overlap in tokens between chunks

  • tokens_per_chunk (int) – Maximum number of tokens per chunk

  • decode (Callable[[List[int]], str]) – Function to decode a list of token ids to a string

  • encode (Callable[[str], List[int]]) – Function to encode a string to a list of token ids

Return type

None
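
A usage sketch pairing Tokenizer with a real BPE tokenizer. It assumes tiktoken is installed and that split_text_on_tokens (also defined in langchain_text_splitters.base) is the helper that consumes this data class; adjust names if your version differs:

import tiktoken

from langchain_text_splitters.base import Tokenizer, split_text_on_tokens

enc = tiktoken.get_encoding("cl100k_base")

tokenizer = Tokenizer(
    chunk_overlap=10,       # overlap in tokens between consecutive chunks
    tokens_per_chunk=256,   # maximum number of tokens per chunk
    decode=enc.decode,      # List[int] -> str
    encode=lambda text: enc.encode(text),  # str -> List[int]
)

# Split into chunks of at most 256 tokens, with 10 tokens of overlap between them.
chunks = split_text_on_tokens(text="some long document ...", tokenizer=tokenizer)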