Transforms

BaseGraphTransformation dataclass

BaseGraphTransformation(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)())

Bases: ABC

Abstract base class for graph transformations on a KnowledgeGraph.

transform abstractmethod async

transform(kg: KnowledgeGraph) -> Any

Abstract method to transform the KnowledgeGraph. Transformations should be idempotent, meaning that applying the transformation multiple times should yield the same result as applying it once.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    Any
        The transformed knowledge graph.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
async def transform(self, kg: KnowledgeGraph) -> t.Any:
    """
    Abstract method to transform the KnowledgeGraph. Transformations should be
    idempotent, meaning that applying the transformation multiple times should
    yield the same result as applying it once.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.Any
        The transformed knowledge graph.
    """
    pass

filter

filter(kg: KnowledgeGraph) -> KnowledgeGraph

Filters the KnowledgeGraph and returns the filtered graph.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be filtered. (required)

Returns:

    KnowledgeGraph
        The filtered knowledge graph.

Source code in ragas/src/ragas/testset/transforms/base.py
def filter(self, kg: KnowledgeGraph) -> KnowledgeGraph:
    """
    Filters the KnowledgeGraph and returns the filtered graph.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be filtered.

    Returns
    -------
    KnowledgeGraph
        The filtered knowledge graph.
    """

    return KnowledgeGraph(
        nodes=[node for node in kg.nodes if self.filter_nodes(node)],
        relationships=[
            rel
            for rel in kg.relationships
            if rel.source in kg.nodes and rel.target in kg.nodes
        ],
    )
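
The filter_nodes callable decides which nodes a transformation operates on. A minimal sketch of filtering by node type (the NoOpTransformation class is hypothetical, defined only to make the abstract base instantiable; Node, NodeType, and KnowledgeGraph come from ragas.testset.graph):

import typing as t
from dataclasses import dataclass

from ragas.testset.graph import KnowledgeGraph, Node, NodeType
from ragas.testset.transforms.base import BaseGraphTransformation


@dataclass
class NoOpTransformation(BaseGraphTransformation):
    """Hypothetical do-nothing subclass, used only to demonstrate `filter`."""

    async def transform(self, kg: KnowledgeGraph) -> t.Any:
        return kg

    def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
        return []


kg = KnowledgeGraph(
    nodes=[
        Node(type=NodeType.DOCUMENT, properties={"page_content": "a full document"}),
        Node(type=NodeType.CHUNK, properties={"page_content": "a single chunk"}),
    ]
)

# restrict the transformation to DOCUMENT nodes
noop = NoOpTransformation(filter_nodes=lambda n: n.type == NodeType.DOCUMENT)
filtered = noop.filter(kg)  # filtered.nodes now holds only the DOCUMENT node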

generate_execution_plan abstractmethod

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in sequence by the Executor. Upon execution, these coroutines write the transformation into the KnowledgeGraph.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Coroutine]
        A list of coroutines to be executed in parallel.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in sequence by the Executor. This
    coroutine will, upon execution, write the transformation into the KnowledgeGraph.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """
    pass

Extractor dataclass

Extractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)())

Bases: BaseGraphTransformation

Methods:

    transform
        Transforms the KnowledgeGraph by extracting properties from its nodes.
    extract
        Abstract method to extract a specific property from a node.

transform async

transform(kg: KnowledgeGraph) -> List[Tuple[Node, Tuple[str, Any]]]

Transforms the KnowledgeGraph by extracting properties from its nodes. Uses the filter method to filter the graph and the extract method to extract properties from each node.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Tuple[Node, Tuple[str, Any]]]
        A list of tuples where each tuple contains a node and the extracted property.

Examples:

>>> kg = KnowledgeGraph(nodes=[Node(id=1, properties={"name": "Node1"}), Node(id=2, properties={"name": "Node2"})])
>>> extractor = SomeConcreteExtractor()
>>> extractor.transform(kg)
[(Node(id=1, properties={"name": "Node1"}), ("property_name", "extracted_value")),
 (Node(id=2, properties={"name": "Node2"}), ("property_name", "extracted_value"))]
Source code in ragas/src/ragas/testset/transforms/base.py
async def transform(
    self, kg: KnowledgeGraph
) -> t.List[t.Tuple[Node, t.Tuple[str, t.Any]]]:
    """
    Transforms the KnowledgeGraph by extracting properties from its nodes. Uses
    the `filter` method to filter the graph and the `extract` method to extract
    properties from each node.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Tuple[Node, t.Tuple[str, t.Any]]]
        A list of tuples where each tuple contains a node and the extracted
        property.

    Examples
    --------
    >>> kg = KnowledgeGraph(nodes=[Node(id=1, properties={"name": "Node1"}), Node(id=2, properties={"name": "Node2"})])
    >>> extractor = SomeConcreteExtractor()
    >>> extractor.transform(kg)
    [(Node(id=1, properties={"name": "Node1"}), ("property_name", "extracted_value")),
     (Node(id=2, properties={"name": "Node2"}), ("property_name", "extracted_value"))]
    """
    filtered = self.filter(kg)
    return [(node, await self.extract(node)) for node in filtered.nodes]

extract abstractmethod async

extract(node: Node) -> Tuple[str, Any]

Abstract method to extract a specific property from a node.

Parameters:

    node : Node
        The node from which to extract the property. (required)

Returns:

    Tuple[str, Any]
        A tuple containing the property name and the extracted value.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
async def extract(self, node: Node) -> t.Tuple[str, t.Any]:
    """
    Abstract method to extract a specific property from a node.

    Parameters
    ----------
    node : Node
        The node from which to extract the property.

    Returns
    -------
    t.Tuple[str, t.Any]
        A tuple containing the property name and the extracted value.
    """
    pass
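
A concrete extractor only has to implement extract; transform and generate_execution_plan are inherited. A minimal sketch that needs no LLM (the WordCountExtractor class and its word_count property name are illustrative, not part of ragas):

import typing as t
from dataclasses import dataclass

from ragas.testset.graph import Node
from ragas.testset.transforms.base import Extractor


@dataclass
class WordCountExtractor(Extractor):
    property_name: str = "word_count"  # assumed name for the stored property

    async def extract(self, node: Node) -> t.Tuple[str, t.Any]:
        # count whitespace-separated tokens in the node's text
        text = node.get_property("page_content") or ""
        return self.property_name, len(text.split())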

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Coroutine]
        A list of coroutines to be executed in parallel.

Source code in ragas/src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_extract(node: Node):
        property_name, property_value = await self.extract(node)
        if node.get_property(property_name) is None:
            node.add_property(property_name, property_value)
        else:
            logger.warning(
                "Property '%s' already exists in node '%.6s'. Skipping!",
                property_name,
                node.id,
            )

    filtered = self.filter(kg)
    return [apply_extract(node) for node in filtered.nodes]

NodeFilter dataclass

NodeFilter(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)())

Bases: BaseGraphTransformation

custom_filter abstractmethod async

custom_filter(node: Node, kg: KnowledgeGraph) -> bool

Abstract method to filter a node based on a prompt.

Parameters:

    node : Node
        The node to be filtered. (required)

Returns:

    bool
        A boolean indicating whether the node should be filtered.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
async def custom_filter(self, node: Node, kg: KnowledgeGraph) -> bool:
    """
    Abstract method to filter a node based on a prompt.

    Parameters
    ----------
    node : Node
        The node to be filtered.

    Returns
    -------
    bool
        A boolean indicating whether the node should be filtered.
    """
    pass
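
A concrete filter implements custom_filter; when the execution plan runs, nodes for which it returns True are removed via kg.remove_node. A sketch that drops very short nodes (the ShortTextFilter class and its min_chars field are hypothetical):

import typing as t
from dataclasses import dataclass

from ragas.testset.graph import KnowledgeGraph, Node
from ragas.testset.transforms.base import NodeFilter


@dataclass
class ShortTextFilter(NodeFilter):
    min_chars: int = 200  # hypothetical threshold

    async def custom_filter(self, node: Node, kg: KnowledgeGraph) -> bool:
        # True means "filter this node", i.e. remove it from the graph
        text = node.get_property("page_content") or ""
        return len(text) < self.min_chars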

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed.

Source code in ragas/src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed
    """

    async def apply_filter(node: Node):
        if await self.custom_filter(node, kg):
            kg.remove_node(node)

    filtered = self.filter(kg)
    return [apply_filter(node) for node in filtered.nodes]

RelationshipBuilder dataclass

RelationshipBuilder(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)())

Bases: BaseGraphTransformation

Abstract base class for building relationships in a KnowledgeGraph.

Methods:

    transform
        Transforms the KnowledgeGraph by building relationships.

transform abstractmethod async

transform(kg: KnowledgeGraph) -> List[Relationship]

Transforms the KnowledgeGraph by building relationships.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Relationship]
        A list of new relationships.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
async def transform(self, kg: KnowledgeGraph) -> t.List[Relationship]:
    """
    Transforms the KnowledgeGraph by building relationships.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[Relationship]
        A list of new relationships.
    """
    pass
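
A concrete builder implements transform and returns the new relationships; generate_execution_plan (below) then appends them to the original graph. A sketch that connects nodes sharing at least one keyphrase (the KeyphraseOverlapBuilder class and the "keyphrase_overlap" relationship type are illustrative; it assumes nodes carry a keyphrases list property, e.g. from KeyphrasesExtractor):

import typing as t
from dataclasses import dataclass

from ragas.testset.graph import KnowledgeGraph, Relationship
from ragas.testset.transforms.base import RelationshipBuilder


@dataclass
class KeyphraseOverlapBuilder(RelationshipBuilder):
    property_name: str = "keyphrases"

    async def transform(self, kg: KnowledgeGraph) -> t.List[Relationship]:
        relationships = []
        # compare every unordered pair of nodes once
        for i, a in enumerate(kg.nodes):
            for b in kg.nodes[i + 1 :]:
                shared = set(a.get_property(self.property_name) or []) & set(
                    b.get_property(self.property_name) or []
                )
                if shared:
                    relationships.append(
                        Relationship(
                            source=a,
                            target=b,
                            type="keyphrase_overlap",
                            properties={"overlap": sorted(shared)},
                        )
                    )
        return relationships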

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Coroutine]
        A list of coroutines to be executed in parallel.

Source code in ragas/src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_build_relationships(
        filtered_kg: KnowledgeGraph, original_kg: KnowledgeGraph
    ):
        relationships = await self.transform(filtered_kg)
        original_kg.relationships.extend(relationships)

    filtered_kg = self.filter(kg)
    return [apply_build_relationships(filtered_kg=filtered_kg, original_kg=kg)]

Splitter dataclass

Splitter(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)())

Bases: BaseGraphTransformation

Abstract base class for transformations that split the nodes of a KnowledgeGraph into smaller chunks.

Methods:

    transform
        Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.
    split
        Abstract method to split a node into smaller chunks.

transform async

transform(kg: KnowledgeGraph) -> Tuple[List[Node], List[Relationship]]

Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    Tuple[List[Node], List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.

Source code in ragas/src/ragas/testset/transforms/base.py
async def transform(
    self, kg: KnowledgeGraph
) -> t.Tuple[t.List[Node], t.List[Relationship]]:
    """
    Transforms the KnowledgeGraph by splitting its nodes into smaller chunks.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.Tuple[t.List[Node], t.List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.
    """
    filtered = self.filter(kg)

    all_nodes = []
    all_relationships = []
    for node in filtered.nodes:
        nodes, relationships = await self.split(node)
        all_nodes.extend(nodes)
        all_relationships.extend(relationships)

    return all_nodes, all_relationships

split abstractmethod async

split(node: Node) -> Tuple[List[Node], List[Relationship]]

Abstract method to split a node into smaller chunks.

Parameters:

    node : Node
        The node to be split. (required)

Returns:

    Tuple[List[Node], List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.

Source code in ragas/src/ragas/testset/transforms/base.py
@abstractmethod
async def split(self, node: Node) -> t.Tuple[t.List[Node], t.List[Relationship]]:
    """
    Abstract method to split a node into smaller chunks.

    Parameters
    ----------
    node : Node
        The node to be split.

    Returns
    -------
    t.Tuple[t.List[Node], t.List[Relationship]]
        A tuple containing a list of new nodes and a list of new relationships.
    """
    pass
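
A concrete splitter implements split, returning both the new chunk nodes and the relationships tying them to their parent. A sketch that splits on blank lines (the ParagraphSplitter class is hypothetical; the "child" relationship type is an assumption modeled on the built-in splitters):

import typing as t
from dataclasses import dataclass

from ragas.testset.graph import Node, NodeType, Relationship
from ragas.testset.transforms.base import Splitter


@dataclass
class ParagraphSplitter(Splitter):
    async def split(self, node: Node) -> t.Tuple[t.List[Node], t.List[Relationship]]:
        text = node.get_property("page_content") or ""
        paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
        chunks = [
            Node(type=NodeType.CHUNK, properties={"page_content": p})
            for p in paragraphs
        ]
        # link each chunk back to the node it was split from
        relationships = [
            Relationship(source=node, target=chunk, type="child") for chunk in chunks
        ]
        return chunks, relationships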

generate_execution_plan

generate_execution_plan(kg: KnowledgeGraph) -> List[Coroutine]

Generates a list of coroutines to be executed in parallel by the Executor.

Parameters:

    kg : KnowledgeGraph
        The knowledge graph to be transformed. (required)

Returns:

    List[Coroutine]
        A list of coroutines to be executed in parallel.

Source code in ragas/src/ragas/testset/transforms/base.py
def generate_execution_plan(self, kg: KnowledgeGraph) -> t.List[t.Coroutine]:
    """
    Generates a list of coroutines to be executed in parallel by the Executor.

    Parameters
    ----------
    kg : KnowledgeGraph
        The knowledge graph to be transformed.

    Returns
    -------
    t.List[t.Coroutine]
        A list of coroutines to be executed in parallel.
    """

    async def apply_split(node: Node):
        nodes, relationships = await self.split(node)
        kg.nodes.extend(nodes)
        kg.relationships.extend(relationships)

    filtered = self.filter(kg)
    return [apply_split(node) for node in filtered.nodes]

Parallel

Parallel(*transformations: BaseGraphTransformation)

A collection of transformations to be applied in parallel.

Examples:

>>> Parallel(HeadlinesExtractor(), SummaryExtractor())
Source code in ragas/src/ragas/testset/transforms/engine.py
def __init__(self, *transformations: BaseGraphTransformation):
    self.transformations = list(transformations)
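
Parallel instances can be mixed into an ordinary transform list: apply_transforms runs list entries one after another, while everything wrapped in a Parallel has its execution plans pooled and run concurrently. A sketch (kg and llm are assumed to be an existing KnowledgeGraph and a configured BaseRagasLLM):

from ragas.testset.transforms import (
    HeadlinesExtractor,
    KeyphrasesExtractor,
    Parallel,
    SummaryExtractor,
    apply_transforms,
)

transforms = [
    HeadlinesExtractor(llm=llm),  # step 1, runs on its own
    Parallel(  # step 2: both extractors run concurrently
        SummaryExtractor(llm=llm),
        KeyphrasesExtractor(llm=llm),
    ),
]
apply_transforms(kg, transforms)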

EmbeddingExtractor dataclass

EmbeddingExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), property_name: str = 'embedding', embed_property_name: str = 'page_content', embedding_model: BaseRagasEmbeddings = embedding_factory())

Bases: Extractor

A class for extracting embeddings from the nodes of a knowledge graph.

Attributes:

    property_name : str
        The name of the property in which to store the embedding.
    embed_property_name : str
        The name of the property containing the text to embed.
    embedding_model : BaseRagasEmbeddings
        The embedding model used to generate the embedding.

extract async

extract(node: Node) -> Tuple[str, Any]

Extracts the embedding for a given node.

Raises:

    ValueError
        If the property to be embedded is not a string.

Source code in ragas/src/ragas/testset/transforms/extractors/embeddings.py
async def extract(self, node: Node) -> t.Tuple[str, t.Any]:
    """
    Extracts the embedding for a given node.

    Raises
    ------
    ValueError
        If the property to be embedded is not a string.
    """
    text = node.get_property(self.embed_property_name)
    if not isinstance(text, str):
        raise ValueError(
            f"node.property('{self.embed_property_name}') must be a string, found '{type(text)}'"
        )
    embedding = await self.embedding_model.embed_text(text)
    return self.property_name, embedding
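
For example, to embed each node's summary into a summary_embedding property (the same pairing default_transforms uses below); embeddings is assumed to be a configured BaseRagasEmbeddings instance:

from ragas.testset.transforms import EmbeddingExtractor, apply_transforms

summary_embedder = EmbeddingExtractor(
    property_name="summary_embedding",  # where the vector is stored
    embed_property_name="summary",      # which property gets embedded
    embedding_model=embeddings,
)
apply_transforms(kg, summary_embedder)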

HeadlinesExtractor dataclass

HeadlinesExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'headlines', prompt: HeadlinesExtractorPrompt = HeadlinesExtractorPrompt(), max_num: int = 5)

Bases: LLMBasedExtractor

Extracts headlines from the given text.

Attributes:

    property_name : str
        The name of the property to extract.
    prompt : HeadlinesExtractorPrompt
        The prompt used for extraction.

KeyphrasesExtractor dataclass

KeyphrasesExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'keyphrases', prompt: KeyphrasesExtractorPrompt = KeyphrasesExtractorPrompt(), max_num: int = 5)

Bases: LLMBasedExtractor

Extracts the most important keyphrases from the given text.

Attributes:

    property_name : str
        The name of the property to extract.
    prompt : KeyphrasesExtractorPrompt
        The prompt used for extraction.

SummaryExtractor dataclass

SummaryExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'summary', prompt: SummaryExtractorPrompt = SummaryExtractorPrompt())

Bases: LLMBasedExtractor

Extracts a summary from the given text.

Attributes:

    property_name : str
        The name of the property to extract.
    prompt : SummaryExtractorPrompt
        The prompt used for extraction.

TitleExtractor dataclass

TitleExtractor(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), llm: BaseRagasLLM = llm_factory(), merge_if_possible: bool = True, max_token_limit: int = 32000, tokenizer: Encoding = DEFAULT_TOKENIZER, property_name: str = 'title', prompt: TitleExtractorPrompt = TitleExtractorPrompt())

Bases: LLMBasedExtractor

Extracts the title from the given text.

Attributes:

    property_name : str
        The name of the property to extract.
    prompt : TitleExtractorPrompt
        The prompt used for extraction.

CustomNodeFilter dataclass

CustomNodeFilter(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), llm: BaseRagasLLM = llm_factory(), scoring_prompt: PydanticPrompt = QuestionPotentialPrompt(), min_score: int = 2, rubrics: Dict[str, str] = (lambda: DEFAULT_RUBRICS)())

Bases: LLMBasedNodeFilter

Returns True if the score is less than min_score.
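
Each node is scored with the LLM against the rubrics, and nodes scoring below min_score are removed from the graph. A usage sketch (llm and kg are assumed to exist):

from ragas.testset.transforms import CustomNodeFilter, apply_transforms

node_filter = CustomNodeFilter(llm=llm, min_score=3)  # drop low-scoring nodes
apply_transforms(kg, node_filter)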

SummaryCosineSimilarityBuilder dataclass

SummaryCosineSimilarityBuilder(name: str = '', filter_nodes: Callable[[Node], bool] = (lambda: default_filter)(), property_name: str = 'summary_embedding', new_property_name: str = 'summary_cosine_similarity', threshold: float = 0.1)

Bases: CosineSimilarityBuilder

filter

filter(kg: KnowledgeGraph) -> KnowledgeGraph

Filters the knowledge graph to only include nodes with a summary embedding.

Source code in ragas/src/ragas/testset/transforms/relationship_builders/cosine.py
def filter(self, kg: KnowledgeGraph) -> KnowledgeGraph:
    """
    Filters the knowledge graph to only include nodes with a summary embedding.
    """
    nodes = []
    for node in kg.nodes:
        if node.type == NodeType.DOCUMENT:
            emb = node.get_property(self.property_name)
            if emb is None:
                raise ValueError(f"Node {node.id} has no {self.property_name}")
            nodes.append(node)
    return KnowledgeGraph(nodes=nodes)

default_transforms

default_transforms(documents: List[Document], llm: BaseRagasLLM, embedding_model: BaseRagasEmbeddings) -> Transforms

Creates and returns a default set of transforms for processing a knowledge graph.

This function defines a series of transformation steps to be applied to a knowledge graph, including extracting summaries, keyphrases, titles, headlines, and embeddings, as well as building similarity relationships between nodes.

Returns:

    Transforms
        A list of transformation steps to be applied to the knowledge graph.

Source code in ragas/src/ragas/testset/transforms/default.py
def default_transforms(
    documents: t.List[LCDocument],
    llm: BaseRagasLLM,
    embedding_model: BaseRagasEmbeddings,
) -> Transforms:
    """
    Creates and returns a default set of transforms for processing a knowledge graph.

    This function defines a series of transformation steps to be applied to a
    knowledge graph, including extracting summaries, keyphrases, titles,
    headlines, and embeddings, as well as building similarity relationships
    between nodes.



    Returns
    -------
    Transforms
        A list of transformation steps to be applied to the knowledge graph.

    """

    def count_doc_length_bins(documents, bin_ranges):
        data = [num_tokens_from_string(doc.page_content) for doc in documents]
        bins = {f"{start}-{end}": 0 for start, end in bin_ranges}

        for num in data:
            for start, end in bin_ranges:
                if start <= num <= end:
                    bins[f"{start}-{end}"] += 1
                    break  # Move to the next number once it’s placed in a bin

        return bins

    def filter_doc_with_num_tokens(node, min_num_tokens=500):
        return (
            node.type == NodeType.DOCUMENT
            and num_tokens_from_string(node.properties["page_content"]) > min_num_tokens
        )

    def filter_docs(node):
        return node.type == NodeType.DOCUMENT

    def filter_chunks(node):
        return node.type == NodeType.CHUNK

    bin_ranges = [(0, 100), (101, 500), (501, 100000)]
    result = count_doc_length_bins(documents, bin_ranges)
    result = {k: v / len(documents) for k, v in result.items()}

    transforms = []

    if result["501-100000"] >= 0.25:
        headline_extractor = HeadlinesExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node)
        )
        splitter = HeadlineSplitter(min_tokens=500)
        summary_extractor = SummaryExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node)
        )

        theme_extractor = ThemesExtractor(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )
        ner_extractor = NERExtractor(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )

        summary_emb_extractor = EmbeddingExtractor(
            embedding_model=embedding_model,
            property_name="summary_embedding",
            embed_property_name="summary",
            filter_nodes=lambda node: filter_doc_with_num_tokens(node),
        )

        cosine_sim_builder = CosineSimilarityBuilder(
            property_name="summary_embedding",
            new_property_name="summary_similarity",
            threshold=0.7,
            filter_nodes=lambda node: filter_doc_with_num_tokens(node),
        )

        ner_overlap_sim = OverlapScoreBuilder(
            threshold=0.01, filter_nodes=lambda node: filter_chunks(node)
        )

        node_filter = CustomNodeFilter(
            llm=llm, filter_nodes=lambda node: filter_chunks(node)
        )
        transforms = [
            headline_extractor,
            splitter,
            summary_extractor,
            node_filter,
            Parallel(summary_emb_extractor, theme_extractor, ner_extractor),
            Parallel(cosine_sim_builder, ner_overlap_sim),
        ]
    elif result["101-500"] >= 0.25:
        summary_extractor = SummaryExtractor(
            llm=llm, filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100)
        )
        summary_emb_extractor = EmbeddingExtractor(
            embedding_model=embedding_model,
            property_name="summary_embedding",
            embed_property_name="summary",
            filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100),
        )

        cosine_sim_builder = CosineSimilarityBuilder(
            property_name="summary_embedding",
            new_property_name="summary_similarity",
            threshold=0.5,
            filter_nodes=lambda node: filter_doc_with_num_tokens(node, 100),
        )

        ner_extractor = NERExtractor(llm=llm)
        ner_overlap_sim = OverlapScoreBuilder(threshold=0.01)
        theme_extractor = ThemesExtractor(
            llm=llm, filter_nodes=lambda node: filter_docs(node)
        )
        node_filter = CustomNodeFilter(llm=llm)

        transforms = [
            summary_extractor,
            node_filter,
            Parallel(summary_emb_extractor, theme_extractor, ner_extractor),
            Parallel(cosine_sim_builder, ner_overlap_sim),
        ]
    else:
        raise ValueError(
            "Documents appears to be too short (ie 100 tokens or less). Please provide longer documents."
        )

    return transforms
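
A typical end-to-end sketch: seed a KnowledgeGraph with document nodes, build the default pipeline, and apply it in place (docs is assumed to be a list of langchain Documents, with llm and embeddings configured ragas wrappers):

from ragas.testset.graph import KnowledgeGraph, Node, NodeType
from ragas.testset.transforms import apply_transforms, default_transforms

kg = KnowledgeGraph(
    nodes=[
        Node(type=NodeType.DOCUMENT, properties={"page_content": doc.page_content})
        for doc in docs
    ]
)
transforms = default_transforms(documents=docs, llm=llm, embedding_model=embeddings)
apply_transforms(kg, transforms)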

apply_transforms

apply_transforms(kg: KnowledgeGraph, transforms: Transforms, run_config: RunConfig = RunConfig(), callbacks: Optional[Callbacks] = None)

Apply a list of transformations to a knowledge graph in place.

Source code in ragas/src/ragas/testset/transforms/engine.py
def apply_transforms(
    kg: KnowledgeGraph,
    transforms: Transforms,
    run_config: RunConfig = RunConfig(),
    callbacks: t.Optional[Callbacks] = None,
):
    """
    Apply a list of transformations to a knowledge graph in place.
    """
    # apply nest_asyncio to fix the event loop issue in jupyter
    apply_nest_asyncio()

    # if single transformation, wrap it in a list
    if isinstance(transforms, BaseGraphTransformation):
        transforms = [transforms]

    # apply the transformations
    # if Sequences, apply each transformation sequentially
    if isinstance(transforms, t.List):
        for transform in transforms:
            asyncio.run(
                run_coroutines(
                    transform.generate_execution_plan(kg),
                    get_desc(transform),
                    run_config.max_workers,
                )
            )
    # if Parallel, collect inside it and run it all
    elif isinstance(transforms, Parallel):
        asyncio.run(
            run_coroutines(
                transforms.generate_execution_plan(kg),
                get_desc(transforms),
                run_config.max_workers,
            )
        )
    else:
        raise ValueError(
            f"Invalid transforms type: {type(transforms)}. Expects a list of BaseGraphTransformations or a Parallel instance."
        )
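
Concurrency is bounded by the RunConfig passed in; a short sketch capping the number of concurrent workers:

from ragas.run_config import RunConfig

apply_transforms(kg, transforms, run_config=RunConfig(max_workers=8))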

rollback_transforms

rollback_transforms(kg: KnowledgeGraph, transforms: Transforms)

Rollback a list of transformations from a knowledge graph.

Note

This is not yet implemented. Please open an issue if you need this feature.

Source code in ragas/src/ragas/testset/transforms/engine.py
def rollback_transforms(kg: KnowledgeGraph, transforms: Transforms):
    """
    Rollback a list of transformations from a knowledge graph.

    Note
    ----
    This is not yet implemented. Please open an issue if you need this feature.
    """
    # this will allow you to roll back the transformations
    raise NotImplementedError