# Base

## BaseRagasLLM `dataclass`

Bases: `ABC`
### get_temperature

### generate `async`

```python
generate(
    prompt: PromptValue,
    n: int = 1,
    temperature: Optional[float] = None,
    stop: Optional[List[str]] = None,
    callbacks: Callbacks = None,
    is_async: bool = True,
) -> LLMResult
```

Generate text using the given event loop.

Source code in `src/ragas/llms/base.py`
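The sync/async dispatch that `generate` performs can be sketched as follows. This is a hypothetical stand-in, not the real `BaseRagasLLM`: the class and method names mirror the interface above, but the string-based prompts, the `EchoLLM` subclass, and the temperature heuristic are illustrative assumptions.

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class SketchRagasLLM(ABC):
    """Illustrative stand-in for BaseRagasLLM."""

    @abstractmethod
    def generate_text(self, prompt: str, n: int = 1, temperature: float = 1e-8) -> str:
        ...

    @abstractmethod
    async def agenerate_text(self, prompt: str, n: int = 1, temperature: float = 1e-8) -> str:
        ...

    def get_temperature(self, n: int) -> float:
        # Assumed heuristic: raise temperature when several completions are requested,
        # so that the n samples are not all identical.
        return 0.3 if n > 1 else 1e-8

    async def generate(
        self, prompt: str, n: int = 1, temperature=None, is_async: bool = True
    ) -> str:
        if temperature is None:
            temperature = self.get_temperature(n)
        if is_async:
            return await self.agenerate_text(prompt, n=n, temperature=temperature)
        # Run the blocking implementation on the event loop's default executor
        # so a sync LLM client does not stall other coroutines.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, lambda: self.generate_text(prompt, n=n, temperature=temperature)
        )


class EchoLLM(SketchRagasLLM):
    def generate_text(self, prompt, n=1, temperature=1e-8):
        return f"echo: {prompt}"

    async def agenerate_text(self, prompt, n=1, temperature=1e-8):
        return f"echo: {prompt}"


print(asyncio.run(EchoLLM().generate("hi")))  # echo: hi
```

Either path returns an awaitable result, which is why callers can always `await generate(...)` regardless of whether the underlying client is sync or async.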
## LangchainLLMWrapper

```python
LangchainLLMWrapper(
    langchain_llm: BaseLanguageModel,
    run_config: Optional[RunConfig] = None,
)
```

Bases: `BaseRagasLLM`

A simple wrapper class for Ragas LLMs based on Langchain's `BaseLanguageModel` interface. It implements two functions:

- `generate_text`: generates text from a given `PromptValue`
- `agenerate_text`: generates text from a given `PromptValue` asynchronously

Source code in `src/ragas/llms/base.py`
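The delegation pattern this wrapper follows can be sketched with a fake model. `FakeLangchainLLM` and `SketchLangchainWrapper` are illustrative stand-ins; the real wrapper delegates to a Langchain `BaseLanguageModel` and works with `PromptValue`/`LLMResult` objects rather than plain strings.

```python
import asyncio


class FakeLangchainLLM:
    """Stand-in for a Langchain BaseLanguageModel."""

    def generate_prompt(self, prompts):
        return [f"completion for: {p}" for p in prompts]

    async def agenerate_prompt(self, prompts):
        return [f"completion for: {p}" for p in prompts]


class SketchLangchainWrapper:
    def __init__(self, langchain_llm):
        self.langchain_llm = langchain_llm

    def generate_text(self, prompt: str) -> str:
        # Delegate to the wrapped model's sync entry point.
        return self.langchain_llm.generate_prompt([prompt])[0]

    async def agenerate_text(self, prompt: str) -> str:
        # Delegate to the wrapped model's async entry point.
        results = await self.langchain_llm.agenerate_prompt([prompt])
        return results[0]


wrapper = SketchLangchainWrapper(FakeLangchainLLM())
print(wrapper.generate_text("What is RAG?"))  # completion for: What is RAG?
```

Because the wrapper only forwards calls, any model speaking the Langchain interface can be plugged into Ragas without changes to metric code.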
## LlamaIndexLLMWrapper

```python
LlamaIndexLLMWrapper(
    llm: BaseLLM, run_config: Optional[RunConfig] = None
)
```

### is_multiple_completion_supported

Return whether the given LLM supports n-completion.
## llm_factory

```python
llm_factory(
    model: str = "gpt-4o-mini",
    run_config: Optional[RunConfig] = None,
    default_headers: Optional[Dict[str, str]] = None,
    base_url: Optional[str] = None,
) -> BaseRagasLLM
```

Create and return a `BaseRagasLLM` instance, used for running the default LLMs in Ragas (OpenAI).
Parameters:

Name | Type | Description | Default
---|---|---|---
`model` | `str` | The name of the model to use. | `'gpt-4o-mini'`
`run_config` | `RunConfig`, optional | Configuration for the run. | `None`
`default_headers` | `Dict[str, str]`, optional | Default headers to be used in API requests. | `None`
`base_url` | `str`, optional | Base URL for the API. | `None`
Returns:

Type | Description
---|---
`BaseRagasLLM` | An instance of `BaseRagasLLM` configured with the specified parameters.
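A minimal sketch of how such a factory forwards its keyword arguments to an underlying client. `OpenAIStub` and `sketch_llm_factory` are hypothetical names for illustration; the real `llm_factory` builds a wrapped OpenAI chat model and requires valid credentials.

```python
from typing import Dict, Optional


class OpenAIStub:
    """Stand-in for a real OpenAI client wrapper."""

    def __init__(self, model, default_headers=None, base_url=None):
        self.model = model
        self.default_headers = default_headers or {}
        # base_url lets the factory target a proxy or a compatible endpoint.
        self.base_url = base_url or "https://api.openai.com/v1"


def sketch_llm_factory(
    model: str = "gpt-4o-mini",
    default_headers: Optional[Dict[str, str]] = None,
    base_url: Optional[str] = None,
) -> OpenAIStub:
    # All keyword arguments are forwarded unchanged to the client.
    return OpenAIStub(model, default_headers=default_headers, base_url=base_url)


llm = sketch_llm_factory(base_url="https://example.com/v1")
print(llm.model, llm.base_url)  # gpt-4o-mini https://example.com/v1
```

Passing `default_headers` and `base_url` is how the factory supports gateways and self-hosted OpenAI-compatible servers without changing calling code.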