AIModelGenerator
- class src.ai_graph.ai.ai_model_generator.AIModelGenerator(assistant_name: str, client: OpenAiClient, open_ai_model_version: OpenAIModelVersion, temperature: float, max_tokens: int, instructions: str, input_describer: ModelDescriber, retry_wait_min: int = 4, retry_wait_max: int = 128, retry_attempts: int = 20)
Bases: BaseAIModel, Generic
- __init__(assistant_name: str, client: OpenAiClient, open_ai_model_version: OpenAIModelVersion, temperature: float, max_tokens: int, instructions: str, input_describer: ModelDescriber, retry_wait_min: int = 4, retry_wait_max: int = 128, retry_attempts: int = 20)
AIModelGenerator is responsible for generating AI models based on a given configuration. A construction sketch follows the parameter list below.
- Parameters:
assistant_name – The name of the assistant using this model.
client – The OpenAI client used for generating completions.
open_ai_model_version – The version of the OpenAI model being used.
temperature – The temperature setting for the model (affects randomness).
max_tokens – The maximum number of tokens to generate.
instructions – Instructions for the AI model.
input_describer – A describer for generating the model’s input prompts.
retry_wait_min – Minimum wait time between retries in case of failure.
retry_wait_max – Maximum wait time between retries in case of failure.
retry_attempts – Number of retry attempts in case of failure.
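The following is a minimal construction sketch. Only the AIModelGenerator import path is given above; the import paths for OpenAiClient, OpenAIModelVersion, and ModelDescriber, their constructor signatures, and all argument values are assumptions and should be adjusted to the actual project layout.

    from src.ai_graph.ai.ai_model_generator import AIModelGenerator
    # The paths below are hypothetical; replace them with the real modules.
    from src.ai_graph.ai.clients import OpenAiClient            # assumed path
    from src.ai_graph.ai.versions import OpenAIModelVersion     # assumed path
    from src.ai_graph.ai.describers import ModelDescriber       # assumed path

    client = OpenAiClient()                                      # assumed no-arg constructor
    generator = AIModelGenerator(
        assistant_name="summarizer",
        client=client,
        open_ai_model_version=OpenAIModelVersion("gpt-4o"),      # assumed constructor signature
        temperature=0.2,
        max_tokens=1024,
        instructions="Summarize the input into the requested output model.",
        input_describer=ModelDescriber(),                        # assumed no-arg constructor
        retry_wait_min=4,
        retry_wait_max=128,
        retry_attempts=20,
    )

The retry settings mirror the documented defaults; passing them explicitly is only shown here for illustration.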
- async get_parsed_completion(input_instance: BaseModel, output_type: type[BaseModel], *args, **kwargs) → OM
Returns the parsed completion result based on the input instance. A call sketch follows the return description below.
- Parameters:
input_instance – The input data to generate the completion.
output_type – The expected output type for the completion.
- Returns:
The generated output object.
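A call sketch for get_parsed_completion, assuming `generator` is the instance built in the previous sketch and that input and output types are Pydantic models defined by the caller (ArticleInput and Summary below are hypothetical). Since the method is a coroutine, it is driven here with asyncio.run.

    import asyncio
    from pydantic import BaseModel

    class ArticleInput(BaseModel):        # hypothetical input model
        text: str

    class Summary(BaseModel):             # hypothetical output model
        title: str
        bullet_points: list[str]

    async def main() -> None:
        # `generator` is the AIModelGenerator from the construction sketch above.
        summary = await generator.get_parsed_completion(
            input_instance=ArticleInput(text="..."),
            output_type=Summary,
        )
        print(summary)

    asyncio.run(main())

The returned object is an instance of the class passed as output_type, corresponding to the OM type parameter in the signature.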