ai.analysis.run package

Submodules

ai.analysis.run.abstract_analyzer module

class ai.analysis.run.abstract_analyzer.AbstractAnalyzer[source]

Bases: ABC

Base class for analyzers that produce cost analysis results.

abstractmethod get_cost_analysis() → AnalysisResult[source]

Perform cost analysis and return the result.

Returns:

The result of the cost analysis.

Return type:

AnalysisResult
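
A concrete analyzer only needs to implement get_cost_analysis(). A minimal sketch, using stand-in definitions rather than the package's own AbstractAnalyzer and AnalysisResult:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Stand-ins for the package types, for illustration only.
@dataclass
class AnalysisResult:
    prompt_tokens: int
    completion_tokens: int

class AbstractAnalyzer(ABC):
    @abstractmethod
    def get_cost_analysis(self) -> AnalysisResult:
        """Perform cost analysis and return the result."""

# A hypothetical concrete analyzer that returns fixed counts.
class FixedAnalyzer(AbstractAnalyzer):
    def get_cost_analysis(self) -> AnalysisResult:
        return AnalysisResult(prompt_tokens=10, completion_tokens=5)

result = FixedAnalyzer().get_cost_analysis()
```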

ai.analysis.run.analysis_result module

exception ai.analysis.run.analysis_result.AssistantsDontMatchError[source]

Bases: Exception

Raised when two AnalysisResult instances have conflicting assistant values.

class ai.analysis.run.analysis_result.AnalysisResult(model: str | None, assistant: Assistant | None, prompt_tokens: int, prompts_cost: Money, completion_tokens: int, completions_cost: Money)[source]

Bases: object

Contains cost analysis data for a single assistant run.

model

The name of the model used.

Type:

str | None

assistant

The assistant instance used.

Type:

Assistant | None

prompt_tokens

Number of tokens in the prompt.

Type:

int

prompts_cost

Cost incurred by the prompt tokens.

Type:

Money

completion_tokens

Number of tokens in the completion.

Type:

int

completions_cost

Cost incurred by the completion tokens.

Type:

Money

model: str | None
assistant: Assistant | None
prompt_tokens: int
prompts_cost: Money
completion_tokens: int
completions_cost: Money
classmethod empty() → Self[source]

Create an empty AnalysisResult with zeroed costs and token counts.

Returns:

An instance with no assistant, no model, and zero costs/tokens.

Return type:

AnalysisResult
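
empty() gives a neutral starting point when accumulating results. A minimal sketch of its semantics, with float standing in for Money and a plain object for the assistant field:

```python
from dataclasses import dataclass
from typing import Optional

# Reduced stand-in for AnalysisResult; float replaces Money here.
@dataclass(frozen=True)
class AnalysisResult:
    model: Optional[str]
    assistant: Optional[object]
    prompt_tokens: int
    prompts_cost: float
    completion_tokens: int
    completions_cost: float

    @classmethod
    def empty(cls) -> "AnalysisResult":
        # No assistant, no model, zero costs and token counts.
        return cls(model=None, assistant=None, prompt_tokens=0,
                   prompts_cost=0.0, completion_tokens=0, completions_cost=0.0)

base = AnalysisResult.empty()
```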

update(assistant: Assistant = None, prompt_tokens: int = None, model: Literal['gpt-4o', 'gpt-4o-2024-05-13', 'gpt-4o-mini', 'gpt-4o-mini-2024-07-18', 'gpt-4-turbo', 'gpt-4-turbo-2024-04-09', 'gpt-4-0125-preview', 'gpt-4-turbo-preview', 'gpt-4-1106-preview', 'gpt-4-vision-preview', 'gpt-4', 'gpt-4-0314', 'gpt-4-0613', 'gpt-4-32k', 'gpt-4-32k-0314', 'gpt-4-32k-0613', 'gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0301', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-1106', 'gpt-3.5-turbo-0125', 'gpt-3.5-turbo-16k-0613'] = None, prompts_cost: Money = None, completion_tokens: int = None, completions_cost: Money = None) → Self[source]

Return a new AnalysisResult with updated fields.

Any parameter left as None will retain its current value.

Parameters:
  • assistant (Assistant, optional) – New assistant instance.

  • prompt_tokens (int, optional) – New prompt token count.

  • model (ChatModel, optional) – New model name.

  • prompts_cost (Money, optional) – New prompts cost.

  • completion_tokens (int, optional) – New completion token count.

  • completions_cost (Money, optional) – New completions cost.

Returns:

A new instance reflecting the updates.

Return type:

AnalysisResult
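
update() returns a new instance; fields passed as None keep their current value. A sketch of those semantics over a reduced two-field version of the class (float standing in for Money):

```python
from dataclasses import dataclass, replace
from typing import Optional

# Reduced stand-in for AnalysisResult, for illustration only.
@dataclass(frozen=True)
class AnalysisResult:
    prompt_tokens: int
    prompts_cost: float

    def update(self, prompt_tokens: Optional[int] = None,
               prompts_cost: Optional[float] = None) -> "AnalysisResult":
        # Keep only the fields that were actually supplied.
        changes = {name: value for name, value in
                   {"prompt_tokens": prompt_tokens,
                    "prompts_cost": prompts_cost}.items()
                   if value is not None}
        return replace(self, **changes)

r = AnalysisResult(prompt_tokens=100, prompts_cost=0.05)
r2 = r.update(prompt_tokens=150)  # prompts_cost retains its current value
```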

property total_cost: Money

Compute the total cost (prompts + completions).

Returns:

Sum of prompts_cost and completions_cost.

Return type:

Money

convert_to(currency: Currency) → Self[source]

Convert both prompt and completion costs to a target currency.

Parameters:

currency (Currency) – The currency to convert all costs into.

Returns:

New instance with costs converted.

Return type:

AnalysisResult
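
The conversion applies one exchange rate to both costs. An illustrative sketch; the rate table and currency codes below are assumptions, not part of the package:

```python
# Hypothetical exchange-rate table; the real conversion lives in Money.
RATES = {("USD", "EUR"): 0.9}

def convert(amount: float, source: str, target: str) -> float:
    """Convert an amount between currencies using the table above."""
    return amount if source == target else amount * RATES[(source, target)]

# Both the prompts cost and the completions cost are converted together.
prompts_eur = convert(0.10, "USD", "EUR")
completions_eur = convert(0.20, "USD", "EUR")
```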

get_share(total_cost: Money) → float[source]

Calculate this result’s share of a total cost.

Parameters:

total_cost (Money) – The total cost against which to compare.

Returns:

Fraction of total_cost represented by this result.

Return type:

float

get_cost_per_thousand_tickets(number_tickets: int) → Money[source]

Scale total cost to a per-1000-tickets basis.

Parameters:

number_tickets (int) – Number of tickets over which the total_cost was incurred.

Returns:

Total cost scaled to 1000 tickets.

Return type:

Money
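
The derived metrics above reduce to simple arithmetic, sketched here with float standing in for Money:

```python
# total_cost: prompts plus completions.
prompts_cost = 0.03
completions_cost = 0.06
total_cost = prompts_cost + completions_cost

# get_share: this result's fraction of an overall total.
overall_total = 0.30
share = total_cost / overall_total

# get_cost_per_thousand_tickets: scale to a per-1000-tickets basis.
number_tickets = 450
cost_per_thousand = total_cost / number_tickets * 1000
```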

__init__(model: str | None, assistant: Assistant | None, prompt_tokens: int, prompts_cost: Money, completion_tokens: int, completions_cost: Money) → None

ai.analysis.run.analysis_result_test module

ai.analysis.run.analysis_result_test.some_assistant()[source]
ai.analysis.run.analysis_result_test.test_total_cost()[source]
ai.analysis.run.analysis_result_test.test_get_share()[source]
ai.analysis.run.analysis_result_test.test_get_cost_per_thousand_tickets()[source]
ai.analysis.run.analysis_result_test.test_addition(some_assistant)[source]
ai.analysis.run.analysis_result_test.test_sum_assistant_analysis(some_assistant)[source]

ai.analysis.run.assistant_run module

exception ai.analysis.run.assistant_run.AssistantsDontMatchError[source]

Bases: Exception

Raised when two assistants in combined analyses do not match.

class ai.analysis.run.assistant_run.AssistantRun(completion_result: ChatCompletion, assistant: Assistant)[source]

Bases: AbstractAnalyzer

Analyze a single assistant run to compute token usage costs.

_completion_result

The raw OpenAI chat completion.

Type:

ChatCompletion

assistant

The assistant used for this run.

Type:

Assistant

_model_price_calculator

Computes per-token costs.

Type:

TokenPriceCalculator

_logger

Logger for warnings and debug messages.

Type:

logging.Logger

__init__(completion_result: ChatCompletion, assistant: Assistant)[source]

Initialize the analysis for an assistant run.

Parameters:
  • completion_result (ChatCompletion) – The result of the chat completion.

  • assistant (Assistant) – The assistant instance used.

Raises:

ValueError – If completion_result is not a ChatCompletion or has no model.

get_cost_analysis() → AnalysisResult[source]

Generate cost analysis from the completion result.

Returns:

An AnalysisResult with the run’s token counts and cost breakdown.

Return type:

AnalysisResult

get_assistant_updated(new_assistant: Assistant) → AssistantRun[source]

Return a copy of this run using a different assistant.

Parameters:

new_assistant (Assistant) – The assistant to replace in the new run.

Returns:

New instance with the same completion result but updated assistant.

Return type:

AssistantRun
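
Conceptually, get_cost_analysis() reads token counts from the completion’s usage block and prices them per token. A self-contained sketch with stand-in types and hypothetical per-million-token prices (the real figures come from TokenPriceCalculator):

```python
from dataclasses import dataclass

# Stand-ins mirroring the usage block of an OpenAI ChatCompletion;
# the real classes live in the openai package.
@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int

@dataclass
class ChatCompletion:
    model: str
    usage: Usage

# Hypothetical prices in USD per 1M tokens (prompt, completion).
PRICES = {"gpt-4o-mini": (0.15, 0.60)}

def cost_analysis(completion: ChatCompletion) -> tuple[float, float]:
    """Return (prompts_cost, completions_cost) for a completion."""
    prompt_price, completion_price = PRICES[completion.model]
    prompts_cost = completion.usage.prompt_tokens / 1_000_000 * prompt_price
    completions_cost = completion.usage.completion_tokens / 1_000_000 * completion_price
    return prompts_cost, completions_cost

result = cost_analysis(ChatCompletion("gpt-4o-mini", Usage(1_000_000, 500_000)))
```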

ai.analysis.run.assistant_runs module

class ai.analysis.run.assistant_runs.AssistantRunsAnalyzer(runs: list[AssistantRun])[source]

Bases: AbstractAnalyzer

Analyzer that aggregates cost analyses for multiple assistant runs.

_assistant_runs

List of AssistantRun instances to analyze.

Type:

list[AssistantRun]

__init__(runs: list[AssistantRun])[source]

Initialize with a list of assistant runs.

Parameters:

runs (list[AssistantRun]) – AssistantRun instances to aggregate.

get_cost_analysis() → AnalysisResult[source]

Compute the combined cost analysis from all assistant runs.

Returns:

Aggregated cost analysis for all runs.

Return type:

AnalysisResult
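
Aggregation amounts to summing token counts and costs across all runs. A sketch with plain tuples standing in for each run’s AnalysisResult:

```python
# (total tokens, cost) per run; the values are illustrative.
per_run = [(120, 0.010), (80, 0.008), (200, 0.015)]

total_tokens = sum(tokens for tokens, _ in per_run)
total_cost = sum(cost for _, cost in per_run)
```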

Module contents