Python API Reference
This page provides a detailed reference for using AIDE ML as a Python library. The main entry point is the aide.Experiment class.
aide.Experiment
The Experiment class is used to configure and run a complete agentic search process programmatically.
class aide.Experiment:
    def __init__(self, data_dir: str, goal: str, eval: str | None = None):
        ...
    def run(self, steps: int) -> Solution:
        ...
__init__(self, data_dir, goal, eval)
Initializes a new experiment run.
Parameters:
data_dir (str): The path to the directory containing the dataset files. This can be a relative or absolute path. The agent will create a sandboxed workspace based on this data.
goal (str): A high-level, natural language description of the task's objective.
eval (str | None, optional): A more specific description of the evaluation metric the agent should aim to optimize. If None, the agent will infer a suitable metric from the goal.
Example:
import aide
exp = aide.Experiment(
    data_dir="./aide/example_tasks/house_prices",
    goal="Predict the sales price for each house",
    eval="Use RMSE between log-prices",
)
run(self, steps: int) -> Solution
Starts the agentic tree search process for the configured experiment.
Parameters:
steps (int): The total number of iterative steps (improvements or debugs) the agent should perform.
Returns:
Solution: A dataclass object containing the best solution found during the run.
Example:
# Continuing from the previous example
best_solution = exp.run(steps=10)
print(f"Final Metric: {best_solution.valid_metric}")
print(f"Final Code:\n{best_solution.code}")
aide.Solution
A simple dataclass that holds the final result returned by the Experiment.run() method.
from dataclasses import dataclass

@dataclass
class Solution:
    code: str
    valid_metric: float
Attributes:
code (str): The Python code of the best-performing script.
valid_metric (float): The validation score achieved by the best script.
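A minimal sketch of consuming a Solution, for example to persist the best script to disk. The dataclass is redefined locally here so the snippet is self-contained; in practice the object comes from Experiment.run(), and the metric value and best_solution.py filename below are purely illustrative assumptions:

```python
from dataclasses import dataclass
from pathlib import Path

# Local mirror of aide.Solution (normally obtained from Experiment.run()).
@dataclass
class Solution:
    code: str
    valid_metric: float

# Hypothetical result, standing in for: best = exp.run(steps=10)
best = Solution(code="print('hello world')\n", valid_metric=0.1423)

# Persist the winning script so it can be inspected or rerun later.
Path("best_solution.py").write_text(best.code)
print(f"Validation metric: {best.valid_metric:.4f}")
```

Because Solution is a plain dataclass, its attributes can be accessed, serialized, or compared directly without any AIDE-specific tooling.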