core module

class core.DummyEnv(task: str | None = None, end_immediately: bool = True)[source]

Bases: Environment[DummyEnvState]

Simple Environment with basic functionality and no network usage.

State

alias of DummyEnvState

export_frame() Frame[source]

Export a snapshot of the environment as a Frame for visualization or debugging.

If you are not sure what to put in the Frame, just give it the entire state. See the Frame class itself for more information.

classmethod from_task(task: str) DummyEnv[source]

Create an environment from a task description.

A task is meant to be closer to a user prompt, like what you would expect when calling an LLM. This is how the environment should be used after training and in deployment. We don't take a config here, because the default environment config should be general for arbitrary tasks.

For example, with GSM8k/calculator: “What is 18 * (number of legs on a cat) / moons of mars?”

async reset() tuple[list[Message], list[Tool]][source]

Reset the environment and collect initial observation(s).

Possible observations could be instructions on how tools are related, or the goal of the environment.

Returns:

Two-tuple of initial observations and tools.

async step(action: ToolRequestMessage) tuple[list[Message], float, bool, bool][source]

Take a step in the environment.

Parameters:

action – Action to take.

Returns:

Four-tuple of new observations, instantaneous reward for this action, a flag symbolizing if the episode is done, and a flag symbolizing if the episode was truncated (e.g. via early stopping).
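
For example, a minimal rollout against DummyEnv might look like the sketch below. The import path and the single string argument passed to the first tool are assumptions for illustration, not part of this reference.

import asyncio

from core import DummyEnv, ToolCall, ToolRequestMessage  # import path is an assumption

async def rollout() -> None:
    env = DummyEnv()
    obs, tools = await env.reset()  # initial observations and available tools
    # Invoke the first advertised tool; the argument shape is assumed here.
    action = ToolRequestMessage(tool_calls=[ToolCall.from_tool(tools[0], "hello world")])
    new_obs, reward, done, truncated = await env.step(action)

asyncio.run(rollout())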

class core.DummyEnvState(*, messages: list[Message], reward: float = 0, done: bool = False)[source]

Bases: BaseModel

done: bool
messages: list[Message]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'done': FieldInfo(annotation=bool, required=False, default=False), 'messages': FieldInfo(annotation=list[Message], required=True), 'reward': FieldInfo(annotation=float, required=False, default=0)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

reward: float
class core.DummyTaskDataset[source]

Bases: TaskDataset[DummyEnv]

A dummy task of infinite DummyEnvs.

get_new_env() DummyEnv[source]

Get an env from a non-indexable dataset.

class core.Environment[source]

Bases: ABC, Generic[TEnvState]

An environment is a stateful place where agents use tools and make observations.

Tools are housed in the environment because they can interact with the environment.

Environments (and their contained tools) are not trainable.

classmethod available() set[str][source]

See the list of environment classes available for from_name.

This is not exhaustive, because some environments may be importable even if not listed here, so you should just try calling from_name. This is more for logging/debugging purposes.

async close() None[source]

Shutdown the environment.

If this is unimplemented, __del__ will manage cleanup.

async exec_tool_calls(message: ToolRequestMessage, ordered: bool = False, handle_tool_exc: bool = False, **function_kwargs) list[ToolResponseMessage][source]

Execute an ordered list of tool calls.

Parameters:
  • message – ToolRequestMessage containing the tool calls.

  • ordered – Opt-in flag for forcing sequential execution (according to order in the above message), otherwise tool calls are made concurrently.

  • handle_tool_exc – Opt-in flag to suppress Exceptions and return them as a ToolResponseMessage.

  • **function_kwargs – Keyword arguments to pass to all tool functions.

Returns:

Ordered list of ToolResponseMessages; the order matches the order of tool calls in the input message.
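
As an illustrative sketch, a step implementation often delegates to this method. The fragment below assumes it lives inside a concrete Environment subclass, and the state fields it references are assumptions, not part of this reference.

# Inside a concrete Environment subclass (illustrative fragment only):
async def step(self, action: ToolRequestMessage) -> tuple[list[Message], float, bool, bool]:
    # Run the requested tool calls concurrently; with handle_tool_exc=True,
    # tool exceptions are returned as ToolResponseMessages instead of raised.
    obs = await self.exec_tool_calls(action, handle_tool_exc=True)
    return obs, self.state.reward, self.state.done, False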

export_frame() Frame[source]

Export a snapshot of the environment as a Frame for visualization or debugging.

If you are not sure what to put in the Frame, just give it the entire state. See the Frame class itself for more information.

filter_invalid_tool_calls(message: ToolRequestMessage) tuple[ToolRequestMessage, ToolRequestMessage][source]

Split a list of tool calls into valid and invalid subsets.

Parameters:

message – Tool request message containing tool calls.

Returns:

Two-tuple of ToolRequestMessage containing valid messages and ToolRequestMessage containing invalid messages.

classmethod from_name(name: str, task: str | None = None, **env_kwargs) Self[source]

Create an environment from the name of the class. Call Environment.available() to see the list.
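
A brief sketch of discovery plus construction; the registered name "dummy" and the import path are assumptions for illustration:

from core import Environment  # import path is an assumption

print(Environment.available())        # registered environment names, e.g. {"dummy", ...}
env = Environment.from_name("dummy")  # extra env_kwargs are forwarded to the environment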

classmethod from_task(task: str) Self[source]

Create an environment from a task description.

A task is meant to be closer to a user prompt, like what you would expect when calling an LLM. This is how the environment should be used after training and in deployment. We don't take a config here, because the default environment config should be general for arbitrary tasks.

For example, with GSM8k/calculator: “What is 18 * (number of legs on a cat) / moons of mars?”

abstract async reset() tuple[list[Message], list[Tool]][source]

Reset the environment and collect initial observation(s).

Possible observations could be instructions on how tools are related, or the goal of the environment.

Returns:

Two-tuple of initial observations and tools.

state: TEnvState
abstract async step(action: ToolRequestMessage) tuple[list[Message], float, bool, bool][source]

Take a step in the environment.

Parameters:

action – Action to take.

Returns:

Four-tuple of new observations, instantaneous reward for this action, a flag symbolizing if the episode is done, and a flag symbolizing if the episode was truncated (e.g. via early stopping).

tools: list[Tool]
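
A minimal concrete subclass only needs to implement reset and step. The sketch below is illustrative: the counter state, tool, and reward scheme are invented for the example and are not part of this reference.

from core import Environment, Message, Tool, ToolRequestMessage

class CounterState:
    def __init__(self) -> None:
        self.count = 0

class CounterEnv(Environment[CounterState]):
    async def reset(self) -> tuple[list[Message], list[Tool]]:
        self.state = CounterState()

        def increment() -> str:
            """Increment the counter and return its new value."""
            self.state.count += 1
            return str(self.state.count)

        self.tools = [Tool.from_function(increment)]
        return [Message(content="Increment the counter to 3.")], self.tools

    async def step(
        self, action: ToolRequestMessage
    ) -> tuple[list[Message], float, bool, bool]:
        obs = await self.exec_tool_calls(action)
        done = self.state.count >= 3
        return obs, float(done), done, False
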
class core.EnvironmentClient(reset_endpoint_url: str, step_endpoint_url: str, request_params: QueryParams | Mapping[str, str | int | float | bool | None | Sequence[str | int | float | bool | None]] | List[Tuple[str, str | int | float | bool | None]] | Tuple[Tuple[str, str | int | float | bool | None], ...] | str | bytes | None = None, request_headers: Headers | Mapping[str, str] | Mapping[bytes, bytes] | Sequence[Tuple[str, str]] | Sequence[Tuple[bytes, bytes]] | None = None, request_timeout: float | None = None)[source]

Bases: Environment[TEnvState], ABC

async reset() tuple[list[Message], list[Tool]][source]

Reset the environment and collect initial observation(s).

Possible observations could be instructions on how tools are related, or the goal of the environment.

Returns:

Two-tuple of initial observations and tools.

async step(action: ToolRequestMessage) tuple[list[Message], float, bool, bool][source]

Take a step in the environment.

Parameters:

action – Action to take.

Returns:

Four-tuple of new observations, instantaneous reward for this action, a flag symbolizing if the episode is done, and a flag symbolizing if the episode was truncated (e.g. via early stopping).

class core.Frame(*, deepcopy: bool = True, state: Annotated[dict | list | int | float | str | bool | BaseModel | None, WrapSerializer(func=_custom_serializer, return_type=PydanticUndefined, when_used=always)] = None, info: Annotated[dict | list | int | float | str | bool | BaseModel | None, WrapSerializer(func=_custom_serializer, return_type=PydanticUndefined, when_used=always)] = None)[source]

Bases: BaseModel

A frame is a snapshot at a given timestep. The name comes from video frame.

deepcopy: bool
info: Annotated[Serializable | None, WrapSerializer(_custom_serializer)]
classmethod make_deepcopy(v: dict | list | int | float | str | bool | BaseModel, info: ValidationInfo) dict | list | int | float | str | bool | BaseModel[source]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'deepcopy': FieldInfo(annotation=bool, required=False, default=True, description="Whether to deepcopy the state and info fields. Disable if you're sure they're immutable or desire mutability."), 'info': FieldInfo(annotation=Union[dict, list, int, float, str, bool, BaseModel, NoneType], required=False, default=None, description="Optional metadata that doesn't vary with state.", metadata=[WrapSerializer(func=<staticmethod(<function Frame._custom_serializer>)>, return_type=PydanticUndefined, when_used='always')]), 'state': FieldInfo(annotation=Union[dict, list, int, float, str, bool, BaseModel, NoneType], required=False, default=None, description='Either entire (or a subset of) the current state. Leave as default of None if state is irrelevant.', metadata=[WrapSerializer(func=<staticmethod(<function Frame._custom_serializer>)>, return_type=PydanticUndefined, when_used='always')])}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

state: Annotated[Serializable | None, WrapSerializer(_custom_serializer)]
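
For example, an environment's export_frame might simply wrap its state, as in the fragment below (the info payload shown is illustrative):

def export_frame(self) -> Frame:
    # Deepcopying (the default) keeps the snapshot stable even if the
    # environment's state is mutated after this call.
    return Frame(state=self.state, info={"tool_count": len(self.tools)})
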
class core.FunctionInfo(*, name: str, description: str, parameters: Parameters)[source]

Bases: BaseModel

Function-level (not arg-level) information.

Matches LiteLLM’s desired “tools” schema, and resembles inspect.Signature.

describe_json() str[source]
describe_str() str[source]
describe_xml() str[source]
description: str
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'description': FieldInfo(annotation=str, required=True), 'name': FieldInfo(annotation=str, required=True), 'parameters': FieldInfo(annotation=Parameters, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

name: str
parameters: Parameters
exception core.MalformedMessageError[source]

Bases: ValueError

Error to throw if some aspect of a Message variant is malformed.

classmethod common_retryable_errors_log_filter(record: LogRecord) bool[source]

Filter common parsing failures that are not worth looking into out of the logs.

Returns:

False if the LogRecord should be filtered out, otherwise True to keep it.

class core.Message(*, role: str = 'user', content: str | None = None, content_is_json_str: bool = False, info: dict | None = None)[source]

Bases: BaseModel

DEFAULT_ROLE: ClassVar[str] = 'user'
VALID_ROLES: ClassVar[set[str]] = {'assistant', 'function', 'system', 'tool', 'user'}
append_text(text: str, delim: str = '\n', inplace: bool = True) Message[source]

Append text to the content.

Parameters:
  • text – The text to append.

  • delim – The delimiter to use when concatenating strings.

  • inplace – Whether to modify the message in place.

Returns:

The modified message. Note that the original message is modified and returned if inplace=True; otherwise, a new message is returned.

classmethod check_role(v: str) str[source]
content: str | None
content_is_json_str: bool
classmethod create_message(role: str = 'user', text: str | None = None, image: np.ndarray | None = None) Self[source]
info: dict | None
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_dump(*args, **kwargs) dict[source]

Usage docs: https://docs.pydantic.dev/2.9/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:
  • mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.

  • include – A set of fields to include in the output.

  • exclude – A set of fields to exclude from the output.

  • context – Additional context to pass to the serializer.

  • by_alias – Whether to use the field’s alias in the dictionary key if defined.

  • exclude_unset – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults – Whether to exclude fields that are set to their default value.

  • exclude_none – Whether to exclude fields that have a value of None.

  • round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].

  • serialize_as_any – Whether to serialize fields with duck-typing serialization behavior.

Returns:

A dictionary representation of the model.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'content': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Optional message content. Can be a string or a dictionary or None. If a dictionary (for multimodal content), it will be JSON serialized. None is a sentinel value for the absence of content (different than empty string).'), 'content_is_json_str': FieldInfo(annotation=bool, required=False, default=False, description='Whether the content is JSON-serialized (e.g., for multiple modalities).', exclude=True, repr=False), 'info': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None, description='Optional metadata about the message.', exclude=True, repr=False), 'role': FieldInfo(annotation=str, required=False, default='user', description="Message role matching OpenAI's role conventions.")}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

role: str
classmethod serialize_content(data)[source]
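
A brief sketch of common construction patterns (the content strings and import path are illustrative):

from core import Message

msg = Message(content="What is 18 * 2?")             # role defaults to "user"
sys = Message(role="system", content="Be concise.")
longer = msg.append_text("Show your work.", inplace=False)  # returns a new Message, leaving msg unchanged
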
class core.Parameters(*, type: Literal['object'] = 'object', properties: Annotated[dict[str, dict[str, Any]], PlainSerializer(func=dict_serialize_exclude_none, return_type=PydanticUndefined, when_used=always)], required: list[str], **extra_data: Any)[source]

Bases: BaseModel

Matches LiteLLM’s desired “tools” schema.

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'properties': FieldInfo(annotation=dict[str, dict[str, Any]], required=True, metadata=[PlainSerializer(func=<function dict_serialize_exclude_none>, return_type=PydanticUndefined, when_used='always')]), 'required': FieldInfo(annotation=list[str], required=True), 'type': FieldInfo(annotation=Literal['object'], required=False, default='object')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

properties: Annotated[dict[str, dict[str, Any]], PlainSerializer(func=dict_serialize_exclude_none, return_type=PydanticUndefined, when_used=always)]
required: list[str]
type: Literal['object']
class core.Renderer(*, id: UUID | int | str = None, frames: list[Frame] = [], prefix: str, name: str = 'Trajectory')[source]

Bases: BaseModel

append(frame: Frame) None[source]
build(build_dir: str | PathLike, indent: int = 4, r2_bucket: str | None = None, extra_files: list[str | PathLike] | None = None) None[source]
classmethod check_prefix_is_alphanum(v: str) str[source]
frames: list[Frame]
id: UUID | int | str
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frames': FieldInfo(annotation=list[Frame], required=False, default=[]), 'id': FieldInfo(annotation=Union[UUID, int, str], required=False, default_factory=<lambda>), 'name': FieldInfo(annotation=str, required=False, default='Trajectory', description='Name of the renderer, used in the manifest file.'), 'prefix': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

name: str
prefix: str
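
A sketch of recording and exporting frames; the prefix, output directory, and the env variable are placeholders for illustration:

from core import Renderer

renderer = Renderer(prefix="demo")    # prefix must be alphanumeric
renderer.append(env.export_frame())   # env: an already-constructed Environment; call after steps of interest
renderer.build("viz")                 # write the collected frames (and manifest) under ./viz
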
class core.TaskConfig(*, name: str, task_kwargs: dict[str, BaseModel | JsonValue] = None, train_kwargs: dict[str, BaseModel | JsonValue] = None, eval_kwargs: dict[str, BaseModel | JsonValue] = None, test_kwargs: dict[str, BaseModel | JsonValue] = None)[source]

Bases: BaseModel

Convenience for making a config file entry for a TaskDataset.

eval_kwargs: dict[str, BaseModel | JsonValue]
make_dataset(split: str) TaskDataset[source]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'eval_kwargs': FieldInfo(annotation=dict[str, Union[BaseModel, JsonValue]], required=False, default_factory=dict, description='Additional arguments for the evaluation split.'), 'name': FieldInfo(annotation=str, required=True), 'task_kwargs': FieldInfo(annotation=dict[str, Union[BaseModel, JsonValue]], required=False, default_factory=dict, description='Arguments to pass to TaskDataset.from_name()'), 'test_kwargs': FieldInfo(annotation=dict[str, Union[BaseModel, JsonValue]], required=False, default_factory=dict, description='Additional arguments for the test split.'), 'train_kwargs': FieldInfo(annotation=dict[str, Union[BaseModel, JsonValue]], required=False, default_factory=dict, description='Additional arguments for the training split.')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

name: str
task_kwargs: dict[str, BaseModel | JsonValue]
test_kwargs: dict[str, BaseModel | JsonValue]
train_kwargs: dict[str, BaseModel | JsonValue]
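
For instance, a config entry could be built as in the sketch below; the name "dummy" must correspond to a dataset registered with TaskDataset.from_name, which is an assumption here:

from core import TaskConfig

config = TaskConfig(
    name="dummy",        # forwarded to TaskDataset.from_name()
    task_kwargs={},      # shared constructor kwargs for every split
    train_kwargs={},     # extra kwargs only for the training split
)
train_dataset = config.make_dataset(split="train")
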
class core.TaskDataset[source]

Bases: ABC, Generic[TEnvironment]

A base class for a dataset of tasks as environments.

Examples of task datasets: GSM8k, HotPotQA, etc. These are related environment instances with different problem specifications and reward conditions.

classmethod from_name(name: str, **env_kwargs) TaskDataset[source]
get_new_env() TEnvironment[source]

Get an env from a non-indexable dataset.

get_new_env_by_idx(idx: int) TEnvironment[source]

Get an env from a finite dataset.

iter_batches(batch_size: int, shuffle: bool = False) Iterator[list[TEnvironment]][source]

Construct batches from this dataset.

Parameters:
  • batch_size – Size of each batch. Note that if this dataset’s size is finite and isn’t evenly divisible by this value, the last yielded batch will be smaller than batch_size.

  • shuffle – Opt-in flag to shuffle without replacement.

Yields:

An iterator over batches of environments.
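
For example, batched sampling from the dummy dataset could be sketched as follows (DummyTaskDataset is infinite, so the loop is stopped manually; drawing batches from a non-indexable dataset via get_new_env is assumed):

from core import DummyTaskDataset

dataset = DummyTaskDataset()
for i, batch in enumerate(dataset.iter_batches(batch_size=4)):
    # batch is a list of up to 4 DummyEnv instances
    if i >= 2:  # infinite dataset: break out explicitly
        break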

class core.Tool(tool_fn: ~collections.abc.Callable[[...], ~typing.Any] | ~collections.abc.Callable[[...], ~collections.abc.Awaitable[~typing.Any]] = <function Tool.<lambda>>, *, type: ~typing.Literal['function'] = 'function', function: ~aviary.tools.base.FunctionInfo)[source]

Bases: BaseModel

classmethod from_function(function: Callable[[...], Any] | Callable[[...], Awaitable[Any]], docstring_style: DocstringStyle = DocstringStyle.AUTO, allow_empty_param_descriptions: bool = False, types_in_param_descriptions: bool = False, **formats) Tool[source]

Hydrate this class via inspection from a free function with a docstring.

info: FunctionInfo
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'populate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'info': FieldInfo(annotation=FunctionInfo, required=True, alias='function', alias_priority=2, description="The serialization alias of 'function' is to match LiteLLM structure on serialization, and the validation alias enables deserialization."), 'type': FieldInfo(annotation=Literal['function'], required=False, default='function')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

type: Literal['function']
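
The typical pattern is to write a plain function whose docstring describes each parameter, then hydrate a Tool from it. A sketch (the multiply function is invented for illustration):

from core import Tool

def multiply(a: float, b: float) -> float:
    """Multiply two numbers.

    Args:
        a: First operand.
        b: Second operand.
    """
    return a * b

tool = Tool.from_function(multiply)
print(tool.info.name)  # "multiply"
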
class core.ToolCall(*, id: str, type: Literal['function'] = 'function', function: ToolCallFunction)[source]

Bases: BaseModel

classmethod from_name(function_name: str, **kwargs) Self[source]
classmethod from_tool(tool: Tool, *args, id: str | None = None, **kwargs) Self[source]

Create a ToolCall from a Tool and arguments.

The *args are packaged into the ToolCallFunction's arguments dict on a best-effort basis. The **kwargs are passed to the tool call directly, since named parameters are required.

function: ToolCallFunction
static generate_id() str[source]

Generate a tool call ID of length 9 with values in [a-zA-Z0-9].

id: str
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'function': FieldInfo(annotation=ToolCallFunction, required=True), 'id': FieldInfo(annotation=str, required=True), 'type': FieldInfo(annotation=Literal['function'], required=False, default='function')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

type: Literal['function']
class core.ToolCallFunction(*, arguments: dict[str, Any], name: str)[source]

Bases: BaseModel

arguments: dict[str, Any]
classmethod deserialize_args(data: Any) Any[source]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'arguments': FieldInfo(annotation=dict[str, Any], required=True), 'name': FieldInfo(annotation=str, required=True)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

name: str
serialize_arguments(arguments: dict[str, Any]) str[source]
class core.ToolRequestMessage(*, role: Literal['assistant'] = 'assistant', content: str | None = None, content_is_json_str: bool = False, info: dict | None = None, function_call: None = None, tool_calls: list[ToolCall] = None)[source]

Bases: Message

content: str | None
function_call: None
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'content': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'content_is_json_str': FieldInfo(annotation=bool, required=False, default=False, description='Whether the content is JSON-serialized (e.g., for multiple modalities).', exclude=True, repr=False), 'function_call': FieldInfo(annotation=NoneType, required=False, default=None), 'info': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None, description='Optional metadata about the message.', exclude=True, repr=False), 'role': FieldInfo(annotation=Literal['assistant'], required=False, default='assistant', description='Matching LiteLLM structure.'), 'tool_calls': FieldInfo(annotation=list[ToolCall], required=False, default_factory=list, description='List of ToolCalls to make concurrently and independently.')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

role: Literal['assistant']
tool_calls: list[ToolCall]
class core.ToolResponseMessage(*, role: Literal['tool'] = 'tool', content: str, content_is_json_str: bool = False, info: dict | None = None, name: str, tool_call_id: str)[source]

Bases: Message

content: str
classmethod from_call(call: ToolCall, content: str) Self[source]
classmethod from_request(request: ToolRequestMessage, contents: Iterable[str]) list[Self][source]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'content': FieldInfo(annotation=str, required=True, description='Response message content, required to be a string by OpenAI/Anthropic.'), 'content_is_json_str': FieldInfo(annotation=bool, required=False, default=False, description='Whether the content is JSON-serialized (e.g., for multiple modalities).', exclude=True, repr=False), 'info': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None, description='Optional metadata about the message.', exclude=True, repr=False), 'name': FieldInfo(annotation=str, required=True, description='Name of the tool that was called.'), 'role': FieldInfo(annotation=Literal['tool'], required=False, default='tool', description='Matching LiteLLM structure.'), 'tool_call_id': FieldInfo(annotation=str, required=True, description='Propagated from ToolCall.id, enabling matching response with ToolRequestMessage.')}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

name: str
role: Literal['tool']
tool_call_id: str
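
A sketch tying the request/response pair together (the tool name and arguments are illustrative):

from core import ToolCall, ToolRequestMessage, ToolResponseMessage

request = ToolRequestMessage(tool_calls=[ToolCall.from_name("multiply", a=6, b=7)])
responses = ToolResponseMessage.from_request(request, contents=["42"])
assert responses[0].tool_call_id == request.tool_calls[0].id  # responses are matched to calls by ID
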
class core.ToolSelector(model_name: str = 'gpt-4o', acompletion: Callable[..., Awaitable[ModelResponse]] | None = None, accum_messages: bool = False)[source]

Bases: object

Simple entity to select a tool based on messages.

TOOL_CHOICE_REQUIRED: ClassVar[str] = 'required'
class core.ToolSelectorLedger(*, tools: list[Tool] = None, messages: list[ToolRequestMessage | ToolResponseMessage | Message] = None)[source]

Bases: BaseModel

Simple ledger to record tools and messages.

messages: list[ToolRequestMessage | ToolResponseMessage | Message]
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'messages': FieldInfo(annotation=list[Union[ToolRequestMessage, ToolResponseMessage, Message]], required=False, default_factory=list), 'tools': FieldInfo(annotation=list[Tool], required=False, default_factory=list)}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

tools: list[Tool]
core.argref_by_name(fxn_requires_state: bool = False, prefix: str = '', return_direct: bool = False, type_check: bool = False, args_to_skip: set[str] | None = None)[source]

Decorator to allow args to be a string key into a refs dict instead of the full object.

This can prevent LLM-powered tool selections from getting confused by full objects; instead, it enables them to work using named references. If a reference is not found, the decorator falls back to passing the original argument, unless it is the first argument. If the first argument's string is not found in the state object, an error is raised.

Parameters:
  • fxn_requires_state – Whether to pass the state object to the decorated function.

  • prefix – A prefix to add to the generated reference ID.

  • return_direct – Whether to return the result directly or update the state object.

  • type_check – Whether to type-check arguments with respect to the wrapped function’s type annotations.

  • args_to_skip – If provided, a set of argument names that should not be referenced by name.

Example 1:
>>> @argref_by_name()
... def my_func(foo: float): ...
Example 2:
>>> def my_func(foo: float, bar: float) -> list[float]:
...     return [foo, bar]
>>> wrapped_fxn = argref_by_name()(my_func)
>>> # Equivalent to my_func(state.refs["foo"])
>>> wrapped_fxn("foo", state=state)

Working with lists:
  • If you return a list, the decorator will create a new reference for each item in the list.

  • If you pass multiple args that are strings, the decorator will assume those are the keys.

  • If you need to pass a string, then use a keyword argument.

Example 1:
>>> @argref_by_name()
... def my_func(foo: float, bar: float) -> list[float]:
...     return [foo, bar]
Example 2:
>>> def my_func(foo: float, bar: float) -> list[float]:
...     return [foo, bar]
>>> wrapped_fxn = argref_by_name()(my_func)
>>> # Returns a multiline string with the new references
>>> # Equivalent to my_func(state.refs["a"], state.refs["b"])
>>> wrapped_fxn("a", "b", state=state)  
core.encode_image_to_base64(img: np.ndarray) str[source]

Encode an image to a base64 string, to be included as an image_url in a Message.

async core.eval_answer(proposed: str, correct: str, question: str | None = None, eval_mode: EvalAnswerMode = EvalAnswerMode.CONTAINS, llm_eval_config: dict | None = None) float[source]

Evaluate a proposed answer against a correct answer.

Will return 0 or 1, except for the llm-score mode, which should be between 0 and 1.
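
A sketch of the default usage; the exact matching behavior of the CONTAINS mode is assumed here for illustration:

from core import eval_answer

# Inside an async function:
score = await eval_answer(proposed="The answer is 42", correct="42")
# Expected to be 1.0 here, assuming CONTAINS checks that the correct answer appears in the proposal.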

core.is_coroutine_callable(obj) bool[source]

Return whether the input object is awaitable.

core.join(msgs: Iterable[Message], delimiter: str = '\n', include_roles: bool = True) str[source]
core.partial_format(value: str, **formats: dict[str, Any]) str[source]

Partially format a string given a variable amount of formats.
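
Brief sketches of these two helpers; the join output shown and the handling of unfilled placeholders are assumptions for illustration:

from core import Message, join, partial_format

text = join([Message(content="hi"), Message(role="assistant", content="hello")])
# e.g. "user: hi\nassistant: hello" (roles included by default)

greeting = partial_format("Hello {name}, today is {day}.", name="Ada")
# "{day}" is assumed to be left in place because no value was supplied for it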

core.wraps_doc_only(wrapped)[source]

A decorator to copy only the docstring from the wrapped function.

You cannot use functools.wraps directly because it will set the __wrapped__ attribute, which causes inspect.signature to inspect the wrapped function instead of the wrapper.

Usage:

def my_documented_function(foo):
    '''This is a function that does something with foo.'''
    pass

@wraps_doc_only(my_documented_function)
def my_other_function(foo, state):
    pass

In this example, the second function can have different arguments, types, etc., and only the docstring will be copied over.