Commit 96432be

Use markdown for docs
1 parent 687d813 commit 96432be

17 files changed: +1689 −1045 lines

docs/api/agents.md

Lines changed: 348 additions & 0 deletions
@@ -0,0 +1,348 @@
# Agents

## Schema Hierarchy

The Atomic Agents framework uses Pydantic for schema validation and serialization. All input and output schemas follow this inheritance pattern:

```
pydantic.BaseModel
└── BaseIOSchema
    ├── BaseAgentInputSchema
    └── BaseAgentOutputSchema
```
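
As a quick sanity check of this hierarchy, the sketch below verifies the relationships with `issubclass`. It assumes the default schemas are importable from `atomic_agents.agents.base_agent` alongside `BaseAgent`; adjust the import paths to match your installed version.

```python
from pydantic import BaseModel

from atomic_agents.lib.base.base_io_schema import BaseIOSchema
# Assumed import location; the default schemas ship next to BaseAgent.
from atomic_agents.agents.base_agent import BaseAgentInputSchema, BaseAgentOutputSchema

# Every schema in the framework ultimately derives from pydantic.BaseModel.
assert issubclass(BaseIOSchema, BaseModel)
assert issubclass(BaseAgentInputSchema, BaseIOSchema)
assert issubclass(BaseAgentOutputSchema, BaseIOSchema)
```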

### BaseIOSchema

The base schema class that all agent input/output schemas inherit from.

```{eval-rst}
.. py:class:: BaseIOSchema

   Base schema class for all agent input/output schemas. Inherits from :class:`pydantic.BaseModel`.

   All agent schemas must inherit from this class to ensure proper serialization and validation.

   **Inheritance:**

   - :class:`pydantic.BaseModel`
```
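
Because every schema is a Pydantic model, instances are validated on construction and serialize directly to and from JSON. A minimal sketch using the default input schema, assuming Pydantic v2's `model_dump_json`/`model_validate_json` API and the import path noted above:

```python
from atomic_agents.agents.base_agent import BaseAgentInputSchema

message = BaseAgentInputSchema(chat_message="Hello, agent!")

# Serialize to JSON and rebuild the schema from it (Pydantic v2 API).
payload = message.model_dump_json()
restored = BaseAgentInputSchema.model_validate_json(payload)
assert restored.chat_message == "Hello, agent!"
```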

### BaseAgentInputSchema

The default input schema for agents.

```{eval-rst}
.. py:class:: BaseAgentInputSchema

   Default input schema for agent interactions.

   **Inheritance:**

   - :class:`BaseIOSchema` → :class:`pydantic.BaseModel`

   .. py:attribute:: chat_message
      :type: str

      The message to send to the agent.

      Example:
         >>> input_schema = BaseAgentInputSchema(chat_message="Hello, agent!")
         >>> agent.run(input_schema)
```

### BaseAgentOutputSchema

The default output schema for agents.

```{eval-rst}
.. py:class:: BaseAgentOutputSchema

   Default output schema for agent responses.

   **Inheritance:**

   - :class:`BaseIOSchema` → :class:`pydantic.BaseModel`

   .. py:attribute:: chat_message
      :type: str

      The response message from the agent.

      Example:
         >>> response = agent.run(input_schema)
         >>> print(response.chat_message)
```

### Creating Custom Schemas

You can create custom input/output schemas by inheriting from `BaseIOSchema`:

```python
from typing import List, Optional

from pydantic import Field

from atomic_agents.lib.base.base_io_schema import BaseIOSchema


class CustomInputSchema(BaseIOSchema):
    """Input schema carrying the user's message and optional context."""

    chat_message: str = Field(..., description="User's message")
    context: Optional[str] = Field(None, description="Optional context for the agent")


class CustomOutputSchema(BaseIOSchema):
    """Output schema with the agent's response, follow-ups, and a confidence score."""

    chat_message: str = Field(..., description="Agent's response")
    follow_up_questions: List[str] = Field(
        default_factory=list,
        description="Suggested follow-up questions"
    )
    confidence: float = Field(
        ...,
        description="Confidence score for the response",
        ge=0.0,
        le=1.0
    )
```
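
The Pydantic constraints on these fields are enforced at construction time. A minimal sketch of what that buys you, assuming Pydantic v2 (the `confidence` bounds come from the `ge`/`le` arguments above):

```python
from pydantic import ValidationError

# Valid: confidence sits inside the declared [0.0, 1.0] range.
ok = CustomOutputSchema(chat_message="Done.", confidence=0.87)
print(ok.follow_up_questions)  # [] — populated by default_factory

# Invalid: confidence > 1.0 violates the le=1.0 constraint and raises.
try:
    CustomOutputSchema(chat_message="Done.", confidence=1.5)
except ValidationError as exc:
    print(exc.error_count(), "validation error")
```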

## Base Agent Configuration

### BaseAgentConfig

The configuration class for BaseAgent that defines all settings and components.

```{eval-rst}
.. py:class:: BaseAgentConfig

   Configuration class for BaseAgent.

   **Inheritance:**

   - :class:`pydantic.BaseModel`

   .. py:attribute:: client
      :type: Any

      The LLM client to use (e.g., OpenAI, Anthropic, etc.). Must be wrapped with instructor.

   .. py:attribute:: model
      :type: str

      The model identifier to use with the client (e.g., "gpt-4", "claude-3-opus-20240229").

   .. py:attribute:: memory
      :type: Optional[AgentMemory]

      Memory component for storing conversation history. Defaults to None.

   .. py:attribute:: system_prompt_generator
      :type: Optional[SystemPromptGenerator]

      Generator for creating system prompts. Defaults to None.

   .. py:attribute:: input_schema
      :type: Optional[Type[BaseIOSchema]]

      Schema for validating agent inputs. Defaults to BaseAgentInputSchema.
      Must be a subclass of BaseIOSchema.

   .. py:attribute:: output_schema
      :type: Optional[Type[BaseIOSchema]]

      Schema for validating agent outputs. Defaults to BaseAgentOutputSchema.
      Must be a subclass of BaseIOSchema.

   .. py:attribute:: tools
      :type: Optional[List[str]]

      List of tool names to make available to the agent. Defaults to None.

   .. py:attribute:: tool_configs
      :type: Optional[Dict[str, Dict[str, Any]]]

      Configuration parameters for tools. Defaults to None.

   .. py:attribute:: components
      :type: Optional[Dict[str, Any]]

      Additional components to attach to the agent. Defaults to None.

   Example:
      >>> config = BaseAgentConfig(
      ...     client=instructor.from_openai(OpenAI()),
      ...     model="gpt-4",
      ...     input_schema=CustomInputSchema,
      ...     output_schema=CustomOutputSchema,
      ...     memory=AgentMemory(),
      ...     system_prompt_generator=SystemPromptGenerator(
      ...         background=["You are a helpful assistant"],
      ...         steps=["1. Understand the request", "2. Provide a response"]
      ...     )
      ... )
```

## Base Agent

The BaseAgent class provides core functionality for building AI agents with structured input/output schemas, memory management, and streaming capabilities.

### Basic Usage

```python
import instructor
from openai import OpenAI

from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig
from atomic_agents.lib.components.agent_memory import AgentMemory

# Initialize with OpenAI
client = instructor.from_openai(OpenAI())

# Create basic configuration
config = BaseAgentConfig(
    client=client,
    model="gpt-4",
    memory=AgentMemory()
)

# Initialize agent
agent = BaseAgent(config)
```

### Class Documentation

```{eval-rst}
.. py:class:: BaseAgent

   .. py:method:: __init__(config: BaseAgentConfig)

      Initializes a new BaseAgent instance.

      :param config: Configuration object containing client, model, memory, and other settings
      :type config: BaseAgentConfig

   .. py:method:: run(user_input: Optional[BaseIOSchema] = None) -> BaseIOSchema

      Runs the chat agent with the given user input synchronously.

      :param user_input: The input from the user
      :type user_input: Optional[BaseIOSchema]
      :return: The response from the chat agent
      :rtype: BaseIOSchema

   .. py:method:: run_async(user_input: Optional[BaseIOSchema] = None) -> AsyncGenerator[BaseIOSchema, None]

      Runs the chat agent with streaming output asynchronously.

      :param user_input: The input from the user
      :type user_input: Optional[BaseIOSchema]
      :return: An async generator yielding partial responses
      :rtype: AsyncGenerator[BaseIOSchema, None]

   .. py:method:: reset_memory()

      Resets the agent's memory to its initial state.

   .. py:method:: get_context_provider(provider_name: str) -> Type[SystemPromptContextProviderBase]

      Retrieves a context provider by name.

      :param provider_name: The name of the context provider
      :type provider_name: str
      :return: The context provider if found
      :rtype: SystemPromptContextProviderBase
      :raises KeyError: If the context provider is not found

   .. py:method:: register_context_provider(provider_name: str, provider: SystemPromptContextProviderBase)

      Registers a new context provider.

      :param provider_name: The name of the context provider
      :type provider_name: str
      :param provider: The context provider instance
      :type provider: SystemPromptContextProviderBase

   .. py:method:: unregister_context_provider(provider_name: str)

      Unregisters an existing context provider.

      :param provider_name: The name of the context provider to remove
      :type provider_name: str
      :raises KeyError: If the context provider is not found
```
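
The context-provider methods above inject extra information into the system prompt at runtime. Below is a minimal sketch of registering, retrieving, and removing a provider on an existing `agent`. It assumes `SystemPromptContextProviderBase` is importable from `atomic_agents.lib.components.system_prompt_generator`, takes a `title` constructor argument, and exposes a `get_info()` hook returning the text to insert; check your installed version for the exact import path and interface.

```python
from datetime import date

# Assumed import path for the context provider base class.
from atomic_agents.lib.components.system_prompt_generator import (
    SystemPromptContextProviderBase,
)


class CurrentDateProvider(SystemPromptContextProviderBase):
    """Illustrative provider that adds today's date to the system prompt."""

    def get_info(self) -> str:
        return f"Today's date is {date.today().isoformat()}."


# Register the provider under a name, look it up, then remove it again.
agent.register_context_provider("current_date", CurrentDateProvider(title="Current date"))
provider = agent.get_context_provider("current_date")
print(provider.get_info())
agent.unregister_context_provider("current_date")

# reset_memory() clears the conversation history back to its initial state.
agent.reset_memory()
```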

### Examples

#### Basic Synchronous Interaction

```python
# Create input and get response
user_input = agent.input_schema(chat_message="Tell me about quantum computing")
response = agent.run(user_input)

print(f"Assistant: {response.chat_message}")
```

#### Streaming Response

```python
import asyncio

import instructor
from openai import AsyncOpenAI

from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig


async def stream_chat():
    # Initialize with AsyncOpenAI for streaming
    client = instructor.from_openai(AsyncOpenAI())
    agent = BaseAgent(BaseAgentConfig(client=client, model="gpt-4"))

    # Create input and stream response
    user_input = agent.input_schema(chat_message="Explain streaming")
    print("\nUser: Explain streaming")
    print("Assistant: ", end="", flush=True)

    async for partial_response in agent.run_async(user_input):
        if hasattr(partial_response, "chat_message"):
            print(partial_response.chat_message, end="", flush=True)
    print()


asyncio.run(stream_chat())
```

#### Using Tools

```python
# Create agent with tools
agent = BaseAgent(
    config=BaseAgentConfig(
        client=client,
        model="gpt-4",
        tools=["calculator", "searxng_search"],
        tool_configs={
            "searxng_search": {
                "instance_url": "https://your-searxng-instance.com"
            }
        }
    )
)

# The agent can now use these tools in its responses
response = agent.run(
    agent.input_schema(
        chat_message="What is the square root of 144 plus the current temperature in London?"
    )
)
```

#### Custom Memory and System Prompt

```python
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator

# Create custom system prompt
generator = SystemPromptGenerator(
    background=["You are a helpful AI assistant specializing in technical support."],
    steps=[
        "1. Understand the technical issue",
        "2. Ask clarifying questions if needed",
        "3. Provide step-by-step solutions"
    ]
)

# Create agent with custom memory and prompt
agent = BaseAgent(
    config=BaseAgentConfig(
        client=client,
        model="gpt-4",
        memory=AgentMemory(),
        system_prompt_generator=generator
    )
)
```
