Feat/add logfire #5

Open · wants to merge 3 commits into main
20 changes: 17 additions & 3 deletions 2025-05-16-fastapi-demo/README.md
@@ -1,9 +1,10 @@
# FastAPI Demo with Math, Database and PydanticAI
# FastAPI Demo with Math, Database, PydanticAI and MCP

This is a FastAPI application that demonstrates:
- Mathematical operations (division, Fibonacci)
- Database operations with SQLAlchemy
- PydanticAI agent integration with Tavily search
- MCP (Model Context Protocol) integration with Playwright MCP server
- Logfire observability

## Setup
@@ -13,15 +14,20 @@ This is a FastAPI application that demonstrates:
uv sync
```

2. Create a `.env` file in the root directory with the following environment variables:
2. Ensure you have Node.js installed; it is used to launch the Playwright MCP server via `npx` (the Logfire MCP server runs via `uvx`, which ships with `uv`):
```bash
node --version # Should be v16 or higher
```

3. Create a `.env` file in the root directory with the following environment variables:
```
OPENAI_API_KEY=your_openai_api_key_here
TAVILY_API_KEY=your_tavily_api_key_here
LOGFIRE_TOKEN=your_logfire_token_here
DATABASE_URL=sqlite:///./test.db
```

3. Run the application:
4. Run the application:
```bash
uv run uvicorn src.app:app --host 0.0.0.0 --port 8000 --reload
```
@@ -34,6 +40,7 @@ This is a FastAPI application that demonstrates:
- `GET /items/` - List all items with pagination
- `GET /items/{item_id}` - Get a specific item by ID
- `POST /agent/query` - Query the PydanticAI agent with a question
- `POST /mcp/query` - Query the MCP-enabled agent with Playwright MCP

## Example Usage

@@ -44,6 +51,13 @@ curl -X POST "http://localhost:8000/agent/query" \
-d '{"question": "How do I use PydanticAI tools?"}'
```

Query the MCP agent with browser automation and Logfire telemetry capabilities:
```bash
curl -X POST "http://localhost:8000/mcp/query" \
-H "Content-Type: application/json" \
-d '{"question": "Create a simple Python script that prints hello world and save it to hello.py"}'
```
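
The same endpoint can also be called from Python. This is a minimal sketch (not part of this PR) that assumes the app is running locally on port 8000; it uses only the standard library and prints the structured `MCPBotResponse` fields:
```python
# Minimal sketch: POST a question to /mcp/query and inspect the structured
# response fields (answer, reasoning, services_used, confidence_percentage).
# Assumes the FastAPI app from this repo is running on http://localhost:8000.
import json
import urllib.request

payload = json.dumps(
    {"question": "Check for any exceptions in traces from the last hour using Logfire"}
).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8000/mcp/query",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

print(body["answer"])
print(f"Confidence: {body['confidence_percentage']}%, services used: {body['services_used']}")
```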

Run evals:
```bash
PYTHONPATH=. uv run python tests/evals.py
2 changes: 2 additions & 0 deletions 2025-05-16-fastapi-demo/pyproject.toml
@@ -11,8 +11,10 @@ dependencies = [
"uvicorn>=0.34.2",
"pydantic-ai>=0.2.9",
"pydantic-ai-slim[tavily]>=0.2.9",
"mcp>=1.9.2",
"python-dotenv>=1.1.0",
"pydantic-evals>=0.2.9",
"logfire-mcp>=0.0.11",
]

[tool.ruff]
17 changes: 17 additions & 0 deletions 2025-05-16-fastapi-demo/src/app.py
@@ -9,6 +9,7 @@
from sqlalchemy.orm import sessionmaker, Session

from src.agent import build_agent, answer_question, BotResponse
from src.mcp_agent import answer_mcp_question, MCPBotResponse

logfire.configure(
    service_name='api',
@@ -26,6 +27,7 @@
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
logfire.instrument_sqlalchemy(engine)
logfire.instrument_mcp()


# Database model
@@ -58,6 +60,10 @@ class AgentQuery(BaseModel):
    question: str


class MCPQuery(BaseModel):
    question: str


# Dependency to get DB session
def get_db():
    db = SessionLocal()
@@ -146,6 +152,17 @@ async def query_agent(query: AgentQuery, agent: Agent[None, BotResponse] = Depen
    return response


@app.post("/mcp/query", response_model=MCPBotResponse)
async def query_mcp_agent(query: MCPQuery):
    """
    Queries the MCP-enabled PydanticAI agent with browser automation and Logfire telemetry capabilities.
    """
    logfire.info(f"Querying MCP agent with question: {query.question}")
    response = await answer_mcp_question(query.question)
    return response



if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
83 changes: 83 additions & 0 deletions 2025-05-16-fastapi-demo/src/mcp_agent.py
@@ -0,0 +1,83 @@
import asyncio
from pathlib import Path
from textwrap import dedent
from typing import Annotated

import logfire
from dotenv import load_dotenv
from pydantic import BaseModel, Field
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

ROOT_DIR = Path(__file__).parent.parent
load_dotenv(dotenv_path=ROOT_DIR / ".env")

# Configure logfire instrumentation
logfire.configure(scrubbing=False, service_name='playwright-browser')
logfire.instrument_mcp()
logfire.instrument_pydantic_ai()

class MCPBotResponse(BaseModel):
    answer: str
    reasoning: str
    services_used: list[str] = []
    confidence_percentage: Annotated[int, Field(ge=0, le=100)]

SYSTEM_PROMPT = dedent(
"""
You're a helpful AI assistant with access to browser automation and Logfire telemetry analysis capabilities.

Browser capabilities (via Playwright):
- Navigate to websites, interact with web pages, take screenshots, and extract information
- Be thorough in web navigation and information extraction
- Take screenshots when helpful for verification
- Extract relevant information clearly and accurately

Logfire capabilities:
- Find and analyze exceptions in OpenTelemetry traces grouped by file
- Get detailed trace information about exceptions in specific files
- Run custom SQL queries on traces and metrics data
- Access OpenTelemetry schema information for query building
- Analyze application performance and error patterns

When working with these services:
- Explain what you're doing clearly
- Be mindful of website terms of service and respectful browsing practices
- Use appropriate time ranges for telemetry queries (max 7 days)
- Help identify patterns in application behavior and errors

Give a confidence percentage for your answer, from 0 to 100.
List any services you used (e.g., "playwright", "logfire") in the services_used field.
"""
)

# Set up MCP servers: Playwright for browser automation, logfire-mcp for telemetry analysis
browser_mcp = MCPServerStdio('npx', args=['--yes', '@playwright/mcp@latest'], tool_prefix='browser')
logfire_mcp = MCPServerStdio('uvx', args=['logfire-mcp'], tool_prefix='logfire')

# Create the agent with both MCP servers
agent = Agent(
    'openai:gpt-4o',
    output_type=MCPBotResponse,
    system_prompt=SYSTEM_PROMPT,
    mcp_servers=[browser_mcp, logfire_mcp],
    instrument=True,
)

async def answer_mcp_question(question: str) -> MCPBotResponse:
    """Run a question through the MCP-enabled agent."""
    async with agent.run_mcp_servers():
        result = await agent.run(user_prompt=question)
        return result.output

async def main():
    """Example usage of the browser and Logfire telemetry agent."""
    question = ('Help me analyze my application: First, check for any exceptions in traces from the last hour using Logfire. '
                'Then navigate to the Logfire documentation to get information about best practices for error monitoring. '
                'Finally, provide recommendations based on what you find.')

    result = await answer_mcp_question(question)
    print(result)

if __name__ == '__main__':
    asyncio.run(main())
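
As a quick illustration (not part of the diff) of the `MCPBotResponse` validation above: `confidence_percentage` is constrained to 0-100 via `Field(ge=0, le=100)`. The sketch assumes the project root is on `PYTHONPATH` and the `.env` from the setup steps is present, since importing `src.mcp_agent` configures Logfire and builds the agent; field values are invented.
```python
# Illustrative check of the MCPBotResponse model defined in src/mcp_agent.py.
from pydantic import ValidationError

from src.mcp_agent import MCPBotResponse

ok = MCPBotResponse(
    answer="No exceptions were found in the last hour.",
    reasoning="Queried Logfire traces and saw no error spans.",
    services_used=["logfire"],
    confidence_percentage=90,
)
print(ok.model_dump())

try:
    # 150 violates the le=100 constraint and is rejected by pydantic.
    MCPBotResponse(answer="x", reasoning="y", confidence_percentage=150)
except ValidationError as exc:
    print("Rejected out-of-range confidence:", exc.errors()[0]["msg"])
```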
53 changes: 50 additions & 3 deletions 2025-05-16-fastapi-demo/uv.lock

Some generated files are not rendered by default.