@@ -73,7 +73,7 @@ The Model Context Protocol allows applications to provide context for LLMs in a
### Adding MCP to your Python project
- We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.
+ We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.
If you haven't created a uv-managed project yet, create one:
@@ -89,6 +89,7 @@ If you haven't created a uv-managed project yet, create one:
```
Alternatively, for projects using pip for dependencies:
+
```bash
pip install "mcp[cli]"
```
@@ -128,11 +129,13 @@ def get_greeting(name: str) -> str:
```
You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
+
```bash
mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
+
```bash
mcp dev server.py
```
@@ -245,6 +248,67 @@ async def fetch_weather(city: str) -> str:
return response.text
```
+ #### Output Schemas
+
+ Tools automatically generate JSON Schema definitions for their return types, helping LLMs understand the structure of the data they'll receive:
+
+ ```python
+ from pydantic import BaseModel
+
+ # Tools with primitive return types
+ @mcp.tool()
+ def get_temperature(city: str) -> float:
+     """Get the current temperature for a city"""
+     # In a real implementation, this would fetch actual weather data
+     return 72.5
+
+ # Tools with dictionary return types
+ @mcp.tool()
+ def get_user(user_id: int) -> dict:
+     """Get user information by ID"""
+     return {"id": user_id, "name": "John Doe", "email": "john@example.com"}
+
+ # Using Pydantic models for structured output
+ class WeatherData(BaseModel):
+     temperature: float
+     humidity: float
+     conditions: str
+
+ @mcp.tool()
+ def get_weather_data(city: str) -> WeatherData:
+     """Get structured weather data for a city"""
+     # In a real implementation, this would fetch actual weather data
+     return WeatherData(
+         temperature=72.5,
+         humidity=65.0,
+         conditions="Partly cloudy"
+     )
+
+ # Complex nested models
+ class Location(BaseModel):
+     city: str
+     country: str
+     coordinates: tuple[float, float]
+
+ class WeatherForecast(BaseModel):
+     current: WeatherData
+     location: Location
+     forecast: list[WeatherData]
+
+ @mcp.tool()
+ def get_weather_forecast(city: str) -> WeatherForecast:
+     """Get detailed weather forecast for a city"""
+     # In a real implementation, this would fetch actual forecast data
+     return WeatherForecast(
+         current=WeatherData(temperature=72.5, humidity=65.0, conditions="Partly cloudy"),
+         location=Location(city=city, country="USA", coordinates=(37.7749, -122.4194)),
+         forecast=[
+             WeatherData(temperature=75.0, humidity=62.0, conditions="Sunny"),
+             WeatherData(temperature=68.0, humidity=80.0, conditions="Rainy")
+         ]
+     )
+ ```
+
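To see concretely what "generate JSON Schema definitions" means, you can inspect the schema Pydantic itself produces for a model like `WeatherData` — a minimal sketch using plain Pydantic v2, independent of the MCP SDK (which builds on the same machinery):

```python
from pydantic import BaseModel


class WeatherData(BaseModel):
    temperature: float
    humidity: float
    conditions: str


# Pydantic v2 exposes the generated JSON Schema directly
schema = WeatherData.model_json_schema()
print(sorted(schema["properties"]))                    # ['conditions', 'humidity', 'temperature']
print(schema["properties"]["temperature"]["type"])     # number
```

This is the structural contract an LLM sees for the tool's output: field names, JSON types, and which fields are required.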
### Prompts
Prompts are reusable templates that help LLMs interact with your server effectively:
@@ -381,6 +445,7 @@ if __name__ == "__main__":
```
Run it with:
+
```bash
python server.py
# or
@@ -458,18 +523,17 @@ app.mount("/math", math.mcp.streamable_http_app())
```
For low-level server implementations with Streamable HTTP, see:
+
- Stateful server: [`examples/servers/simple-streamablehttp/`](examples/servers/simple-streamablehttp/)
- Stateless server: [`examples/servers/simple-streamablehttp-stateless/`](examples/servers/simple-streamablehttp-stateless/)
-
-
The streamable HTTP transport supports:
+
- Stateful and stateless operation modes
- Resumability with event stores
- - JSON or SSE response formats
+ - JSON or SSE response formats
- Better scalability for multi-node deployments
-
### Mounting to an Existing ASGI Server
> **Note**: SSE transport is being superseded by [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http).
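The mounting idea itself is plain ASGI routing: a parent app dispatches to a sub-app by path prefix, rewriting the path on the way in. A stdlib-only sketch of that dispatch, with a toy stand-in for the real sub-app (illustrative names, not the SDK's implementation):

```python
import asyncio


async def math_app(scope, receive, send):
    # Toy stand-in for something like mcp.streamable_http_app();
    # it simply echoes the path it was given after prefix stripping.
    await send({"type": "http.response.body", "body": scope["path"].encode()})


def mount(prefix, sub_app):
    """Return an ASGI app that strips `prefix` and delegates to `sub_app`."""
    async def app(scope, receive, send):
        if scope["type"] == "http" and scope["path"].startswith(prefix):
            # Rewrite the path so the sub-app sees paths relative to the mount point
            sub_scope = dict(scope, path=scope["path"][len(prefix):] or "/")
            await sub_app(sub_scope, receive, send)
        else:
            await send({"type": "http.response.start", "status": 404, "headers": []})
            await send({"type": "http.response.body", "body": b"not found"})
    return app


app = mount("/math", math_app)


async def demo():
    sent = []

    async def send(msg):
        sent.append(msg)

    await app({"type": "http", "path": "/math/mcp"}, None, send)
    return sent[-1]["body"]


print(asyncio.run(demo()))  # b'/mcp'
```

Frameworks like Starlette implement `app.mount(...)` this way, which is why an MCP server's ASGI app can be attached at any prefix of an existing server.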
@@ -637,6 +701,7 @@ async def query_db(name: str, arguments: dict) -> list:
```
The lifespan API provides:
+
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers
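The shape underneath these three points is an async context manager whose yielded value becomes the context handlers receive. A stdlib-only sketch of that pattern (the names `FakeDb`, `lifespan`, and `handle_request` are illustrative, not the SDK API):

```python
import asyncio
from contextlib import asynccontextmanager


class FakeDb:
    """Stand-in resource; a real server might hold a connection pool here."""

    async def query(self, sql):
        return [("row", 1)]


@asynccontextmanager
async def lifespan():
    db = FakeDb()            # startup: initialize resources once
    try:
        yield {"db": db}     # handlers receive this as their lifespan context
    finally:
        pass                 # shutdown: close connections, flush caches, etc.


async def handle_request(ctx, sql):
    # A handler reaching into the lifespan context for the shared resource
    return await ctx["db"].query(sql)


async def main():
    async with lifespan() as ctx:
        return await handle_request(ctx, "SELECT 1")


print(asyncio.run(main()))  # [('row', 1)]
```

The `try`/`finally` around the `yield` is what guarantees cleanup runs even if the server exits with an error.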