if you are having issues with the producer, like the error `record size (3570720
fluvio produce hello-node --max-request-size 37748736
```
# Fluvio Client Disconnects

If you frequently experience Fluvio client disconnects or reconnects for the
producer, the consumer, or both, you may see error messages like:

- `Error: the produce request retry timeout limit reached`
- `Error: Reconnecting to stream consumer`

These issues often stem from network reliability problems or insufficient async
task scheduling time.

### Troubleshooting Steps

1. **Check Network Connectivity**:
   - Review network logs to ensure a stable connection to the Fluvio cluster.

2. **Verify CPU Allocation**:
   - Ensure sufficient CPUs are allocated to the application and the async
     runtime. Running all async tasks on a single CPU can starve the Fluvio
     client, causing timeouts and reconnections.

3. **Inspect Async Runtime Configuration**:
   - If using the Tokio crate, ensure the `full` or `rt-multi-thread` feature is
     enabled in your `Cargo.toml`:

     ```toml
     [dependencies]
     tokio = { version = "1.0", features = ["full"] }
     ```
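
As a quick code-level check for step 2, here is a minimal, standard-library-only Rust sketch that reports how much parallelism the process actually has; the warning threshold of 2 is an illustrative assumption, not a Fluvio requirement:

```rust
use std::thread;

fn main() {
    // How many CPUs can this process actually use? In a container pinned to
    // one core this reports 1, which matches the starvation scenario above.
    let cpus = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    println!("available parallelism: {cpus}");
    if cpus < 2 {
        eprintln!("warning: a single CPU may starve async I/O tasks");
    }
}
```

Note that `available_parallelism` respects cgroup CPU limits on Linux, so it reflects what a containerized deployment really gets, not the host's core count.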

For more details on handling CPU-bound tasks, refer to the Tokio documentation
on [CPU bound tasks].
[`fluvio cluster upgrade`]: cli/fluvio/cluster.mdx#fluvio-cluster-upgrade
[filtering tracing log]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html
[Discord]: https://discord.com/invite/bBG2dTz
[CPU bound tasks]: https://docs.rs/tokio/latest/tokio/#cpu-bound-tasks-and-blocking-code