I read the docs and source code, but I can't really understand what these segments are or why they are stored for so long. You probably discussed it internally, but the repo contains no information beyond "Segments are used to store the published messages."
We tried it on a machine with roughly 250 MB of RAM and were shocked by a 10 MB/hour "leak". After several hours of debugging rumqttd (valgrind cannot tell what is happening to a reference-counted allocation), we found out that the segments were configured improperly.
Questions:
What is the trade-off between segment count and segment size? How should users choose between one big segment and many smaller ones?
Why do segments always store messages until they are saturated? The code of readv is:
```rust
let o = self.data[idx as usize..limit as usize].iter().cloned().zip(offsets);
out.extend(o);
```
So in the case of publishes, it clones the (reference-counted) body and topic, which are only dropped in apply_retention. If a message is not retained, why store it longer than it takes to deliver it to all currently connected clients? Instead, the broker keeps messages until the segments are saturated (memory that other processes could have used productively) and only frees them, gradually, once full.
I think this is not optimal: I noticed that rumqttd delivers messages noticeably later when it has to perform that retention, which it does just before appending a new message:
After another session with valgrind I think I got it: max_segment_size and max_segment_count apply to every "filter" separately, so you can't really limit memory usage without specifying all possible filters upfront and summing across them.
But if retention were applied before the message arrived, clients wouldn't have to wait for apply_retention to finish.