OutOfMemoryError thrown when logging records for failed tests #818
Comments
@bulivlad Do you have a reproducible example for this? It would be very helpful if you could share the process and test case that cause this.
Hello, this issue still persists with around 100,000 records. We currently manage to get around it by implementing a custom test annotation that injects a RecordStreamSource returning an empty ArrayList (see the sketch below). However, we would still appreciate it if this issue could be fixed within the library. As far as I can tell, converting RecordStreamSourceImpl to use streaming should do the trick. Would you be willing to accept a PR for this?
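For reference, the workaround could look roughly like the sketch below: a RecordStreamSource that always reports an empty record list, so nothing accumulates in memory. The package and method names are assumptions based on recent zeebe-process-test versions and may differ:

```java
import io.camunda.zeebe.process.test.api.RecordStreamSource;
import io.camunda.zeebe.protocol.record.Record;

import java.util.Collections;

// Assumed interface shape; adjust to the RecordStreamSource in your version.
final class EmptyRecordStreamSource implements RecordStreamSource {

  @Override
  public Iterable<Record<?>> getRecords() {
    // No records are ever retained, so failure logging has nothing to buffer.
    return Collections.emptyList();
  }
}
```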
Of course! 🙌
🤔 Interesting, but how would you ensure multiple assertions can be made? A stream can only be consumed once. For example, I want to assert something about the process and about a published message.
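The single-consumption constraint is easy to demonstrate with a plain `java.util.stream.Stream`: the second terminal operation throws.

```java
import java.util.stream.Stream;

class StreamReuseDemo {
  public static void main(String[] args) {
    Stream<String> records = Stream.of("a", "b", "c");
    System.out.println(records.count()); // first terminal operation: prints 3
    records.count(); // throws IllegalStateException: stream has already been operated upon or closed
  }
}
```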
Fair, I did not think of that issue. This also makes it clear why the records are stored in a collection in the first place. What about creating a separate source stream for each usage (see the sketch below)? However, this would add some overhead to the transformation, as each stream would create record copies for itself.
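One way to express the "separate stream per usage" idea is a factory that hands each caller a fresh stream. The names here are illustrative only, not the library's API:

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Stream;

final class ReplayableSource<T> {
  private final Supplier<Stream<T>> factory;

  ReplayableSource(Supplier<Stream<T>> factory) {
    this.factory = factory;
  }

  Stream<T> stream() {
    // Each caller gets its own stream, so multiple assertions can each consume one.
    return factory.get();
  }

  public static void main(String[] args) {
    List<String> backing = List.of("record-1", "record-2");
    ReplayableSource<String> source = new ReplayableSource<>(backing::stream);
    source.stream().forEach(System.out::println); // first assertion
    source.stream().forEach(System.out::println); // second assertion, independent of the first
  }
}
```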
As ZPT is built on top of an in-memory log storage, we'll always have some memory limit that we might exceed. So, I have two thoughts: 1 is the easy fix/workaround for your problem. For 2, we can look at re-using the data stored by RecordStreamSourceImpl. We currently copy each record into that class for simpler code and performance: each record only needs to be deserialized once and can easily be used in any assertion afterward. We could postpone deserialization to the assertions and make it selective based on the type of assertion, yielding a much more memory-friendly solution (roughly half the memory load of the complete log) without costing too much CPU.
@timvahlbrock Is that something you'd be interested in trying out? It could be a change that requires some deeper cuts. 💭 Please note that records are deserialized in multiple stages.
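For illustration, a minimal sketch of what deferred, memoized deserialization could look like; the class and field names are hypothetical and not part of the library:

```java
import java.util.function.Function;

// Keeps the raw log entry and deserializes lazily, once, when an
// assertion first needs the typed record.
final class LazyRecord<R> {
  private final byte[] rawEntry;                // as stored in the in-memory log
  private final Function<byte[], R> deserializer;
  private R deserialized;                       // memoized after first use

  LazyRecord(byte[] rawEntry, Function<byte[], R> deserializer) {
    this.rawEntry = rawEntry;
    this.deserializer = deserializer;
  }

  synchronized R value() {
    if (deserialized == null) {
      deserialized = deserializer.apply(rawEntry);
    }
    return deserialized;
  }
}
```

Records that no assertion ever touches would then only cost their raw size, which is where the memory saving comes from.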
@korthout Unfortunately, the time I had to work on this was cut by organisational changes, but I can check with my teammates whether anyone has the capacity to pursue this. One more thing to consider: I'm quite confident that this library is not used for assertions in our case. As far as I can tell, the only place where the RecordStream is used is the debug logging in case of a test failure. So if we could disable that debug logging by configuration, the records wouldn't be collected and our workaround would be obsolete. However, I feel like your solution is somewhat cleaner, and I also need to verify that what I just stated is actually accurate. I can do the latter tomorrow.
That's interesting. Can I ask how you're using this instead? |
So I just checked: we don't make any assertions on the Zeebe records in the affected test suites. I also remembered that we don't use this library directly, but rather spring-starter-camunda-test, to get an in-memory Zeebe engine per test. That starter is also responsible for logging the records in the case of a test failure, so my proposal to disable the logging of records would need to be addressed there.
Description
An OutOfMemoryError is thrown when trying to log all the records for complex processes, due to the way the logging is implemented. Currently, when a test fails, all the records are collected into a StringBuilder and then logged. There is no way to trim down or suppress the record logging.
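A sketch of the memory-friendlier direction discussed in the thread: traverse the records and emit one log call per record instead of concatenating the whole stream into a single StringBuilder. This is an illustrative sketch (using SLF4J), not the library's actual implementation:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class IncrementalRecordLogger {
  private static final Logger LOG = LoggerFactory.getLogger(IncrementalRecordLogger.class);

  // Peak memory is bounded by the largest single record, not by the
  // concatenation of the entire record stream.
  static void logRecords(Iterable<String> serializedRecords) {
    for (String record : serializedRecords) {
      LOG.info("{}", record);
    }
  }
}
```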
Expected behaviour
Logging the records should not throw an OutOfMemoryError.
Reproduction steps
Create a complex process that generates a large stream of records (around 100,000 records in our case).
Stacktrace
Environment