Crash caused by leaked semaphore objects #13
Hey @GenevieveBuckley, thanks for reporting this!
Assuming it's a
Well, your image has 4 billion :-D I wrote a little minimal-working-example to reproduce your issue.
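The snippet itself isn't shown in this excerpt, but a minimal sketch of such a reproducer, assuming a synthetic ~1.6 GB label image and using `skimage.measure.regionprops_table` as a stand-in for the plugin's label-statistics call (neither the actual image nor the actual function is preserved here), could look like this:

```python
import numpy as np
from skimage.measure import regionprops_table

# Synthetic stand-in for the reported data: 100 slices of 2048 x 2048,
# 32-bit labels (~1.6 GB in memory), one label per 128 x 128 tile.
tile = (np.arange(2048)[:, None] // 128 * 16
        + np.arange(2048)[None, :] // 128).astype(np.uint32)
labels = np.empty((100, 2048, 2048), dtype=np.uint32)
for z in range(100):
    labels[z] = tile + z * 256 + 1  # offset keeps labels unique per slice

# Measure only the label size (number of voxels per label).
stats = regionprops_table(labels, properties=("label", "area"))
print(len(stats["label"]), "labels measured")
```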
This code runs forever on my machine and fills up all 32 GB of memory (even though the image is just 1.6 GB). I continued playing with the underlying code and suspect I found a problem that can cause your issue. You'll see a PR in a minute, and a release of a new version some minutes after. Let me know if that solves your issue! And thanks again for reporting!
Quick benchmark, using the new release and this image (note: just 100 slices), with this code:
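The benchmark code isn't shown in this excerpt either; a timed sketch under the same assumptions as above (synthetic label image, `regionprops_table` as a stand-in for the plugin's label-statistics function), with the slice count as a parameter so the 1000-slice case mentioned below can be tried, might look like:

```python
import time
import numpy as np
from skimage.measure import regionprops_table

def benchmark(n_slices=100):
    """Time a label-size measurement on a synthetic (n_slices, 2048, 2048) volume."""
    tile = (np.arange(2048)[:, None] // 128 * 16
            + np.arange(2048)[None, :] // 128).astype(np.uint32)
    labels = np.empty((n_slices, 2048, 2048), dtype=np.uint32)
    for z in range(n_slices):
        labels[z] = tile + z * 256 + 1  # unique labels per slice
    start = time.perf_counter()
    regionprops_table(labels, properties=("label", "area"))
    return time.perf_counter() - start

print(f"100 slices: {benchmark(100):.1f} s")  # ~1.6 GB label image
# benchmark(1000) would need a ~16 GB label image, more than a 32 GB machine
# can comfortably hold alongside the measurement's own working memory.
```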
Takes about one minute on my computer. Increasing it to 1000 slices exceeds what my little laptop can do. I don't expect a label image of 16 GB to be processed by a computer with 32 GB of memory. If you test it on a computer with more memory, I would be curious how long it takes. (-:
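For reference, the 16 GB figure follows directly from the reported shape, assuming 32-bit labels:

```python
# Rough memory footprint of the reported dataset, assuming uint32 labels.
n_voxels = 1000 * 2048 * 2048        # reported shape (1000, 2048, 2048)
print(n_voxels)                      # 4_194_304_000, i.e. ~4.2 billion voxels
print(n_voxels * 4 / 1e9, "GB")      # ~16.8 GB at 4 bytes per voxel
```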
Hm, ok. Those back-of-the-envelope calculations make sense; I guess even my "small" examples are still pretty big. It's difficult getting access to a machine with more memory at the moment - the supercomputing cluster is undergoing an upgrade (which should be good eventually, but quite a bit of it is offline now).
I'm getting crashes when running label statistics for datasets around size (1000, 2048, 2048) pixels. There's some sort of memory leak, which kills ipython completely. I'd understand more if this was a truly giant dataset, or if I was doing excessively complicated label statistics instead of just label size, but that's not the case. I'm using a small, cropped subsection of my larger dataset, and I'd always considered one to two thousand pixels a fairly reasonable size to process in memory.