
Fix MPI buffer max_leading_slices #1079

Conversation

AlexanderSinn (Member) commented:
As noticed by @huixingjian, the hipace.comms_buffer_max_leading_slices option does not always work. The cause was one slice whose incoming metadata could be received too early if the Isend of the same slice completed instantly. Furthermore, I noticed that in some situations (when all ranks compute slices as fast as or faster than the head rank) it is possible to run into a deadlock where none of the leading slices are filled, so I adjusted the assert and documentation to count only the trailing slices.
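For context, here is a minimal, self-contained MPI sketch of the general pattern such an option controls: capping how many non-blocking slice sends may be in flight at once. This is not HiPACE++ code; the function and parameter names (send_slices_bounded, max_in_flight) are illustrative only. It shows why enforcing the cap on the wrong set of slices, or matching a receive too early, can stall a bounded pipeline.

```cpp
// Minimal sketch (not HiPACE++ code): cap the number of in-flight
// nonblocking sends so the communication buffer cannot grow without bound.
#include <mpi.h>
#include <cstddef>
#include <deque>
#include <vector>

void send_slices_bounded (const std::vector<std::vector<double>>& slices,
                          int dest, MPI_Comm comm, std::size_t max_in_flight)
{
    std::deque<MPI_Request> in_flight;

    for (std::size_t i = 0; i < slices.size(); ++i) {
        // If the cap is reached, wait for the oldest send to complete before
        // posting a new one. This is what keeps the buffer bounded; if the cap
        // is applied to the wrong set of slices, the pipeline can deadlock
        // when the receiving rank is not yet draining them.
        if (in_flight.size() >= max_in_flight) {
            MPI_Wait(&in_flight.front(), MPI_STATUS_IGNORE);
            in_flight.pop_front();
        }

        MPI_Request req;
        MPI_Isend(slices[i].data(), static_cast<int>(slices[i].size()),
                  MPI_DOUBLE, dest, static_cast<int>(i), comm, &req);
        in_flight.push_back(req);
    }

    // Drain the remaining sends before returning.
    for (auto& req : in_flight) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
}
```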

  • Small enough (< few 100s of lines), otherwise it should probably be split into smaller PRs
  • Tested (describe the tests in the PR description)
  • Runs on GPU (basic: the code compiles and runs well with the new module)
  • Contains an automated test (checksum and/or comparison with theory)
  • Documented: all elements (classes and their members, functions, namespaces, etc.) are documented
  • Constified (all that can be const is const)
  • Code is clean (no unwanted comments)
  • Style and code conventions (listed at the bottom of https://github.com/Hi-PACE/hipace) are respected
  • Proper label and GitHub project, if applicable

AlexanderSinn added the bug (Something isn't working) and Parallelization (Longitudinal and transverse MPI decomposition) labels on Mar 9, 2024.
MaxThevenet (Member) left a comment:

Thanks for the fix!

MaxThevenet merged commit 89e7976 into Hi-PACE:development on Mar 13, 2024.
10 checks passed
Labels: bug (Something isn't working), Parallelization (Longitudinal and transverse MPI decomposition)
Projects: none yet

2 participants