Default batch transfer behavior #20541
ziw-liu asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule (Unanswered)
For CPU-to-CUDA batch transfer, when the batch is in host pinned memory, does the default Trainer behavior use a non-blocking transfer automatically? Or do users need to override the batch transfer hook for it to be non-blocking?

Replies: 1 comment

It reads like it does: see pytorch-lightning/src/lightning/fabric/utilities/apply_func.py, Lines 98 to 108 in a944e77.
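A minimal sketch of the behavior the linked apply_func.py snippet describes, assuming the simplified names used here (`move_tensor_to_device`, `_BLOCKING_DEVICE_TYPES`) are illustrative rather than the exact library internals: `non_blocking=True` is requested for non-CPU/MPS targets, and the copy is only truly asynchronous when the source tensor is pinned.

```python
import torch

# Device types for which non_blocking transfers are skipped
# (illustrative constant, mirroring the idea in the linked snippet).
_BLOCKING_DEVICE_TYPES = ("cpu", "mps")


def move_tensor_to_device(data: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Move a tensor to `device`, requesting a non-blocking copy for CUDA-like targets."""
    kwargs = {}
    if device.type not in _BLOCKING_DEVICE_TYPES:
        # For CUDA targets, the copy can overlap with host compute,
        # but only if the source tensor is in pinned (page-locked) memory.
        kwargs["non_blocking"] = True
    return data.to(device, **kwargs)


# Usage: pin the batch so the CPU -> CUDA copy can actually be asynchronous.
batch = torch.randn(4, 3)
if torch.cuda.is_available():
    batch = batch.pin_memory()
    moved = move_tensor_to_device(batch, torch.device("cuda"))
else:
    moved = move_tensor_to_device(batch, torch.device("cpu"))
```

So under this reading, overriding the batch transfer hook is not needed just to get `non_blocking=True`; pinning the host memory (e.g. via `pin_memory=True` in the `DataLoader`) is the part the user controls.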