
Commit e1a6dd9

Clarify return type in training_step docs in case of manual optimization (#19327)

Authored-by: awaelchli
Co-authored-by: Julien Hauret <53187038+jhauret@users.noreply.github.com>
Co-authored-by: Jirka Borovec <6035284+Borda@users.noreply.github.com>

1 parent 40197ed commit e1a6dd9

File tree: 1 file changed, +5 −3 lines changed


src/lightning/pytorch/core/module.py (+5 −3)
@@ -689,9 +689,11 @@ def training_step(self, *args: Any, **kwargs: Any) -> STEP_OUTPUT:
 
         Return:
             - :class:`~torch.Tensor` - The loss tensor
-            - ``dict`` - A dictionary. Can include any keys, but must include the key ``'loss'``.
-            - ``None`` - Skip to the next batch. This is only supported for automatic optimization.
-              This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.
+            - ``dict`` - A dictionary which can include any keys, but must include the key ``'loss'`` in the case of
+              automatic optimization.
+            - ``None`` - In automatic optimization, this will skip to the next batch (but is not supported for
+              multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning
+              the loss is not required.
 
         In this step you'd normally do the forward pass and calculate the loss for a batch.
         You can also do fancier things like multiple forward passes or something model specific.
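The documented contract admits three return shapes from ``training_step`` under automatic optimization: a loss value, a dict containing the key ``'loss'``, or ``None`` to skip the batch. The following is a minimal, hypothetical sketch of that dispatch logic, not Lightning's actual internals; the helper name ``handle_step_output`` is invented for illustration.

```python
# Hypothetical sketch (NOT Lightning's real implementation): how a training
# loop might interpret the three documented return types of training_step
# under automatic optimization.

def handle_step_output(output):
    """Normalize a training_step return value to a loss, or None to skip.

    Mirrors the documented contract:
      - a loss value -> used directly for the backward pass
      - a dict       -> must contain the key 'loss'
      - None         -> skip to the next batch (automatic optimization only)
    """
    if output is None:
        return None  # skip this batch
    if isinstance(output, dict):
        if "loss" not in output:
            raise ValueError("training_step dict output must include 'loss'")
        return output["loss"]
    return output  # assume a loss tensor/scalar


assert handle_step_output(0.5) == 0.5
assert handle_step_output({"loss": 1.25, "acc": 0.9}) == 1.25
assert handle_step_output(None) is None
```

Under manual optimization, by contrast, the commit's wording makes clear that the return value carries no special meaning, since the user calls ``backward`` and the optimizer steps themselves.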

0 commit comments
