Commit afe5708

chore: remove redundant words in comment (#20510)
Signed-off-by: withbest <seekseat@outlook.com.>
1 parent 9177ec0 commit afe5708

4 files changed: +5 -5 lines changed


docs/source-pytorch/tuning/profiler_intermediate.rst (+2 -2)
@@ -55,7 +55,7 @@ The profiler will generate an output like this:
     Self CPU time total: 1.681ms

 .. note::
-    When using the PyTorch Profiler, wall clock time will not not be representative of the true wall clock time.
+    When using the PyTorch Profiler, wall clock time will not be representative of the true wall clock time.
     This is due to forcing profiled operations to be measured synchronously, when many CUDA ops happen asynchronously.
     It is recommended to use this Profiler to find bottlenecks/breakdowns, however for end to end wall clock time use
     the ``SimpleProfiler``.

@@ -142,7 +142,7 @@ This profiler will record ``training_step``, ``validation_step``, ``test_step``,
 The output above shows the profiling for the action ``training_step``.

 .. note::
-    When using the PyTorch Profiler, wall clock time will not not be representative of the true wall clock time.
+    When using the PyTorch Profiler, wall clock time will not be representative of the true wall clock time.
     This is due to forcing profiled operations to be measured synchronously, when many CUDA ops happen asynchronously.
     It is recommended to use this Profiler to find bottlenecks/breakdowns, however for end to end wall clock time use
     the ``SimpleProfiler``.
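
The note fixed above draws a real distinction between the two profilers. As a minimal sketch of how each is selected on a ``Trainer`` (model and data omitted; both string shortcuts are part of the Lightning API):

    from lightning.pytorch import Trainer

    # The PyTorch Profiler measures CUDA ops synchronously, so treat its
    # wall clock numbers as a bottleneck breakdown, not true elapsed time.
    trainer = Trainer(profiler="pytorch")

    # For representative end-to-end wall clock time, use the SimpleProfiler.
    trainer = Trainer(profiler="simple")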

src/lightning/fabric/strategies/deepspeed.py (+1 -1)
@@ -144,7 +144,7 @@ def __init__(
             nvme_path: Filesystem path for NVMe device for optimizer/parameter state offloading.

             optimizer_buffer_count: Number of buffers in buffer pool for optimizer state offloading
-                when ``offload_optimizer_device`` is set to to ``nvme``.
+                when ``offload_optimizer_device`` is set to ``nvme``.
                 This should be at least the number of states maintained per parameter by the optimizer.
                 For example, Adam optimizer has 4 states (parameter, gradient, momentum, and variance).

src/lightning/pytorch/core/module.py (+1 -1)
@@ -979,7 +979,7 @@ def configure_optimizers(self) -> OptimizerLRScheduler:
                # `scheduler.step()`. 1 corresponds to updating the learning
                # rate after every epoch/step.
                "frequency": 1,
-               # Metric to to monitor for schedulers like `ReduceLROnPlateau`
+               # Metric to monitor for schedulers like `ReduceLROnPlateau`
                "monitor": "val_loss",
                # If set to `True`, will enforce that the value specified 'monitor'
                # is available when the scheduler is updated, thus stopping
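
The comment fixed above lives in the example inside the ``configure_optimizers`` docstring. A minimal sketch of a module returning that scheduler configuration (class name and hyperparameters are illustrative):

    import torch
    from lightning.pytorch import LightningModule

    class LitModel(LightningModule):
        def configure_optimizers(self):
            optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
            scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
            return {
                "optimizer": optimizer,
                "lr_scheduler": {
                    "scheduler": scheduler,
                    # Update the learning rate once per epoch.
                    "interval": "epoch",
                    "frequency": 1,
                    # Metric to monitor for schedulers like `ReduceLROnPlateau`;
                    # it must be logged, e.g. self.log("val_loss", loss).
                    "monitor": "val_loss",
                },
            }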

src/lightning/pytorch/strategies/deepspeed.py (+1 -1)
@@ -166,7 +166,7 @@ def __init__(
             nvme_path: Filesystem path for NVMe device for optimizer/parameter state offloading.

             optimizer_buffer_count: Number of buffers in buffer pool for optimizer state offloading
-                when ``offload_optimizer_device`` is set to to ``nvme``.
+                when ``offload_optimizer_device`` is set to ``nvme``.
                 This should be at least the number of states maintained per parameter by the optimizer.
                 For example, Adam optimizer has 4 states (parameter, gradient, momentum, and variance).
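
Both DeepSpeed strategy classes expose the ``optimizer_buffer_count`` argument whose docstring is corrected above. A minimal sketch of NVMe optimizer-state offloading through the PyTorch strategy (path and device counts are illustrative; Adam's four states motivate the buffer count):

    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import DeepSpeedStrategy

    strategy = DeepSpeedStrategy(
        stage=3,
        offload_optimizer=True,
        offload_optimizer_device="nvme",
        nvme_path="/local_nvme",
        # Adam keeps 4 states per parameter, so use at least 4 buffers.
        optimizer_buffer_count=4,
    )
    trainer = Trainer(accelerator="gpu", devices=1, strategy=strategy, precision="16-mixed")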
