Commit a1083fa

Merge branch 'master' into dependabot-github_actions-Lightning-AI-utilities-0.11.8
2 parents ea8ff6b + 0e1e14f commit a1083fa

File tree: 7 files changed (+9 -9 lines changed)

.pre-commit-config.yaml (+1 -1)

@@ -58,7 +58,7 @@ repos:
         #args: ["--write-changes"] # uncomment if you want to get automatic fixing
 
   - repo: https://github.com/PyCQA/docformatter
-    rev: v1.7.5
+    rev: 06907d0267368b49b9180eed423fae5697c1e909  # todo: fix for docformatter after last 1.7.5
     hooks:
       - id: docformatter
         additional_dependencies: [tomli]

docs/source-pytorch/accelerators/tpu_advanced.rst (+2 -2)

@@ -52,7 +52,7 @@ Example:
     model = WeightSharingModule()
     trainer = Trainer(max_epochs=1, accelerator="tpu")
 
-See `XLA Documentation <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks>`_
+See `XLA Documentation <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#xla-tensor-quirks>`_
 
 ----
 
@@ -61,4 +61,4 @@ XLA
 XLA is the library that interfaces PyTorch with the TPUs.
 For more information check out `XLA <https://github.com/pytorch/xla>`_.
 
-Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md>`_
+Guide for `troubleshooting XLA <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md>`_
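For context, the ``WeightSharingModule`` in this hunk is defined earlier on that docs page. A minimal sketch of such a module (assumed here for illustration, not necessarily the docs' exact code) ties one layer's weight to another's, which is the pattern the linked XLA tensor-quirks section warns about:

    import torch.nn as nn
    from lightning.pytorch import LightningModule

    class WeightSharingModule(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer_1 = nn.Linear(32, 10, bias=False)
            self.layer_2 = nn.Linear(10, 32, bias=False)
            self.layer_3 = nn.Linear(32, 10, bias=False)
            # Tie layer_3's weight to layer_1's; on TPU the tying must
            # survive the transfer to the XLA device (see the linked quirks).
            self.layer_3.weight = self.layer_1.weight

        def forward(self, x):
            return self.layer_3(self.layer_2(self.layer_1(x)))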

docs/source-pytorch/accelerators/tpu_basic.rst (+2 -2)

@@ -108,7 +108,7 @@ There are cases in which training on TPUs is slower when compared with GPUs, for
 - XLA Graph compilation during the initial steps `Reference <https://github.com/pytorch/xla/issues/2383#issuecomment-666519998>`_
 - Some tensor ops are not fully supported on TPU, or not supported at all. These operations will be performed on CPU (context switch).
 
-The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats>`_
+The official PyTorch XLA `performance guide <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#known-performance-caveats>`_
 has more detailed information on how PyTorch code can be optimized for TPU. In particular, the
-`metrics report <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
+`metrics report <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#get-a-metrics-report>`_ allows
 one to identify operations that lead to context switching.
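For context, the metrics report referenced in this hunk can be produced with the public ``torch_xla.debug.metrics`` module; a minimal sketch (run on a TPU host after a few training steps):

    import torch_xla.debug.metrics as met

    # Counters named ``aten::*`` mark ops that were not lowered to XLA and
    # therefore ran on the CPU, i.e. the context switches mentioned above.
    print(met.metrics_report())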

docs/source-pytorch/accelerators/tpu_faq.rst (+1 -1)

@@ -78,7 +78,7 @@ A lot of PyTorch operations aren't lowered to XLA, which could lead to significa
 These operations are moved to the CPU memory and evaluated, and then the results are transferred back to the XLA device(s).
 By using the `xla_debug` Strategy, users could create a metrics report to diagnose issues.
 
-The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#troubleshooting>`_):
+The report includes things like (`XLA Reference <https://github.com/pytorch/xla/blob/v2.5.0/TROUBLESHOOTING.md#troubleshooting>`_):
 
 * how many times we issue XLA compilations and time spent on issuing.
 * how many times we execute and time spent on execution
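For context, the `xla_debug` name in this hunk appears to correspond to the ``debug`` flag of Lightning's XLA strategy; a hedged sketch (flag name and behavior are assumptions, not confirmed by this diff; verify against your Lightning version):

    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import XLAStrategy

    # Assumed: debug=True sets PT_XLA_DEBUG so an XLA debug/metrics
    # report is emitted during training.
    trainer = Trainer(accelerator="tpu", strategy=XLAStrategy(debug=True))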

src/lightning/fabric/strategies/deepspeed.py (+1 -1)

@@ -598,7 +598,7 @@ def _initialize_engine(
     ) -> Tuple["DeepSpeedEngine", Optimizer]:
         """Initialize one model and one optimizer with an optional learning rate scheduler.
 
-        This calls :func:`deepspeed.initialize` internally.
+        This calls ``deepspeed.initialize`` internally.
 
         """
         import deepspeed
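For context, ``deepspeed.initialize`` returns a 4-tuple of ``(engine, optimizer, dataloader, lr_scheduler)``, from which this method keeps the engine and optimizer. A minimal sketch of a direct call, with a toy config assumed for illustration:

    import deepspeed
    import torch

    model = torch.nn.Linear(32, 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    ds_config = {"train_batch_size": 8}  # toy config; real ones set ZeRO stage, fp16, etc.

    # Unused slots of the returned 4-tuple are None.
    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, optimizer=optimizer, config=ds_config
    )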

src/lightning/fabric/strategies/xla_fsdp.py (+1 -1)

@@ -56,7 +56,7 @@ class XLAFSDPStrategy(ParallelStrategy, _Sharded):
 
     .. warning:: This is an :ref:`experimental <versioning:Experimental API>` feature.
 
-    For more information check out https://github.com/pytorch/xla/blob/master/docs/fsdp.md
+    For more information check out https://github.com/pytorch/xla/blob/v2.5.0/docs/fsdp.md
 
     Args:
         auto_wrap_policy: Same as ``auto_wrap_policy`` parameter in
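For context, a hedged sketch of using this experimental strategy with Fabric; the ``size_based_auto_wrap_policy`` import assumes it is available from ``torch_xla.distributed.fsdp.wrap`` (verify against your torch_xla version):

    from functools import partial

    from lightning.fabric import Fabric
    from lightning.fabric.strategies import XLAFSDPStrategy
    from torch_xla.distributed.fsdp.wrap import size_based_auto_wrap_policy

    # Wrap each submodule holding >= 100M parameters in its own FSDP unit.
    policy = partial(size_based_auto_wrap_policy, min_num_params=100_000_000)
    fabric = Fabric(accelerator="tpu", strategy=XLAFSDPStrategy(auto_wrap_policy=policy))
    fabric.launch()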

src/lightning/pytorch/strategies/deepspeed.py (+1 -1)

@@ -414,7 +414,7 @@ def _setup_model_and_optimizer(
     ) -> Tuple["deepspeed.DeepSpeedEngine", Optimizer]:
         """Initialize one model and one optimizer with an optional learning rate scheduler.
 
-        This calls :func:`deepspeed.initialize` internally.
+        This calls ``deepspeed.initialize`` internally.
 
         """
         import deepspeed
