
Commit e950bb4

Remove the Graphcore IPU integration (#19405)

awaelchli and carmocca authored
Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>

1 parent 8d4768f commit e950bb4


42 files changed: +125 -591 lines

.github/workflows/README.md (-1)

@@ -27,7 +27,6 @@ Brief description of all our automation tools used for boosting development perf
 
   - GPU: 2 x NVIDIA RTX 3090
   - TPU: [Google TPU v4-8](https://cloud.google.com/tpu/docs)
-  - IPU: [Colossus MK1 IPU](https://www.graphcore.ai/products/ipu)
 
 - To check which versions of Python or PyTorch are used for testing in our CI, see the corresponding workflow files or checkgroup config file at [`.github/checkgroup.yml`](../checkgroup.yml).

.gitignore (-2)

@@ -22,9 +22,7 @@ docs/source-pytorch/notebooks
 docs/source-pytorch/_static/images/course_UvA-DL
 docs/source-pytorch/_static/images/lightning_examples
 docs/source-pytorch/_static/fetched-s3-assets
-docs/source-pytorch/_static/images/ipu/
 docs/source-pytorch/integrations/hpu
-docs/source-pytorch/integrations/ipu
 
 docs/source-fabric/*/generated

docs/source-app/quickstart.rst (-1)

@@ -53,7 +53,6 @@ And that's it!
 
     GPU available: True (mps), used: False
     TPU available: False, using: 0 TPU cores
-    IPU available: False, using: 0 IPUs
 
     | Name | Type | Params | In sizes | Out sizes
     ------------------------------------------------------------------

docs/source-pytorch/advanced/speed.rst (+1, -1)

@@ -20,7 +20,7 @@ Training on Accelerators
 
 **Use when:** Whenever possible!
 
-With Lightning, running on GPUs, TPUs, IPUs on multiple nodes is a simple switch of a flag.
+With Lightning, running on GPUs, TPUs, HPUs on multiple nodes is a simple switch of a flag.
 
 GPU Training
 ============
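To make the "switch of a flag" concrete, here is a minimal sketch; the accelerator strings are standard Lightning 2.x values, and `accelerator="hpu"` assumes the external Habana plugin is installed:

```python
from lightning.pytorch import Trainer

# The same training script targets different hardware via one flag.
trainer = Trainer(accelerator="gpu", devices=2)
trainer = Trainer(accelerator="tpu", devices=8)
trainer = Trainer(accelerator="hpu", devices=8)  # assumes lightning-habana is installed

# "auto" picks whatever accelerator the machine exposes.
trainer = Trainer(accelerator="auto", devices="auto")
```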

docs/source-pytorch/common/index.rst (-8)

@@ -17,7 +17,6 @@
     ../advanced/model_parallel
     Train on single or multiple GPUs <../accelerators/gpu>
     Train on single or multiple HPUs <../integrations/hpu/index>
-    Train on single or multiple IPUs <../integrations/ipu/index>
     Train on single or multiple TPUs <../accelerators/tpu>
     Train on MPS <../accelerators/mps>
     Use a pretrained model <../advanced/pretrained>

@@ -168,13 +167,6 @@ How-to Guides
     :col_css: col-md-4
     :height: 180
 
-.. displayitem::
-    :header: Train on single or multiple IPUs
-    :description: Train models faster with IPU accelerators
-    :button_link: ../integrations/ipu/index.html
-    :col_css: col-md-4
-    :height: 180
-
 .. displayitem::
     :header: Train on single or multiple TPUs
     :description: TTrain models faster with TPU accelerators

docs/source-pytorch/common/precision_basic.rst (+1, -6)

@@ -103,31 +103,26 @@ Precision support by accelerator
 ********************************
 
 .. list-table:: Precision with Accelerators
-   :widths: 20 20 20 20 20
+   :widths: 20 20 20 20
    :header-rows: 1
 
    * - Precision
      - CPU
      - GPU
      - TPU
-     - IPU
    * - 16 Mixed
      - No
      - Yes
      - No
-     - Yes
    * - BFloat16 Mixed
      - Yes
      - Yes
      - Yes
-     - No
    * - 32 True
      - Yes
      - Yes
      - Yes
-     - Yes
    * - 64 True
      - Yes
      - Yes
      - No
-     - No
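As a usage note, this support table maps onto the Trainer's ``precision`` flag; a minimal sketch using the standard Lightning 2.x precision strings, with accelerator pairings chosen per the table:

```python
from lightning.pytorch import Trainer

# 16-bit mixed precision: supported on GPU per the table above.
trainer = Trainer(accelerator="gpu", precision="16-mixed")

# BFloat16 mixed precision: works on CPU, GPU, and TPU.
trainer = Trainer(accelerator="cpu", precision="bf16-mixed")

# Full 64-bit precision: CPU and GPU only.
trainer = Trainer(accelerator="gpu", precision="64-true")
```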

docs/source-pytorch/common/trainer.rst (+1, -4)

@@ -175,7 +175,7 @@ Trainer flags
 accelerator
 ^^^^^^^^^^^
 
-Supports passing different accelerator types (``"cpu", "gpu", "tpu", "ipu", "auto"``)
+Supports passing different accelerator types (``"cpu", "gpu", "tpu", "hpu", "auto"``)
 as well as custom accelerator instances.
 
 .. code-block:: python

@@ -393,9 +393,6 @@ Number of devices to train on (``int``), which devices to train on (``list`` or
     # Training with TPU Accelerator using 8 tpu cores
     trainer = Trainer(devices="auto", accelerator="tpu")
 
-    # Training with IPU Accelerator using 4 ipus
-    trainer = Trainer(devices="auto", accelerator="ipu")
-
 .. note::
 
     If the ``devices`` flag is not defined, it will assume ``devices`` to be ``"auto"`` and fetch the ``auto_device_count``
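Since the quoted doc says the flag also accepts accelerator instances, a minimal sketch of both forms (``CUDAAccelerator`` is the Lightning 2.x class behind ``"gpu"`` on NVIDIA hardware):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.accelerators import CUDAAccelerator

# By name: Lightning resolves the string to a registered accelerator.
trainer = Trainer(accelerator="gpu", devices=2)

# By instance: pass a constructed accelerator object directly.
trainer = Trainer(accelerator=CUDAAccelerator(), devices=2)
```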

docs/source-pytorch/common_usecases.rst (-7)

@@ -133,13 +133,6 @@ Customize and extend Lightning for things like custom hardware or distributed st
     :button_link: integrations/hpu/index.html
     :height: 100
 
-.. displayitem::
-    :header: Train on single or multiple IPUs
-    :description: Train models faster with IPUs.
-    :col_css: col-md-12
-    :button_link: integrations/ipu/index.html
-    :height: 100
-
 .. displayitem::
     :header: Train on single or multiple TPUs
     :description: Train models faster with TPUs.

docs/source-pytorch/conf.py (-13)

@@ -94,18 +94,6 @@ def _load_py_module(name: str, location: str) -> ModuleType:
     target_dir="docs/source-pytorch/integrations/hpu",
     checkout="refs/tags/1.3.0",
 )
-assist_local.AssistantCLI.pull_docs_files(
-    gh_user_repo="Lightning-AI/lightning-Graphcore",
-    target_dir="docs/source-pytorch/integrations/ipu",
-    checkout="refs/tags/v0.1.0",
-    as_orphan=True,  # todo: this can be dropped after new IPU release
-)
-# the IPU also need one image
-URL_RAW_DOCS_GRAPHCORE = "https://raw.githubusercontent.com/Lightning-AI/lightning-Graphcore/v0.1.0/docs/source"
-for img in ["_static/images/ipu/profiler.png"]:
-    img_ = os.path.join(_PATH_HERE, "integrations", "ipu", img)
-    os.makedirs(os.path.dirname(img_), exist_ok=True)
-    urllib.request.urlretrieve(f"{URL_RAW_DOCS_GRAPHCORE}/{img}", img_)
 
 # Copy strategies docs as single pages
 assist_local.AssistantCLI.pull_docs_files(

@@ -340,7 +328,6 @@ def _load_py_module(name: str, location: str) -> ModuleType:
     "numpy": ("https://numpy.org/doc/stable/", None),
     "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
     "torchmetrics": ("https://torchmetrics.readthedocs.io/en/stable/", None),
-    "graphcore": ("https://docs.graphcore.ai/en/latest/", None),
     "lightning_habana": ("https://lightning-ai.github.io/lightning-Habana/", None),
     "tensorboardX": ("https://tensorboardx.readthedocs.io/en/stable/", None),
     # needed for referencing App from lightning scope
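The second hunk trims what is evidently Sphinx's standard ``intersphinx_mapping``: each entry maps a name to ``(base_url, inventory)``, where ``None`` tells Sphinx to fetch ``objects.inv`` from the base URL. A minimal sketch of such a mapping in a generic ``conf.py``, using entries taken from the diff above:

```python
# Sphinx conf.py: intersphinx resolves cross-references such as
# :class:`numpy.ndarray` against external documentation sites.
extensions = ["sphinx.ext.intersphinx"]

intersphinx_mapping = {
    # None means "download <base_url>/objects.inv for the inventory".
    "numpy": ("https://numpy.org/doc/stable/", None),
    "torchmetrics": ("https://torchmetrics.readthedocs.io/en/stable/", None),
}
```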

docs/source-pytorch/expertise_levels.rst (+11, -19)

@@ -190,34 +190,26 @@ Configure all aspects of Lightning for advanced usecases.
     :tag: advanced
 
 .. displayitem::
-    :header: Level 18: Explore IPUs
-    :description: Explore Intelligence Processing Unit (IPU) for model scaling.
-    :col_css: col-md-6
-    :button_link: levels/advanced_level_19.html
-    :height: 150
-    :tag: advanced
-
-.. displayitem::
-    :header: Level 19: Explore HPUs
+    :header: Level 18: Explore HPUs
     :description: Explore Havana Gaudi Processing Unit (HPU) for model scaling.
     :col_css: col-md-6
-    :button_link: levels/advanced_level_20.html
+    :button_link: levels/advanced_level_19.html
     :height: 150
     :tag: advanced
 
 .. displayitem::
-    :header: Level 20: Master TPUs
+    :header: Level 19: Master TPUs
     :description: Master TPUs and run on cloud TPUs.
     :col_css: col-md-6
-    :button_link: levels/advanced_level_21.html
+    :button_link: levels/advanced_level_20.html
     :height: 150
     :tag: advanced
 
 .. displayitem::
-    :header: Level 21: Train models with billions of parameters
+    :header: Level 20: Train models with billions of parameters
     :description: Scale GPU training to models with billions of parameters
     :col_css: col-md-6
-    :button_link: levels/advanced_level_22.html
+    :button_link: levels/advanced_level_21.html
     :height: 150
     :tag: advanced

@@ -240,34 +232,34 @@ Customize and extend Lightning for things like custom hardware or distributed st
 .. Add callout items below this line
 
 .. displayitem::
-    :header: Level 22: Extend the Lightning CLI
+    :header: Level 21: Extend the Lightning CLI
     :description: Extend the functionality of the Lightning CLI.
     :col_css: col-md-6
     :button_link: levels/expert_level_23.html
     :height: 150
     :tag: expert
 
 .. displayitem::
-    :header: Level 23: Integrate a custom cluster
+    :header: Level 22: Integrate a custom cluster
     :description: Integrate a custom cluster into Lightning.
     :col_css: col-md-6
     :button_link: levels/expert_level_24.html
     :height: 150
     :tag: expert
 
 .. displayitem::
-    :header: Level 24: Make your own profiler
+    :header: Level 23: Make your own profiler
     :description: Make your own profiler.
     :col_css: col-md-6
     :button_link: tuning/profiler_expert.html
     :height: 150
     :tag: expert
 
 .. displayitem::
-    :header: Level 25: Add a new accelerator or Strategy
+    :header: Level 24: Add a new accelerator or Strategy
     :description: Integrate a new accelerator or distributed strategy.
     :col_css: col-md-6
-    :button_link: levels/expert_level_27.html
+    :button_link: levels/expert_level_25.html
     :height: 150
     :tag: expert

docs/source-pytorch/extensions/accelerator.rst (+1, -2)

@@ -4,13 +4,12 @@
 Accelerator
 ###########
 
-The Accelerator connects a Lightning Trainer to arbitrary hardware (CPUs, GPUs, TPUs, IPUs, MPS, ...).
+The Accelerator connects a Lightning Trainer to arbitrary hardware (CPUs, GPUs, TPUs, HPUs, MPS, ...).
 Currently there are accelerators for:
 
 - CPU
 - :doc:`GPU <../accelerators/gpu>`
 - :doc:`TPU <../accelerators/tpu>`
-- :doc:`IPU <../integrations/ipu/index>`
 - :doc:`HPU <../integrations/hpu/index>`
 - :doc:`MPS <../accelerators/mps>`
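Since this page documents the extension point itself, a rough sketch of a custom accelerator may help. The hook names below follow the Lightning 2.x ``Accelerator`` base class as commonly documented, but treat the exact set of required overrides as an assumption and check your installed version:

```python
import torch
from lightning.pytorch.accelerators import Accelerator


class MyAccelerator(Accelerator):
    """Hypothetical accelerator for a custom device backend (illustrative only)."""

    def setup_device(self, device: torch.device) -> None:
        # Prepare the device before training, e.g. set the active device index.
        pass

    def teardown(self) -> None:
        # Release per-device resources after training.
        pass

    @staticmethod
    def parse_devices(devices):
        # Normalize the Trainer's `devices` flag (int, list, or "auto").
        return devices

    @staticmethod
    def get_parallel_devices(devices):
        # Map the parsed flag to concrete torch.device objects.
        return [torch.device("cpu") for _ in range(devices)]

    @staticmethod
    def auto_device_count() -> int:
        # How many devices `devices="auto"` should select.
        return 1

    @staticmethod
    def is_available() -> bool:
        # Whether this backend can run on the current machine.
        return True
```

An instance would then be passed to the Trainer as ``Trainer(accelerator=MyAccelerator(), devices=1)``.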

docs/source-pytorch/extensions/strategy.rst (-6)

@@ -57,9 +57,6 @@ Here are some examples:
     # Training with the DDP Spawn strategy on 8 TPU cores
     trainer = Trainer(strategy="ddp_spawn", accelerator="tpu", devices=8)
 
-    # Training with the default IPU strategy on 8 IPUs
-    trainer = Trainer(accelerator="ipu", devices=8)
-
 The below table lists all relevant strategies available in Lightning with their corresponding short-hand name:
 
 .. list-table:: Strategy Classes and Nicknames

@@ -87,9 +84,6 @@ The below table lists all relevant strategies available in Lightning with their
     * - hpu_single
       - ``SingleHPUStrategy``
       - Strategy for training on a single HPU device. :doc:`Learn more. <../integrations/hpu/index>`
-    * - ipu_strategy
-      - ``IPUStrategy``
-      - Plugin for training on IPU devices. :doc:`Learn more. <../integrations/ipu/index>`
     * - xla
       - :class:`~lightning.pytorch.strategies.XLAStrategy`
       - Strategy for training on multiple TPU devices using the :func:`torch_xla.distributed.xla_multiprocessing.spawn` method. :doc:`Learn more. <../accelerators/tpu>`
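The shorthand names in this table plug directly into the Trainer; a minimal sketch pairing strategy and accelerator flags, using string values that appear in the diff above (``DDPStrategy`` being the class form of ``"ddp"``):

```python
from lightning.pytorch import Trainer
from lightning.pytorch.strategies import DDPStrategy

# Strategy and accelerator are chosen independently.
trainer = Trainer(strategy="ddp", accelerator="gpu", devices=4)
trainer = Trainer(strategy="ddp_spawn", accelerator="tpu", devices=8)

# Equivalent class-based form of the first line, useful when the
# strategy needs constructor arguments.
trainer = Trainer(strategy=DDPStrategy(), accelerator="gpu", devices=4)
```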

docs/source-pytorch/glossary/index.rst (-8)

@@ -20,7 +20,6 @@
     Half precision <../common/precision>
     HPU <../integrations/hpu/index>
     Inference <../deploy/production_intermediate>
-    IPU <../integrations/ipu/index>
     Lightning CLI <../cli/lightning_cli>
     LightningDataModule <../data/datamodule>
     LightningModule <../common/lightning_module>

@@ -177,13 +176,6 @@ Glossary
     :button_link: ../deploy/production_intermediate.html
     :height: 100
 
-.. displayitem::
-    :header: IPU
-    :description: Graphcore Intelligence Processing Unit for faster training
-    :col_css: col-md-12
-    :button_link: ../integrations/ipu/index.html
-    :height: 100
-
 .. displayitem::
     :header: Lightning CLI
     :description: A Command-line Interface (CLI) to interact with Lightning code via a terminal

docs/source-pytorch/integrations/ipu/index.rst (-48)

This file was deleted.

docs/source-pytorch/levels/advanced.rst (+6, -14)

@@ -46,34 +46,26 @@ Configure all aspects of Lightning for advanced usecases.
     :tag: advanced
 
 .. displayitem::
-    :header: Level 18: Explore IPUs
-    :description: Explore Intelligence Processing Unit (IPU) for model scaling.
-    :col_css: col-md-6
-    :button_link: advanced_level_19.html
-    :height: 150
-    :tag: advanced
-
-.. displayitem::
-    :header: Level 19: Explore HPUs
+    :header: Level 18: Explore HPUs
     :description: Explore Habana Gaudi Processing Unit (HPU) for model scaling.
     :col_css: col-md-6
-    :button_link: advanced_level_20.html
+    :button_link: advanced_level_19.html
     :height: 150
     :tag: advanced
 
 .. displayitem::
-    :header: Level 20: Master TPUs
+    :header: Level 19: Master TPUs
     :description: Master TPUs and run on cloud TPUs.
     :col_css: col-md-6
-    :button_link: advanced_level_21.html
+    :button_link: advanced_level_20.html
     :height: 150
     :tag: advanced
 
 .. displayitem::
-    :header: Level 21: Train models with billions of parameters
+    :header: Level 20: Train models with billions of parameters
     :description: Scale GPU training to models with billions of parameters
     :col_css: col-md-6
-    :button_link: advanced_level_22.html
+    :button_link: advanced_level_21.html
     :height: 150
     :tag: advanced

0 commit comments