Literate.jl slide source (generates the notebook below):

```diff
@@ -114,7 +114,2 @@
 md"""
-7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
-"""
-
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
-md"""
-8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
+7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
```
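What the new step 7 looks like in the time loop — a minimal sketch assuming placeholder kernel and array names (`compute_flux!`, `qDx`, `qDy` are illustrative, not the course's definitive code):

```julia
# Requires `using ParallelStencil, ImplicitGlobalGrid`; `b_width` is the
# boundary width defined earlier in the script. @hide_communication computes
# the boundary cells first, then overlaps the inner-point computation with
# the MPI halo exchange triggered by update_halo!.
@hide_communication b_width begin
    @parallel compute_flux!(qDx, qDy, Pf, T, k_ηf, αρg, θ_dτ_D, dx, dy)  # placeholder signature
    update_halo!(qDx, qDy)  # ImplicitGlobalGrid.jl halo exchange of the flux arrays
end
```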
```diff
@@ -129,2 +124,2 @@ #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
+8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
```
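A sketch of the renumbered step 8 under assumed names (`update_T!`, `bc_x!` are placeholders); the ⚠️ in the slide means only kernel launches followed by a final `update_halo!` may appear inside the block:

```julia
# Inside the time loop. Only @parallel kernel calls followed by a final
# update_halo! are allowed within @hide_communication — hence the warning.
@hide_communication b_width begin
    @parallel update_T!(T, dTdt, qTx, qTy, dt, dx, dy)  # placeholder signature
    @parallel (1:size(T, 2)) bc_x!(T)                   # boundary-condition kernel on the x-faces
    update_halo!(T)                                     # halo exchange of the updated temperature
end
```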
```diff
@@ -141,2 +136,2 @@ #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
+9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`).
```
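Illustratively, step 9 touches two places in the loop; the names `r_Pf`, `ϕ` and the exact timestep rule below are placeholders, not the script's definitive formulas:

```julia
# Global reductions keep all MPI ranks in lockstep: every rank obtains the
# same timestep and the same convergence error.
dt  = ϕ * min(dx / max_g(abs.(qDx)), dy / max_g(abs.(qDy))) / 2.1  # illustrative CFL-type rule
err = max_g(abs.(r_Pf))  # global residual norm instead of the rank-local maximum(abs.(r_Pf))
```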
```diff
@@ -153,7 +148,7 @@ #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
+10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
 """
 
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
 md"""
-12. Update the visualisation and output saving part
+11. Update the visualisation and output saving part
```
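Step 10 as a sketch (assuming the usual `iter` and `err` loop variables of the course scripts):

```julia
using Printf

# Only rank 0 reports; normalising the iteration count by the global grid
# size nx_g() keeps the metric independent of the number of MPI processes.
if me == 0
    @printf("iter/nx = %.1f, err = %1.3e\n", iter / nx_g(), err)
end
```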
slide-notebooks/notebooks/l9_1-projects.ipynb (+7 −19):
```diff
@@ -118,7 +118,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"3. Also add global maximum computation using MPI reduction function"
+"3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`"
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -220,7 +220,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes."
+"7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -229,18 +229,6 @@
 }
 }
 },
-{
-"cell_type": "markdown",
-"source": [
-"8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
-],
-"metadata": {
-"name": "A slide ",
-"slideshow": {
-"slide_type": "fragment"
-}
-}
-},
 {
 "outputs": [],
 "cell_type": "code",
```
```diff
@@ -256,7 +244,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
+"8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -282,7 +270,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes."
+"9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`)."
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -308,7 +296,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
+"10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -320,7 +308,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"12. Update the visualisation and output saving part"
+"11. Update the visualisation and output saving part"
 ],
 "metadata": {
 "name": "A slide ",
```
```diff
@@ -350,7 +338,7 @@
 {
 "cell_type": "markdown",
 "source": [
-"13. Finalise the global grid before returning from the main function"
+"12. Finalise the global grid before returning from the main function"
 ],
 "metadata": {
 "name": "A slide ",
```