
Commit c321aa8

Merge pull request #23: Fixup redundant content
2 parents: 7bd1a86 + c92441d

File tree

6 files changed: +30 −52 lines


slide-notebooks/l9_1-projects.jl (+7 −12)

@@ -66,7 +66,7 @@ import MPI
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-3. Also add global maximum computation using MPI reduction function
+3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`
 """
 max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
 
@@ -112,12 +112,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
-"""
-
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
-md"""
-8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
+7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
 """
 @hide_communication b_width begin
     @parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
@@ -127,7 +122,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
+8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
 """
 @hide_communication b_width begin
     @parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
@@ -139,7 +134,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
+9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`).
 """
 ## time step
 dt = if it == 1
@@ -151,12 +146,12 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
+10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
 """
 
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
 md"""
-12. Update the visualisation and output saving part
+11. Update the visualisation and output saving part
 """
 ## visualisation
 if do_viz && (it % nvis == 0)
@@ -172,7 +167,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-13. Finalise the global grid before returning from the main function
+12. Finalise the global grid before returning from the main function
 """
 finalize_global_grid()
 return
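The `max_g` one-liner in the diff above computes a local `maximum(A)` on each MPI process and then combines the per-rank results with an `Allreduce` using the `MAX` operation, so every rank ends up holding the global maximum. Here is a toy, MPI-free Python sketch of that pattern (the `allreduce_max` helper and the per-rank lists are hypothetical stand-ins for illustration; the real code uses `MPI.Allreduce` from MPI.jl):

```python
# Toy simulation of the max_g pattern: each "rank" computes its local
# maximum, then an Allreduce with the MAX op gives every rank the global one.

def allreduce_max(local_values):
    """Simulate MPI.Allreduce with MPI.MAX: every rank receives the
    global maximum of all per-rank contributions."""
    global_max = max(local_values)
    return [global_max for _ in local_values]  # one identical result per rank

# Three hypothetical ranks, each holding a local array
rank_arrays = [[1.0, 4.0], [7.5, 2.0], [3.3, 6.1]]
local_maxima = [max(a) for a in rank_arrays]  # the `maximum(A)` step
results = allreduce_max(local_maxima)         # the Allreduce(MAX) step
print(results)  # every rank sees 7.5
```

The key property, mirrored by the real `MPI.Allreduce`, is that the result is identical on all ranks, which is what makes it safe to use in the `dt` definition and the error check on every process.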

slide-notebooks/notebooks/l1_1-admin.ipynb (+3 −3)

@@ -187,11 +187,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },

slide-notebooks/notebooks/l1_2-why-gpu.ipynb (+3 −3)

@@ -331,11 +331,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },

slide-notebooks/notebooks/l1_3-julia-intro.ipynb (+3 −3)

@@ -1533,11 +1533,11 @@
   "file_extension": ".jl",
   "mimetype": "application/julia",
   "name": "julia",
-  "version": "1.10.5"
+  "version": "1.11.1"
  },
  "kernelspec": {
-  "name": "julia-1.10",
-  "display_name": "Julia 1.10.5",
+  "name": "julia-1.11",
+  "display_name": "Julia 1.11.1",
   "language": "julia"
  }
 },

slide-notebooks/notebooks/l9_1-projects.ipynb (+7 −19)

@@ -118,7 +118,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "3. Also add global maximum computation using MPI reduction function"
+  "3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`"
  ],
  "metadata": {
   "name": "A slide ",
@@ -220,7 +220,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes."
+  "7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
  ],
  "metadata": {
   "name": "A slide ",
@@ -229,18 +229,6 @@
   }
  }
 },
-{
- "cell_type": "markdown",
- "source": [
-  "8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)"
- ],
- "metadata": {
-  "name": "A slide ",
-  "slideshow": {
-   "slide_type": "fragment"
-  }
- }
-},
 {
  "outputs": [],
  "cell_type": "code",
@@ -256,7 +244,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
+  "8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)"
  ],
  "metadata": {
   "name": "A slide ",
@@ -282,7 +270,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes."
+  "9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`)."
  ],
  "metadata": {
   "name": "A slide ",
@@ -308,7 +296,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
+  "10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points."
  ],
  "metadata": {
   "name": "A slide ",
@@ -320,7 +308,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "12. Update the visualisation and output saving part"
+  "11. Update the visualisation and output saving part"
  ],
  "metadata": {
   "name": "A slide ",
@@ -350,7 +338,7 @@
 {
  "cell_type": "markdown",
  "source": [
-  "13. Finalise the global grid before returning from the main function"
+  "12. Finalise the global grid before returning from the main function"
  ],
  "metadata": {
   "name": "A slide ",

website/_literate/l9_1-projects_web.jl (+7 −12)

@@ -66,7 +66,7 @@ import MPI
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-3. Also add global maximum computation using MPI reduction function
+3. Further, add global maximum computation using MPI reduction function to be used instead of `maximum()`
 """
 max_g(A) = (max_l = maximum(A); MPI.Allreduce(max_l, MPI.MAX, MPI.COMM_WORLD))
 
@@ -112,12 +112,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-7. Use the `max_g` function in the timestep `dt` definition (instead of `maximum`) as one now needs to gather the global maximum among all MPI processes.
-"""
-
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
-md"""
-8. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
+7. Moving to the time loop, add halo update function `update_halo!` after the kernel that computes the fluid fluxes. You can additionally wrap it in the `@hide_communication` block to enable communication/computation overlap (using `b_width` defined above)
 """
 @hide_communication b_width begin
     @parallel compute_Dflux!(qDx, qDy, qDz, Pf, T, k_ηf, _dx, _dy, _dz, αρg, _1_θ_dτ_D)
@@ -127,7 +122,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-9. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
+8. Apply a similar step to the temperature update, where you can also include boundary condition computation as following (⚠️ no other construct is currently allowed)
 """
 @hide_communication b_width begin
     @parallel update_T!(T, qTx, qTy, qTz, dTdt, _dx, _dy, _dz, _1_dt_β_dτ_T)
@@ -139,7 +134,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-10. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes.
+9. Use now the `max_g` function instead of `maximum` to collect the global maximum among all local arrays spanning all MPI processes. Use it in the timestep `dt` definition and in the error calculation (instead of `maximum`).
 """
 ## time step
 dt = if it == 1
@@ -151,12 +146,12 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-11. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
+10. Make sure all printing statements are only executed by `me==0` in order to avoid each MPI process to print to screen, and use `nx_g()` instead of local `nx` in the printed statements when assessing the iteration per number of grid points.
 """
 
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "fragment"}}
 md"""
-12. Update the visualisation and output saving part
+11. Update the visualisation and output saving part
 """
 ## visualisation
 if do_viz && (it % nvis == 0)
@@ -172,7 +167,7 @@ end
 #src #########################################################################
 #nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
 md"""
-13. Finalise the global grid before returning from the main function
+12. Finalise the global grid before returning from the main function
 """
 finalize_global_grid()
 return
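The `update_halo!` calls that steps 7 and 8 of the diff add exchange ghost (halo) cells between neighbouring subdomains, so that each local stencil computation sees up-to-date boundary values from its neighbours. A minimal, MPI-free Python sketch of the idea (the `update_halo` function and the 1D subdomain lists are hypothetical stand-ins for the real halo exchange provided by ImplicitGlobalGrid):

```python
# Toy 1D halo update: each subdomain copies its neighbours' adjacent
# interior cells into its own ghost cells (index 0 and index -1).

def update_halo(subdomains):
    """Fill the ghost cells of each 1D subdomain with the adjacent
    interior values of the neighbouring subdomain, in place."""
    for i, sub in enumerate(subdomains):
        if i > 0:                          # a left neighbour exists
            sub[0] = subdomains[i - 1][-2]
        if i < len(subdomains) - 1:        # a right neighbour exists
            sub[-1] = subdomains[i + 1][1]
    return subdomains

# Two hypothetical ranks; interior cells hold data, ghost cells start at 0
a = [0, 1, 2, 3, 0]
b = [0, 4, 5, 6, 0]
update_halo([a, b])
print(a, b)  # a's right ghost becomes 4, b's left ghost becomes 3
```

In the real code this copy is an MPI message between processes, which is why wrapping it in `@hide_communication` pays off: the interior computation can proceed while the boundary exchange is in flight.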
