
Commit 8ee5253

Support ParallelCluster 3.9.2 and 3.9.3. Fix ansible playbooks. (#241)
* Replace include with include_tasks (see the sketch below). Resolves #238
* Resolve ansible-lint warnings and errors.
* Use snake case instead of camel case. Ansible naming conventions recommend only lower-case alphanumeric variable names with underscores.
* Support ParallelCluster 3.9.2. Resolves #236
* Add support for ParallelCluster 3.9.3. Resolves #240
* Fix filename in documentation: update the file where the Licenses are configured if you aren't using the slurmdb. Resolves #239
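A minimal sketch of the include → include_tasks replacement described above; the task name and file name here are hypothetical, not taken from this commit:

# Before: the bare `include` action is deprecated and flagged by ansible-lint.
# - include: configure_instance.yml

# After: `include_tasks` is the supported dynamic replacement.
- name: Run the instance configuration tasks
  include_tasks: configure_instance.yml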
1 parent 7255024 commit 8ee5253

File tree

38 files changed: +425 −399 lines changed

.gitignore (+3)

@@ -8,3 +8,6 @@ source/resources/parallel-cluster/config/build-files/*/*/parallelcluster-*.yml
 security_scan/bandit-env
 security_scan/bandit.log
 security_scan/cfn_nag.log
+security_scan/ScoutSuite
+
+__pycache__

Makefile (+3)

@@ -22,6 +22,9 @@ security_scan:
 test:
 	pytest -x -v tests
 
+ansible-lint:
+	source setup.sh; pip install ansible ansible-lint; ansible-lint --nocolor source/resources/playbooks
+
 clean:
 	git clean -d -f -x
 	# -d: Recurse into directories
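With this target in place, the playbooks can be linted locally with `make ansible-lint`; it sources setup.sh first, which is assumed to activate the repo's Python environment before installing ansible and ansible-lint.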

docs/deployment-prerequisites.md (+2 −2)

@@ -362,9 +362,9 @@ then jobs will stay pending in the queue until a job completes and frees up a li
 Combined with the fairshare algorithm, this can prevent users from monopolizing licenses and preventing others from
 being able to run their jobs.
 
-Licenses are configured using the [slurm/Licenses](https://github.com/aws-samples/aws-eda-slurm-cluster/blob/main/source/cdk/config_schema.py#L569-L577) configuration variable.
+Licenses are configured using the [slurm/Licenses](../config#licenses) configuration variable.
 If you are using the Slurm database then these will be configured in the database.
-Otherwises they will be configured in **/opt/slurm/{{ClusterName}}/etc/slurm_licenses.conf**.
+Otherwise they will be configured in **/opt/slurm/{{ClusterName}}/etc/pcluster/custom_slurm_settings_include_file_slurm.conf**.
 
 The example configuration shows how the number of licenses can be configured.
 In this example, the cluster will manage 800 vcs licenses and 1 ansys license.
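For context, a hedged sketch of what that slurm/Licenses configuration might look like in the cluster config YAML; the exact keys are defined by config_schema.py, and the Count key and license names here are assumptions for illustration:

slurm:
  Licenses:
    vcs:          # license name as Slurm will track it (illustrative)
      Count: 800
    ansys:
      Count: 1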

source/cdk/cdk_slurm_stack.py (+33 −32)

@@ -1998,46 +1998,47 @@ def get_instance_template_vars(self, instance_role):
         # The keys are the environment and ansible variable names.
         cluster_name = self.config['slurm']['ClusterName']
         if instance_role.startswith('ParallelCluster'):
+            # Ansible template variables should be lowercase alphanumeric and underscores so use snake case instead of camel case.
             instance_template_vars = {
                 "AWS_DEFAULT_REGION": self.cluster_region,
-                "ClusterName": cluster_name,
-                "Region": self.cluster_region,
-                "TimeZone": self.config['TimeZone'],
+                "cluster_name": cluster_name,
+                "region": self.cluster_region,
+                "time_zone": self.config['TimeZone'],
             }
-            instance_template_vars['DefaultPartition'] = 'batch'
-            instance_template_vars['FileSystemMountPath'] = '/opt/slurm'
-            instance_template_vars['ParallelClusterVersion'] = self.config['slurm']['ParallelClusterConfig']['Version']
-            instance_template_vars['SlurmBaseDir'] = '/opt/slurm'
+            instance_template_vars['default_partition'] = 'batch'
+            instance_template_vars['file_system_mount_path'] = '/opt/slurm'
+            instance_template_vars['parallel_cluster_version'] = self.config['slurm']['ParallelClusterConfig']['Version']
+            instance_template_vars['slurm_base_dir'] = '/opt/slurm'
 
             if instance_role == 'ParallelClusterHeadNode':
-                instance_template_vars['PCSlurmVersion'] = get_PC_SLURM_VERSION(self.config)
+                instance_template_vars['pc_slurm_version'] = get_PC_SLURM_VERSION(self.config)
                 if 'Database' in self.config['slurm']['ParallelClusterConfig']:
-                    instance_template_vars['AccountingStorageHost'] = 'pcvluster-head-node'
+                    instance_template_vars['accounting_storage_host'] = 'pcvluster-head-node'
                 else:
-                    instance_template_vars['AccountingStorageHost'] = ''
-                instance_template_vars['Licenses'] = self.config['Licenses']
-                instance_template_vars['ParallelClusterMungeVersion'] = get_PARALLEL_CLUSTER_MUNGE_VERSION(self.config)
-                instance_template_vars['ParallelClusterPythonVersion'] = get_PARALLEL_CLUSTER_PYTHON_VERSION(self.config)
-                instance_template_vars['PrimaryController'] = True
-                instance_template_vars['SlurmctldPort'] = self.slurmctld_port
-                instance_template_vars['SlurmctldPortMin'] = self.slurmctld_port_min
-                instance_template_vars['SlurmctldPortMax'] = self.slurmctld_port_max
-                instance_template_vars['SlurmrestdJwtForRootParameter'] = self.jwt_token_for_root_ssm_parameter_name
-                instance_template_vars['SlurmrestdJwtForSlurmrestdParameter'] = self.jwt_token_for_slurmrestd_ssm_parameter_name
-                instance_template_vars['SlurmrestdPort'] = self.slurmrestd_port
-                instance_template_vars['SlurmrestdSocketDir'] = '/opt/slurm/com'
-                instance_template_vars['SlurmrestdSocket'] = f"{instance_template_vars['SlurmrestdSocketDir']}/slurmrestd.socket"
-                instance_template_vars['SlurmrestdUid'] = self.config['slurm']['SlurmCtl']['SlurmrestdUid']
+                    instance_template_vars['accounting_storage_host'] = ''
+                instance_template_vars['licenses'] = self.config['Licenses']
+                instance_template_vars['parallel_cluster_munge_version'] = get_PARALLEL_CLUSTER_MUNGE_VERSION(self.config)
+                instance_template_vars['parallel_cluster_python_version'] = get_PARALLEL_CLUSTER_PYTHON_VERSION(self.config)
+                instance_template_vars['primary_controller'] = True
+                instance_template_vars['slurmctld_port'] = self.slurmctld_port
+                instance_template_vars['slurmctld_port_min'] = self.slurmctld_port_min
+                instance_template_vars['slurmctld_port_max'] = self.slurmctld_port_max
+                instance_template_vars['slurmrestd_jwt_for_root_parameter'] = self.jwt_token_for_root_ssm_parameter_name
+                instance_template_vars['slurmrestd_jwt_for_slurmrestd_parameter'] = self.jwt_token_for_slurmrestd_ssm_parameter_name
+                instance_template_vars['slurmrestd_port'] = self.slurmrestd_port
+                instance_template_vars['slurmrestd_socket_dir'] = '/opt/slurm/com'
+                instance_template_vars['slurmrestd_socket'] = f"{instance_template_vars['slurmrestd_socket_dir']}/slurmrestd.socket"
+                instance_template_vars['slurmrestd_uid'] = self.config['slurm']['SlurmCtl']['SlurmrestdUid']
             elif instance_role == 'ParallelClusterSubmitter':
-                instance_template_vars['SlurmVersion'] = get_SLURM_VERSION(self.config)
-                instance_template_vars['ParallelClusterMungeVersion'] = get_PARALLEL_CLUSTER_MUNGE_VERSION(self.config)
-                instance_template_vars['SlurmrestdPort'] = self.slurmrestd_port
-                instance_template_vars['FileSystemMountPath'] = f'/opt/slurm/{cluster_name}'
-                instance_template_vars['SlurmBaseDir'] = f'/opt/slurm/{cluster_name}'
-                instance_template_vars['SubmitterSlurmBaseDir'] = f'/opt/slurm/{cluster_name}'
-                instance_template_vars['SlurmConfigDir'] = f'/opt/slurm/{cluster_name}/config'
-                instance_template_vars['SlurmEtcDir'] = f'/opt/slurm/{cluster_name}/etc'
-                instance_template_vars['ModulefilesBaseDir'] = f'/opt/slurm/{cluster_name}/config/modules/modulefiles'
+                instance_template_vars['slurm_version'] = get_SLURM_VERSION(self.config)
+                instance_template_vars['parallel_cluster_munge_version'] = get_PARALLEL_CLUSTER_MUNGE_VERSION(self.config)
+                instance_template_vars['slurmrestd_port'] = self.slurmrestd_port
+                instance_template_vars['file_system_mount_path'] = f'/opt/slurm/{cluster_name}'
+                instance_template_vars['slurm_base_dir'] = f'/opt/slurm/{cluster_name}'
+                instance_template_vars['submitter_slurm_base_dir'] = f'/opt/slurm/{cluster_name}'
+                instance_template_vars['slurm_config_dir'] = f'/opt/slurm/{cluster_name}/config'
+                instance_template_vars['slurm_etc_dir'] = f'/opt/slurm/{cluster_name}/etc'
+                instance_template_vars['modulefiles_base_dir'] = f'/opt/slurm/{cluster_name}/config/modules/modulefiles'
 
             elif instance_role == 'ParallelClusterComputeNode':
                 pass
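The rename matters because these values are consumed as Jinja2 variables in the Ansible playbooks and templates; a hedged sketch of the consumer side, with a hypothetical task name:

# Before the rename, templates referenced camelCase vars such as {{ SlurmBaseDir }},
# which ansible-lint's variable-naming check rejects. After the rename:
- name: Show where Slurm is installed (illustrative task)
  debug:
    msg: "Slurm base dir is {{ slurm_base_dir }} in cluster {{ cluster_name }}"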

source/cdk/config_schema.py (+16)

@@ -75,6 +75,11 @@
 # * Upgrade Pmix from 4.2.6 to 4.2.9.
 # 3.9.1:
 # * Bug fixes
+# 3.9.2:
+# * Upgrade Slurm to 23.11.7 (from 23.11.4).
+# 3.9.3:
+# * Add support for FSx Lustre as a shared storage type in us-iso-east-1.
+# * Bug fixes
 MIN_PARALLEL_CLUSTER_VERSION = parse_version('3.6.0')
 # Update source/resources/default_config.yml with latest version when this is updated.
 PARALLEL_CLUSTER_VERSIONS = [
@@ -86,6 +91,8 @@
     '3.8.0',
     '3.9.0',
     '3.9.1',
+    '3.9.2',
+    '3.9.3',
 ]
 PARALLEL_CLUSTER_MUNGE_VERSIONS = {
     # This can be found on the head node at /opt/parallelcluster/sources
@@ -98,6 +105,8 @@
     '3.8.0': '0.5.15', # confirmed
     '3.9.0': '0.5.15', # confirmed
     '3.9.1': '0.5.15', # confirmed
+    '3.9.2': '0.5.15', # confirmed
+    '3.9.3': '0.5.15', # confirmed
 }
 PARALLEL_CLUSTER_PYTHON_VERSIONS = {
     # This can be found on the head node at /opt/parallelcluster/pyenv/versions
@@ -109,6 +118,8 @@
     '3.8.0': '3.9.17', # confirmed
     '3.9.0': '3.9.17', # confirmed
     '3.9.1': '3.9.17', # confirmed
+    '3.9.2': '3.9.17', # confirmed
+    '3.9.3': '3.9.17', # confirmed
 }
 PARALLEL_CLUSTER_SLURM_VERSIONS = {
     # This can be found on the head node at /etc/chef/local-mode-cache/cache/
@@ -120,6 +131,8 @@
     '3.8.0': '23.02.7', # confirmed
     '3.9.0': '23.11.4', # confirmed
     '3.9.1': '23.11.4', # confirmed
+    '3.9.2': '23.11.7', # confirmed
+    '3.9.3': '23.11.7', # confirmed
 }
 PARALLEL_CLUSTER_PC_SLURM_VERSIONS = {
     # This can be found on the head node at /etc/chef/local-mode-cache/cache/
@@ -131,6 +144,8 @@
     '3.8.0': '23-02-6-1', # confirmed
     '3.9.0': '23-11-4-1', # confirmed
     '3.9.1': '23-11-4-1', # confirmed
+    '3.9.2': '23-11-7-1', # confirmed
+    '3.9.3': '23-11-7-1', # confirmed
 }
 SLURM_REST_API_VERSIONS = {
     '23-02-2-1': '0.0.39',
@@ -140,6 +155,7 @@
     '23-02-6-1': '0.0.39',
     '23-02-7-1': '0.0.39',
     '23-11-4-1': '0.0.39',
+    '23-11-7-1': '0.0.39',
 }
 PARALLEL_CLUSTER_ALLOWED_OSES = [
     'alinux2',
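With these entries in place, a cluster config can opt into the new releases. A minimal sketch, with the key path taken from the cdk_slurm_stack.py diff above (self.config['slurm']['ParallelClusterConfig']['Version']):

slurm:
  ParallelClusterConfig:
    Version: '3.9.3'   # must be one of the PARALLEL_CLUSTER_VERSIONS listed above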

source/resources/playbooks/inventories/group_vars/all (+43 −43)

@@ -6,58 +6,58 @@ ansible_ssh_user: ec2-user
 
 ansible_ssh_common_args: "-o StrictHostKeyChecking=no -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null"
 
-ansible_architecture: "{{ansible_facts['architecture']}}"
-distribution: "{{ansible_facts['distribution']}}"
-distribution_major_version: "{{ansible_facts['distribution_major_version']}}"
-distribution_version: "{{ansible_facts['distribution_version']}}"
-kernel: "{{ansible_facts['kernel']}}"
-memtotal_mb: "{{ansible_facts['memtotal_mb']}}"
+ansible_architecture: "{{ ansible_facts['architecture'] }}"
+distribution: "{{ ansible_facts['distribution'] }}"
+distribution_major_version: "{{ ansible_facts['distribution_major_version'] }}"
+distribution_version: "{{ ansible_facts['distribution_version'] }}"
+kernel: "{{ ansible_facts['kernel'] }}"
+memtotal_mb: "{{ ansible_facts['memtotal_mb'] }}"
 
 # Derived facts
-Architecture: "{%if ansible_architecture == 'aarch64'%}arm64{%else%}{{ansible_architecture}}{%endif%}"
-amazonlinux2: "{{distribution == 'Amazon' and distribution_major_version == '2'}}"
-alma: "{{distribution == 'AlmaLinux'}}"
-alma8: "{{alma and distribution_major_version == '8'}}"
-centos: "{{distribution == 'CentOS'}}"
-centos7: "{{centos and distribution_major_version == '7'}}"
-rhel: "{{distribution == 'RedHat'}}"
-rhel7: "{{rhel and distribution_major_version == '7'}}"
-rhel8: "{{rhel and distribution_major_version == '8'}}"
-rhel9: "{{rhel and distribution_major_version == '9'}}"
-rocky: "{{distribution == 'Rocky'}}"
-rocky8: "{{rocky and distribution_major_version == '8'}}"
-rocky9: "{{rocky and distribution_major_version == '9'}}"
-rhelclone: "{{alma or centos or rocky}}"
-rhel8clone: "{{rhelclone and distribution_major_version == '8'}}"
-rhel9clone: "{{rhelclone and distribution_major_version == '9'}}"
-centos7_5_to_6: "{{distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[5-6]')}}"
-centos7_5_to_9: "{{distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[5-9]')}}"
-centos7_7_to_9: "{{distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[7-9]')}}"
+architecture: "{%if ansible_architecture == 'aarch64'%}arm64{%else%}{{ ansible_architecture }}{%endif%}"
+amazonlinux2: "{{ distribution == 'Amazon' and distribution_major_version == '2' }}"
+alma: "{{ distribution == 'AlmaLinux' }}"
+alma8: "{{ alma and distribution_major_version == '8' }}"
+centos: "{{ distribution == 'CentOS' }}"
+centos7: "{{ centos and distribution_major_version == '7' }}"
+rhel: "{{ distribution == 'RedHat' }}"
+rhel7: "{{ rhel and distribution_major_version == '7' }}"
+rhel8: "{{ rhel and distribution_major_version == '8' }}"
+rhel9: "{{ rhel and distribution_major_version == '9' }}"
+rocky: "{{ distribution == 'Rocky' }}"
+rocky8: "{{ rocky and distribution_major_version == '8' }}"
+rocky9: "{{ rocky and distribution_major_version == '9' }}"
+rhelclone: "{{ alma or centos or rocky }}"
+rhel8clone: "{{ rhelclone and distribution_major_version == '8' }}"
+rhel9clone: "{{ rhelclone and distribution_major_version == '9' }}"
+centos7_5_to_6: "{{ distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[5-6]') }}"
+centos7_5_to_9: "{{ distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[5-9]') }}"
+centos7_7_to_9: "{{ distribution in ['CentOS', 'RedHat'] and distribution_version is match('7\\.[7-9]') }}"
 
 # Create separate build and release dirs because binaries built on AmazonLinux2 don't run on CentOS 7
-SlurmBaseDir: "{{FileSystemMountPath}}"
-SlurmSbinDir: "{{SlurmBaseDir}}/sbin"
-SlurmBinDir: "{{SlurmBaseDir}}/bin"
-SlurmScriptsDir: "{{SlurmBaseDir}}/bin"
-SlurmRoot: "{{SlurmBaseDir}}"
+slurm_base_dir: "{{ file_system_mount_path }}"
+slurm_sbin_dir: "{{ slurm_base_dir }}/sbin"
+slurm_bin_dir: "{{ slurm_base_dir }}/bin"
+slurm_scripts_dir: "{{ slurm_base_dir }}/bin"
+slurm_root: "{{ slurm_base_dir }}"
 
 # Cluster specific directories
-SlurmConfigDir: "{{SlurmBaseDir}}/config"
-SlurmEtcDir: "{{SlurmBaseDir}}/etc"
-SlurmLogsDir: "{{SlurmBaseDir}}/logs"
-SlurmrestdSocketDir: "{{SlurmBaseDir}}/com"
-SlurmrestdSocket: "{{SlurmrestdSocketDir}}/slurmrestd.socket"
-SlurmSpoolDir: "{{SlurmBaseDir}}/var/spool"
-SlurmConf: "{{SlurmEtcDir}}/slurm.conf"
+slurm_config_dir: "{{ slurm_base_dir }}/config"
+slurm_etc_dir: "{{ slurm_base_dir }}/etc"
+slurm_logs_dir: "{{ slurm_base_dir }}/logs"
+slurmrestd_socket_dir: "{{ slurm_base_dir }}/com"
+slurmrestd_socket: "{{ slurmrestd_socket_dir }}/slurmrestd.socket"
+slurm_spool_dir: "{{ slurm_base_dir }}/var/spool"
+slurm_conf: "{{ slurm_etc_dir }}/slurm.conf"
 
-ModulefilesBaseDir: "{{SlurmBaseDir}}/modules/modulefiles"
+modulefiles_base_dir: "{{ slurm_base_dir }}/modules/modulefiles"
 
-PCModulefilesBaseDir: "{{SlurmConfigDir}}/modules/modulefiles"
-SubmitterSlurmBaseDir: "{{SlurmBaseDir}}/{{ClusterName}}"
-SubmitterSlurmConfigDir: "{{SubmitterSlurmBaseDir}}/config"
-SubmitterModulefilesBaseDir: "{{SubmitterSlurmConfigDir}}/modules/modulefiles"
+pc_modulefiles_base_dir: "{{ slurm_config_dir }}/modules/modulefiles"
+submitter_slurm_base_dir: "{{ slurm_base_dir }}/{{ cluster_name }}"
+submitter_slurm_config_dir: "{{ submitter_slurm_base_dir }}/config"
+submitter_modulefiles_base_dir: "{{ submitter_slurm_config_dir }}/modules/modulefiles"
 
-SupportedDistributions:
+supported_distributions:
 - AlmaLinux/8/arm64
 - AlmaLinux/8/x86_64
 - Amazon/2/arm64
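These renames and the added spaces inside the braces address ansible-lint's variable-naming and Jinja2 spacing checks; a before/after sketch using a line from this file:

# Rejected by ansible-lint (camelCase variable name, no spaces inside the delimiters):
SlurmEtcDir: "{{SlurmBaseDir}}/etc"
# Accepted (snake_case name, spaced Jinja2 delimiters):
slurm_etc_dir: "{{ slurm_base_dir }}/etc"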

source/resources/playbooks/roles/ParallelClusterCreateUsersGroupsJsonConfigure/README.md (+3 −3)

@@ -4,14 +4,14 @@ ParallelClusterCreateUsersGroupsJsonConfigure
 Configure the server that is periodically updating the users_groups.json file.
 Creates the file and a cron job that refreshes it hourly.
 
-* Mounts the cluster's /opt/slurm export at /opt/slurm/{{ClusterName}}
+* Mounts the cluster's /opt/slurm export at /opt/slurm/{{ cluster_name }}
 * Updates the /etc/fstab so that the mount works after a reboot.
-* Creates a crontab to refresh /opt/slurm/{{ClusterName}}/config/users_groups.json is refreshed hourly.
+* Creates a crontab to refresh /opt/slurm/{{ cluster_name }}/config/users_groups.json is refreshed hourly.
 
 Requirements
 ------------
 
 This is meant to be run on a server that is joined to your domain so that it
 has access to info about all of the users and groups.
 For SOCA, this is the scheduler instance.
-For RES, this is the {{EnvironmentName}}-cluster-manager instance.
+For RES, this is the {{ EnvironmentName }}-cluster-manager instance.

source/resources/playbooks/roles/ParallelClusterCreateUsersGroupsJsonConfigure/tasks/main.yml (+11 −11)

@@ -4,29 +4,29 @@
 - name: Show vars used in this playbook
   debug:
     msg: |
-      ClusterName: {{ ClusterName }}
-      Region: {{ Region }}
-      SlurmConfigDir: {{ SlurmConfigDir }}
+      cluster_name: {{ cluster_name }}
+      region: {{ region }}
+      slurm_config_dir: {{ slurm_config_dir }}
 
-- name: Add /opt/slurm/{{ ClusterName }} to /etc/fstab
+- name: Add /opt/slurm/{{ cluster_name }} to /etc/fstab
   mount:
-    path: /opt/slurm/{{ ClusterName }}
-    src: "head_node.{{ ClusterName }}.pcluster:/opt/slurm"
+    path: /opt/slurm/{{ cluster_name }}
+    src: "head_node.{{ cluster_name }}.pcluster:/opt/slurm"
     fstype: nfs
     backup: true
     state: present # Should already be mounted
 
-- name: Create {{ SlurmConfigDir }}/users_groups.json
+- name: Create {{ slurm_config_dir }}/users_groups.json
   shell: |
     set -ex
 
-    {{ SlurmConfigDir }}/bin/create_or_update_users_groups_json.sh
+    {{ slurm_config_dir }}/bin/create_or_update_users_groups_json.sh
   args:
-    creates: '{{ SlurmConfigDir }}/users_groups.json'
+    creates: '{{ slurm_config_dir }}/users_groups.json'
 
-- name: Create cron to refresh {{ SlurmConfigDir }}/users_groups.json every hour
+- name: Create cron to refresh {{ slurm_config_dir }}/users_groups.json every hour
   template:
-    dest: /etc/cron.d/slurm_{{ ClusterName }}_update_users_groups_json
+    dest: /etc/cron.d/slurm_{{ cluster_name }}_update_users_groups_json
     src: etc/cron.d/slurm_update_users_groups_json
     owner: root
     group: root
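A note on the shell task above: the creates: argument keeps it idempotent, since Ansible skips the command once {{ slurm_config_dir }}/users_groups.json exists; the hourly refresh is then handled entirely by the cron entry templated below.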
source/resources/playbooks/roles/ParallelClusterCreateUsersGroupsJsonConfigure/templates/etc/cron.d/slurm_update_users_groups_json (+2 −2)

@@ -1,3 +1,3 @@
 MAILTO=''
-PATH="{{SlurmConfigDir}}/bin:/sbin:/bin:/usr/sbin:/usr/bin"
-50 * * * * root {{SlurmConfigDir}}/bin/create_or_update_users_groups_json.sh
+PATH="{{ slurm_config_dir }}/bin:/sbin:/bin:/usr/sbin:/usr/bin"
+50 * * * * root {{ slurm_config_dir }}/bin/create_or_update_users_groups_json.sh

source/resources/playbooks/roles/ParallelClusterCreateUsersGroupsJsonDeconfigure/README.md (+2 −2)

@@ -5,8 +5,8 @@ Deconfigure the server that is periodically updating the users_groups.json file.
 Just removes the crontab entry on the server.
 
 * Copies ansible playbooks to /tmp because the cluster's mount is removed by the playbook.
-* Remove crontab that refreshes /opt/slurm/{{ClusterName}}/config/users_groups.json.
-* Remove /opt/slurm/{{ClusterName}} from /etc/fstab and unmount it.
+* Remove crontab that refreshes /opt/slurm/{{ cluster_name }}/config/users_groups.json.
+* Remove /opt/slurm/{{ cluster_name }} from /etc/fstab and unmount it.
 
 Requirements
 ------------
