The OpenCHAMI cloud-init service retrieves detailed inventory information from SMD and uses it to create cloud-init payloads customized for each node in an OpenCHAMI cluster.
- About / Introduction
- Build / Install
- Running the Service
- Testing the Service
- Group Handling and Overrides
- More Reading
## About / Introduction

The OpenCHAMI Cloud-Init Service generates cloud-init configuration for nodes in an OpenCHAMI cluster. The design pushes the complexity of merging configurations into the cloud-init client rather than the server. This README provides instructions, adapted from the Demo.md file, for running and testing the service.
This service provides configuration data to cloud-init clients via the standard nocloud-net datasource. The service merges configuration from several sources:
- SMD data (or simulated data in development mode)
- User-supplied JSON (for custom configurations)
- Cluster defaults and group overrides
Cloud-init on nodes retrieves data in a fixed order:

- `/meta-data` – a YAML document with system configuration.
- `/user-data` – a document which can be any of the user-data formats.
- `/vendor-data` – vendor-supplied configuration via include-file mechanisms.
- `/network-config` – an optional document in one of two network configuration formats. This is only requested if configured to do so with a kernel parameter or through cloud-init configuration in the image. NB: OpenCHAMI doesn't support delivering `network-config` via the cloud-init server today.
## Build / Install

This project uses GoReleaser for building and releasing, embedding additional metadata such as commit info, build time, and version. Below is a brief overview for local builds.
To include detailed metadata in your builds, set the following:
- `GIT_STATE`: `clean` if your repo is clean, `dirty` if uncommitted changes exist
- `BUILD_HOST`: hostname of the build machine
- `GO_VERSION`: version of Go used (for consistent versioning info)
- `BUILD_USER`: username of the person/system performing the build
```bash
export GIT_STATE=$(if git diff-index --quiet HEAD --; then echo 'clean'; else echo 'dirty'; fi)
export BUILD_HOST=$(hostname)
export GO_VERSION=$(go version | awk '{print $3}')
export BUILD_USER=$(whoami)
```
1. Install GoReleaser following their documentation.

2. Run in snapshot mode to build locally without releasing:

   ```bash
   goreleaser release --snapshot --clean
   ```

3. Check the `dist/` directory for compiled binaries, which will include the injected metadata.

> **Note:** If you encounter errors, ensure your GoReleaser version matches the one used in the Release Action.
## Running the Service

Each instance of cloud-init is linked to a single SMD and operates for a single cluster. Until the cluster name is automatically available via your inventory system, you must specify it on the command line using the `-cluster-name` flag.

Example: `-cluster-name venado`
For development purposes, you can run the cloud-init server without connecting to a real SMD instance. By setting the environment variable `CLOUD_INIT_SMD_SIMULATOR` to `true`, the service will generate a set of simulated nodes.

Example command:

```bash
CLOUD_INIT_SMD_SIMULATOR=true dist/cloud-init_darwin_arm64_v8.0/cloud-init-server -cluster-name venado -insecure -impersonation=true
```
By default, the service determines what configuration to return based on the IP address of the requesting node. For testing, impersonation routes can be enabled with the `-impersonation=true` flag, letting you request another node's payload by xname.

Sample command:

```bash
curl http://localhost:27777/cloud-init/admin/impersonation/x3000c1b1n1/meta-data
```

On a real node, cloud-init locates the server through kernel command-line parameters such as:

```
cloud-init=enabled ds=nocloud-net;s=http://192.0.0.1/cloud-init
```
## Testing the Service

The following testing steps (adapted from Demo.md) help you verify that the service is functioning correctly.

Start the service (simulator mode shown here):

```bash
CLOUD_INIT_SMD_SIMULATOR=true dist/cloud-init_darwin_arm64_v8.0/cloud-init-server -cluster-name venado -insecure -impersonation=true
```

Request the meta-data document:

```bash
curl http://localhost:27777/cloud-init/meta-data
```

You should see a YAML document with instance information (e.g., instance-id, cluster-name, etc.).
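As an illustration, a meta-data response might look like the following. This is a hypothetical sketch: the field names are drawn from the cluster-defaults and instance-info examples later in this README, and all values are placeholders.

```yaml
instance-id: i-0a1b2c3d
local-hostname: compute-1
cluster-name: venado
cloud-provider: openchami
region: us-west-2
availability-zone: us-west-2a
```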
Request the user-data document:

```bash
curl http://localhost:27777/cloud-init/user-data
```

For now, this returns a blank cloud-config document:

```yaml
#cloud-config
```
Request the vendor-data document:

```bash
curl http://localhost:27777/cloud-init/vendor-data
```

Vendor-data typically includes include-file directives pointing to group-specific YAML files:

```
#include
http://192.168.13.3:8080/all.yaml
http://192.168.13.3:8080/login.yaml
http://192.168.13.3:8080/compute.yaml
```
## Group Handling and Overrides

The service supports advanced configuration through group handling and instance overrides.

This example sets a syslog aggregator via Jinja templating. The group data is stored under the group name and then used in the vendor-data file.
```bash
curl -X POST http://localhost:27777/cloud-init/admin/groups/ \
  -H "Content-Type: application/json" \
  -d '{
    "name": "x3001",
    "description": "Cabinet x3001",
    "data": {
      "syslog_aggregator": "192.168.0.1"
    },
    "file": {
      "content": "#template: jinja\n#cloud-config\nrsyslog:\n remotes: {x3001: {{ vendor_data.groups[\"x3001\"].syslog_aggregator }}}\n service_reload_command: auto\n",
      "encoding": "plain"
    }
  }'
```
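For reference, when the Jinja template above is rendered for a node in group x3001, the resulting cloud-config would look like this (a sketch derived by substituting the group's `syslog_aggregator` value into the template):

```yaml
#cloud-config
rsyslog:
 remotes: {x3001: 192.168.0.1}
 service_reload_command: auto
```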
To add more sophisticated vendor-data (for example, installing the slurm client), you can encode a complete cloud-config in base64. (See the script in Demo.md for a complete example.)
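As a sketch of that approach, the snippet below base64-encodes a small cloud-config and builds a JSON payload for the groups endpoint. The slurm cloud-config itself and the group name are illustrative; the `file`/`content`/`encoding` field names are assumed to match the plain-encoding example above.

```shell
# Illustrative cloud-config to deliver to the group.
cat > slurm.yaml <<'EOF'
#cloud-config
packages:
  - slurm-client
EOF

# Base64-encode it on a single line for embedding in JSON.
CONTENT=$(base64 < slurm.yaml | tr -d '\n')

# Build the request payload with encoding set to base64.
cat > payload.json <<EOF
{
  "name": "compute",
  "description": "Slurm client nodes",
  "file": {
    "content": "$CONTENT",
    "encoding": "base64"
  }
}
EOF

cat payload.json
```

POST `payload.json` to `/cloud-init/admin/groups/` with curl, as in the plain-encoding example above.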
Cluster-wide defaults can be set through the cluster-defaults endpoint:

```bash
curl -X POST http://localhost:27777/cloud-init/admin/cluster-defaults/ \
  -H "Content-Type: application/json" \
  -d '{
    "cloud-provider": "openchami",
    "region": "us-west-2",
    "availability-zone": "us-west-2a",
    "cluster-name": "venado",
    "public-keys": [
      "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArV2...",
      "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEArV3..."
    ]
  }'
```
Per-node values can be overridden through the instance-info endpoint:

```bash
curl -X PUT http://localhost:27777/cloud-init/admin/instance-info/x3000c1b1n1 \
  -H "Content-Type: application/json" \
  -d '{
    "local-hostname": "compute-1",
    "instance-type": "t2.micro"
  }'
```