# Include whole lines that have codespell-recognized typos.
# Excluding a whole line is better than globally accepting a typo (such as "aks" for "ask"), which could then slip through someplace random.
# End the file with a blank line.
Approaches for Working with Azure AKS
You can approach running workloads in Azure AKS with NVIDIA GPUs in at least two ways.
Default AKS configuration without the GPU Operator
By default, you can run Azure AKS images on GPU-enabled virtual machines with NVIDIA GPUs,
AKS images include a preinstalled NVIDIA GPU Driver and preinstalled NVIDIA Container Toolkit.
`Use GPUs for compute-intensive workloads on Azure Kubernetes Services <https://learn.microsoft.com/en-us/azure/aks/gpu-cluster>`__
The images that are available in AKS always include a preinstalled NVIDIA GPU driver
After you start your Azure AKS cluster, you are ready to install the NVIDIA GPU Operator.
GPU Operator with Azure AKS <microsoft-aks.rst>
* Added support for running the Operator with Microsoft Azure Kubernetes Service (AKS).
You must use an AKS image with a preinstalled NVIDIA GPU driver and a preinstalled
Create AKS Cluster with a Node Pool to Skip GPU Driver installation
command-line argument to the ``az aks nodepool add`` command.
$ az aks nodepool add --resource-group <rg-name> --name gpunodes --cluster-name <cluster-name> \
`Skip GPU driver installation (preview) <https://learn.microsoft.com/en-us/azure/aks/gpu-cluster?source=recommendations&tabs=add-ubuntu-gpu-node-pool#skip-gpu-driver-installation-preview>`__
After you start your Azure AKS cluster with an image that includes a preinstalled NVIDIA GPU Driver
Azure AKS <microsoft-aks.rst>
.. |prod-name-short| replace:: MKE
Mirantis Kubernetes Engine (MKE) gives you the power to build, run, and scale cloud-native
* - MKE 3.6.2+ and 3.5.7+
* A running MKE cluster with at least one control plane node and two worker nodes.
* A seed node to connect to the MKE instance, with Helm 3.x installed on the seed node.
* The kubeconfig file for the MKE cluster on the seed node.
You can get the file from the MKE web interface by downloading a client bundle.
Alternatively, if the MKE cluster is a managed cluster of a Mirantis Container Cloud (MCC) instance,
In this case, the MKE web interface can be accessed from the MCC web interface.
* You have an MKE administrator user name and password, and you have the MKE host URL.
Perform the following steps to prepare the MKE cluster:
#. MKE does not apply a label to worker nodes.
$ export MKE_USERNAME=<mke-username> \
MKE_PASSWORD=<mke-password> \
MKE_HOST=<mke-fqdn-or-ip-address>
#. Get an API key from MKE so that you can make API calls later:
'{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}' \
https://$MKE_HOST/auth/login | jq --raw-output .auth_token)
#. Download the MKE configuration file:
$ curl --silent --insecure -X GET "https://$MKE_HOST/api/ucp/config-toml" \
#. Upload the edited MKE configuration file:
https://$MKE_HOST/api/ucp/config-toml
The MKE cluster is ready for you to install the GPU Operator with Helm.
Refer to the MKE product documentation for information about working with MKE.
* https://docs.mirantis.com/mke/3.6/overview.html
$ cat <<EOF > nvidia-container-microshift.te
$ checkmodule -m -M -o nvidia-container-microshift.mod nvidia-container-microshift.te
2023/06/22 14:25:38 Retreiving plugins.