README.md (+16 lines, −24 lines)
@@ -77,9 +77,9 @@ We will be editing the docker daemon config file which is usually present at `/e
> *if `runtimes` is not already present, head to the install page of [nvidia-docker](https://github.com/NVIDIA/nvidia-docker)*

-### Configure scheduler
+### Configuration

-update the scheduler configuration:
+You need to enable vgpu in the volcano-scheduler ConfigMap:

```shell script
kubectl edit cm -n volcano-system volcano-scheduler-configmap
@@ -111,7 +111,17 @@ data:
  - name: binpack
```
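The ConfigMap excerpt above is abbreviated by the diff. For reference, a scheduler configuration with vgpu enabled typically looks like the sketch below; the plugin list and the `deviceshare.VGPUEnable` argument follow the upstream Volcano scheduler configuration, so treat this as an assumption and adapt it to your deployment:

```yaml
# Sketch of volcano-scheduler-configmap with vgpu scheduling enabled
apiVersion: v1
kind: ConfigMap
metadata:
  name: volcano-scheduler-configmap
  namespace: volcano-system
data:
  volcano-scheduler.conf: |
    actions: "enqueue, allocate, backfill"
    tiers:
    - plugins:
      - name: priority
      - name: gang
      - name: conformance
    - plugins:
      - name: drf
      - name: deviceshare
        arguments:
          deviceshare.VGPUEnable: true   # enable vgpu scheduling
      - name: predicates
      - name: proportion
      - name: nodeorder
      - name: binpack
```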
-Customize your installation by adjusting the [configs](doc/config.md)
+### Sharing Mode
+
+Volcano-vgpu supports two device-sharing modes: `HAMi-core` and `Dynamic-mig`. A node uses either `HAMi-core` or `Dynamic-mig`, and heterogeneous clusters are supported (some nodes using HAMi-core, the others using Dynamic-mig).
+
+A brief introduction to the two modes:
+
+HAMi-core is a user-space resource isolator provided by the HAMi community; it works on all types of GPU.
+
+Dynamic-mig is a hardware resource isolator; it works on GPUs of the Ampere architecture or later.
+
+You can set the sharing mode and customize your installation by adjusting the [configs](doc/config.md).
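If you are unsure whether a node qualifies for `Dynamic-mig`, one quick check is to query the GPU model and, on recent drivers, the compute capability (Ampere corresponds to 8.x). This is only a convenience sketch; the `compute_cap` query field requires a reasonably new driver:

```shell script
# Ampere or newer (compute capability >= 8.0) is required for Dynamic-mig
nvidia-smi --query-gpu=name,compute_cap --format=csv
```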
### Enabling GPU Support in Kubernetes
@@ -130,28 +140,7 @@ Check the node status, it is ok if `volcano.sh/vgpu-number` is included in the a
```shell script
$ kubectl get node {node name} -oyaml
...
-status:
-  addresses:
-  - address: 172.17.0.3
-    type: InternalIP
-  - address: volcano-control-plane
-    type: Hostname
-  allocatable:
-    cpu: "4"
-    ephemeral-storage: 123722704Ki
-    hugepages-1Gi: "0"
-    hugepages-2Mi: "0"
-    memory: 8174332Ki
-    pods: "110"
-    volcano.sh/vgpu-memory: "89424"
-    volcano.sh/vgpu-number: "10"# vGPU resource
capacity:
-  cpu: "4"
-  ephemeral-storage: 123722704Ki
-  hugepages-1Gi: "0"
-  hugepages-2Mi: "0"
-  memory: 8174332Ki
-  pods: "110"
  volcano.sh/vgpu-memory: "89424"
  volcano.sh/vgpu-number: "10" # vGPU resource
```
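To look at only the vGPU-related fields, a `kubectl describe` plus `grep` is a handy shortcut (a convenience sketch, not part of the original instructions):

```shell script
# Show only the vGPU resources advertised by the node
kubectl describe node {node name} | grep volcano.sh/vgpu
```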
@@ -166,6 +155,8 @@ apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod1
+  annotations:
+    volcano.sh/vgpu-mode: "hami-core" # (Optional, 'hami-core' or 'mig')
spec:
  schedulerName: volcano
  containers:
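The diff shows only the head of the pod spec. Assembled into a complete manifest, it would look roughly like the sketch below; the container image, command, and resource amounts are illustrative assumptions, while `volcano.sh/vgpu-number` and `volcano.sh/vgpu-memory` are the resource names visible in the node status above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod1
  annotations:
    volcano.sh/vgpu-mode: "hami-core"   # optional: 'hami-core' or 'mig'
spec:
  schedulerName: volcano
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # illustrative image
      command: ["sleep", "infinity"]
      resources:
        limits:
          volcano.sh/vgpu-number: 1     # number of vGPUs for this container
          volcano.sh/vgpu-memory: 3000  # vGPU device memory (illustrative value)
```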
@@ -188,6 +179,7 @@ You can validate device memory using nvidia-smi inside container:
> **WARNING:** *if you don't request GPUs when using the device plugin with NVIDIA images, all
> the GPUs on the machine will be exposed inside your container.
> The number of vGPUs used by a container cannot exceed the number of GPUs on that node.*
+> You can specify the mode of a task by setting the `volcano.sh/vgpu-mode` annotation; if it is not set, either mode may be used.
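If you want to run that validation from outside the pod, a minimal sketch (assuming the `gpu-pod1` example above and a single container in the pod):

```shell script
# Run nvidia-smi inside the running pod to confirm the device memory it sees
kubectl exec -ti gpu-pod1 -- nvidia-smi
```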