Description
Let me first describe my use case. I want to be able to mount various places of interest on my host (such as `~/.ssh`, `~/.aws`, `~/.kube` and so on) into my containers. There is a great deal of tooling that may be built around that ability, especially for folks like me who develop and run infrastructure code such as Terraform/CDK/Helm and whatnot, and need to be able to run these tools in a predictable, reproducible environment. As a matter of fact, I've open sourced one such tool - its readme probably gives away the use case in much more detail, but in a nutshell - I want to be able to schedule a pod like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: runtainer-297b5df7
  namespace: default
spec:
  volumes:
  - name: runtainer-c8e3292f
    hostPath:
      path: /Users/me/.aws
  containers:
  - name: runtainer
    image: <some image>
    command:
    - cat
    volumeMounts:
    - name: runtainer-c8e3292f
      mountPath: /home/me/.aws
  securityContext:
    supplementalGroups:
    - 20 # my primary gid on the host
    fsGroup: 20 # my primary gid on the host
```
And then - be able to `exec` into that container and run the AWS command line with exactly the same access as I have on my host (see the sketch below). Or even better - have a container with an AWS federation helper that would populate my host `~/.aws` for the host as well as other containers to consume.
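For illustration, this is roughly how I'd use the pod from the manifest above (a sketch; it assumes the AWS CLI is installed in the image):

```sh
# Exec into the pod scheduled above and confirm the host credentials work
kubectl exec -it runtainer-297b5df7 -- aws sts get-caller-identity
```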
With both Docker Desktop as well as the sshfs based Lima implementation, the `hostPath` mounts were making this possible. Consider this on the host:
```console
ls -la ~/.aws/config
-rw------- 502 20 /home/terraform/.aws/config
```
`502:20` is my uid:gid on the host. Within the Lima VM, it all stays the same, obviously. Except for the fact that my Lima user, for some reason, is not getting assigned to group `20`, which I think it should be, but that is not important here (just a side note).
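For reference, this is how one could verify that from inside the guest (a sketch, assuming the default instance name and that the host home is mounted at the same path in the VM):

```sh
# Check the guest user's uid/gid and group membership
limactl shell default -- id
# Ownership of the shared file, as seen from inside the VM
limactl shell default -- ls -la /Users/me/.aws/config
```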
Now, when I schedule a non-root container like the one above in my k3s:
```console
terraform@runtainer-37fff85a:~$ id
uid=1000(terraform) gid=1000(terraform) groups=1000(terraform),20(dialout)
terraform@runtainer-37fff85a:~$ ls -la ~/.aws/config
-rw------- 502 20 /home/terraform/.aws/config
terraform@runtainer-37fff85a:~$ cat ~/.aws/config
[profile dmykry]
...
```
As you can see, despite the fact that, according to the file system attributes and my in-container uid:gid, I should not be able to read that file - I still can. I can confirm it was working the same way with the Docker implementation as well (which I believe was fuse/osxfs?). Through my conversation with @jandubois yesterday, I learned that it is made possible by the use of `allow_other` on the sshfs mounts, as discussed in #247.
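For context, `allow_other` is a standard FUSE option that lets local users other than the one who performed the mount access it; a generic illustration of the option, not Lima's exact invocation:

```sh
# Without allow_other, FUSE restricts the mount to the mounting user only.
# With it, other local users (e.g. a container's non-root uid) can access
# the mount, with permission checks effectively done on the remote side
# as the SSH user.
sshfs -o allow_other user@host:/remote/path /mnt/point
```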
This does not work the same way with 9p. I have tried (according to #726 (comment)) the options `mapped-xattr` and `none`, and neither works the same way - my non-root container user is not able to access that file.
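For reference, this is how I set those options (a sketch of the instance config, assuming the 9p security model is selected per mount via `9p.securityModel` in lima.yaml):

```yaml
mounts:
- location: "~"
  writable: true
  9p:
    securityModel: "mapped-xattr" # also tried "none"
```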
Any chance this behavior can be backported from sshfs, if and before 9p is made the default?
Probably a much better and more secure option would be if k8s allowed setting not only `fsGroup` but also `fsUser`, and if `runAsUser` and `runAsGroup` would not just blindly change the container uid:gid at runtime, but actually change the existing user's uid:gid (because many containers rely on `$HOME` and `/etc/passwd` being available for the current user). But that is a much longer story.
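To make the idea concrete, here is a purely hypothetical sketch - `fsUser` does not exist in Kubernetes today, this is only what such an API could look like:

```yaml
securityContext:
  runAsUser: 502   # hypothetically would also update the /etc/passwd entry
  runAsGroup: 20   # ...and the matching /etc/group entry
  fsUser: 502      # hypothetical field: chown volume contents to this uid
  fsGroup: 20
```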