Although there is an accepted answer, it did not work for me. I needed the same uid:gid for all files bind-mounted from the host file system into the Docker container. It took me nearly two days to figure out why Docker cannot do this and how to do it nevertheless. I leave it here in case someone faces the same issue:
host directory mounts in docker and why they do not work (edit: in docker desktop)
Docker volumes/bind mounts have a significant downside: they statically change ownership: user (1000) becomes root (0) and www-data (33) becomes nobody (65534). And they do not allow changing ownership from within the container. Access conflicts are therefore inevitable.
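You can observe the remapping yourself; this is just a sketch (the alpine image and the paths are placeholder examples, and the remapped result only shows up on an affected setup such as Docker Desktop, see the edit below):

```shell
# On the host: create a file owned by the current (non-root) user
touch /tmp/owner-test && ls -ln /tmp/owner-test   # shows your uid, e.g. 1000

# Bind-mount it into a container and inspect the ownership there
docker run --rm -v /tmp/owner-test:/mnt/owner-test alpine ls -ln /mnt/owner-test
# on an affected setup the owner can show up remapped, e.g. as root (0) or nobody (65534)
```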
This is known as user namespacing. It is a security feature to prevent privilege-escalation attacks. It can be disabled per container via the --userns=host flag. However:
There is a side effect when using this flag: user remapping will not be enabled for that container but, because the read-only (image) layers are shared between containers, ownership of the containers filesystem will still be remapped.
direct quote from the Docker user manual
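For completeness, disabling the remapping for a single container looks like this (only relevant if the daemon runs with userns-remap enabled; the alpine image and the path are placeholders):

```shell
# run one container with the host user namespace, bypassing uid/gid remapping
# for the container process, but not for the shared image layers (see quote above)
docker run --rm --userns=host -v /tmp/owner-test:/mnt/owner-test alpine ls -ln /mnt/owner-test
```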
edit 24.03.24
The wrong mapping occurs on Docker Desktop only (at least as of Docker Desktop version 4.13.1 (90346)). If I use the Docker daemon directly, the mapping is always correct.
how host directory mounts work (with sshfs)
To circumvent this issue we can use battle-proven Linux mounting programs like sshfs.
First we need to access the host machine from the container via SSH. To do so, an SSH server has to be installed on the host (with a password or SSH keys configured) and an SSH client in the container.
Host:
apt-get update
apt-get install openssh-server -y
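After installing, make sure the daemon is actually running on the host; depending on the init system, one of these should work:

```shell
# systemd-based hosts: start sshd now and on every boot
systemctl enable --now ssh
# hosts without systemd:
service ssh start
```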
Container:
apt-get update
apt-get install ssh sshfs -y
First we check in the container whether the SSH connection works via ssh [email protected].
If it does, we configure an sshfs mount in the container:
sshfs -o allow_other [email protected]:/host/source/path /container/target/path
If you do not want to supply your password every time, you can leave the container's public SSH key on the host, e.g. via ssh-copy-id. Then:
sshfs -o allow_other,IdentityFile=/home/$(whoami)/.ssh/id_rsa [email protected]:/host/source/path /container/target/path
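For example (the key file name is an assumption matching the IdentityFile option above; adjust to your setup):

```shell
# inside the container: create a key pair if none exists yet
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa" -N ""

# copy the public key to the host (asks for the password one last time)
ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" [email protected]
```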
The sync is almost instant, and the permission mapping is correct even if you create files from within the container.
Hint: We need to run the container with the --privileged flag, because sshfs needs FUSE access.
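A minimal run command could look like this (the image name is a placeholder; the second variant is a narrower alternative to full --privileged that grants only what FUSE typically needs, if it works for your setup):

```shell
# broad: full privileged mode
docker run --rm -it --privileged your-image bash

# narrower alternative: only the capability and device FUSE requires
docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse your-image bash
```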
security aspects (warning)
Since we have to run the container in privileged mode, root in the container is effectively root on the host. The same applies to every other user.
Thus, everything we run in that container has to be secured with the same measures as if it ran directly on the host.
We also have to run an SSH server on the host, which (of course) has to be hardened and secured itself.
Best practice: Use this kind of host directory mounting for development only.