
I wish to copy files from the host to a Docker container. Many solutions give the command

docker cp ...

However, this doesn't work in my case, because the one initiating the command is the container itself. The initial startup triggers a special script inside the container that copies all files and initializes the container (normally from a git repo, but for debugging I wish to enable copying from the file system).

The problem is that inside the container the docker command doesn't exist, so I can't use docker cp. How can I do this? It would speed up development: instead of having to push each minor update during testing, I could test directly.


To clear things up: my Docker image has init.sh as its entrypoint.

This file is hence called on each startup; amongst other setup steps it contains the following:

if [ ! -f /initialized ]; then
  apk add --virtual .init-deps bash
  if [ -z "${FILE}" ]; then
    echo "Building from server"
    apk add --virtual .init-deps git
    bash load_git.sh "${GIT_SERVER}" "${GIT_USERNAME}" "${GIT_PASSWORD}"
    status=$?   # capture the exit status before echo resets $?
    echo clone done
  else
    echo "Building from file"
    bash load_filesystem.sh "${FILE}"
    status=$?
    echo copying done
  fi

  if [ "${status}" -eq 0 ]; then
    sh copy_code.sh
    if [ $? -eq 0 ]; then
      echo "Build success"
      touch /initialized
    fi
  fi
  apk --purge del .init-deps
fi

load_git.sh contains the following line:

git clone https://${USERNAME}:${PASSWORD}@${REPOSITORY} tmp

It clones the git repository and puts it in the temporary folder "to be copied". Now I wish load_filesystem.sh to do the "same", except that instead of cloning from an external repository it should "clone" from the host system.

This allows testing and development to continue while the external repository is unavailable.

Notice that init.sh is run from within the container (it's the entrypoint).
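Assuming the host folder has somehow been made visible inside the container, a minimal sketch of what load_filesystem.sh could look like is below; it mirrors load_git.sh's "git clone ... tmp". The default path /tmp/host_source and the demo file are assumptions for illustration only; a real setup would pass the actual mounted path as the first argument.

```shell
#!/bin/sh
# Sketch of load_filesystem.sh: copy a directory the host has made
# visible inside the container into "tmp", mirroring load_git.sh's
# "git clone ... tmp". /tmp/host_source is an assumed default.
SRC="${1:-/tmp/host_source}"

# Demo setup so this sketch runs standalone; a real bind mount replaces this.
mkdir -p "${SRC}"
echo "demo content" > "${SRC}/app.txt"

if [ ! -d "${SRC}" ]; then
  echo "Source ${SRC} not found - is the volume mounted?" >&2
  exit 1
fi

rm -rf tmp
cp -a "${SRC}" tmp   # recursive copy, preserves permissions, like clone into tmp
echo "copying from ${SRC} done"
```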


Before people ask: we chose this setup instead of Docker build files because Synology NAS servers seem to want to be served an image file directly. To improve deployment speed we make generic images that load the code on first run (or on restart with a flag).

So in the end the question is: how to copy files (a repository) not from a git server, but rather from the host operating system's filesystem?

  • A bit unclear as to how your current setup works. Can you try adding some more information? Commented Mar 30, 2018 at 20:40
  • @TarunLalwani I tried to clear things up. Is it understandable now? Commented Mar 31, 2018 at 12:31

2 Answers


Use volumes.

When you start the container, you can decide whether you really want to mount some files or just an empty directory. Then in the container you can cp * from there, or use the contents of the directory directly.

I wouldn't want different behaviour on prod than on test; do exactly the same in both environments.
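Concretely, the two invocations could look like the sketch below (the image name "myimage" and all paths are hypothetical). Only the debug run bind-mounts the host checkout and sets FILE; production leaves both out, so init.sh falls through to the git clone:

```shell
#!/bin/sh
# Hypothetical run commands; "myimage" and the paths are assumptions.

# Production: nothing mounted, FILE unset -> init.sh clones from git.
prod="docker run -d myimage"

# Debugging: bind-mount the working copy read-only and point FILE at it.
dev="docker run -d -v /home/me/project:/mnt/source:ro -e FILE=/mnt/source myimage"

echo "$prod"
echo "$dev"
```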


2 Comments

The "problem" with a volume is that we use the same image multiple times. However, based on environment variables, the containers are "filled" with different data during the initialization script. Volumes would mean each container uses the same local data (if configured to use local data).
No, volumes mean each container uses the data you provide it at runtime, depending on what you mount there. It could be anything. So what you would copy from the host, you simply mount, and then copy as if it were a normal file in the container!

Update: You have two options in that case:
1. Disable the code that performs the git clone (load_git.sh etc.) and also remove load_filesystem.sh. Then write a script to perform the docker build, and in this script copy your latest files to the folder from which the build picks them up.
2. Create a git repo locally and push your latest changes to it. Update your git code to point to this local repo for testing. This way you avoid pushing to the main repo.

I would use the first approach as it is quick and simple.
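A rough sketch of the first approach, with assumed paths and image tag; the docker build step is guarded so the copy step can be tried on its own where Docker isn't available, and the demo setup lines stand in for a real working copy:

```shell
#!/bin/sh
# Sketch of a build helper for option 1: copy the latest working tree
# into the build context, then build the image from it.
# All paths and the image tag are assumptions.
set -eu

SRC="${SRC:-./myproject}"            # assumed path to your working copy
CONTEXT="${CONTEXT:-./build-context}"

# Demo setup so the sketch runs standalone; remove for real use.
mkdir -p "${SRC}"
echo "latest code" > "${SRC}/main.txt"

rm -rf "${CONTEXT}/code"
mkdir -p "${CONTEXT}/code"
cp -a "${SRC}/." "${CONTEXT}/code/"   # copy contents into the context

# Build only when docker is present (skipped e.g. in sandboxes).
if command -v docker >/dev/null 2>&1; then
  docker build -t myimage:dev "${CONTEXT}" || echo "docker build failed (sketch has no Dockerfile)"
fi
echo "context prepared at ${CONTEXT}/code"
```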

2 Comments

As this is only for (automated) debugging purposes, it is not the approach I want, since using volumes would mean it's also visible on the production system (the goal is to use the same Docker image in production and development). It's also bad because it requires me to set up the host in a specific way, while I wish to only tell the container about something from the host.
No: if you don't mount anything on prod, the dir will be empty; otherwise you mount some files. It's still the same image, just with different config, which is normal.
