I am in the process of learning how to use Docker and now have it installed on an Ubuntu 14.04 box. What I want to be able to do is easily switch between different stack combinations. Typical stacks:

  • Ubuntu + MariaDB + Apache + PHP
  • CentOS + ditto
  • Ubuntu + MongoDB + Nginx + PHP

From my reading of the docs thus far I believe that I can do this in two ways

  1. Loading separate containers for each component of the above - in the sense of one for Ubuntu, one for MariaDB, one for Apache + PHP - and linking them together
  2. Defining one container for the whole lot - i.e. one container per distro + db + server...

What I don't quite get yet is this - when I work with such a system and the DB is subjected to changes, I would like those changes to be in place the next time I reuse the same configuration. Would this require that I save the container as a tar archive and then load it later when required? In that case it would make sense to have at least those containers that are liable to be modified by the user as separate linked containers?

Finally - suppose I have got the full stack up and running (be it as separate linked containers or as one mega container) and now browse to the IP address where it is all installed. The base Ubuntu box has no web server installed. Will I reach the Apache instance running inside the Docker container automatically, or do I somehow need to tell the system to do this?

I am a Docker beginner so some of my questions are probably rather naive.

2 Answers


My 2 cents on the matter is that you should work with separate linked containers - that's simply the Docker way. One container typically hosts one app (such as the database or the web server).

When you work with an app that requires persistent data, such as a database, the way to go is to mount volumes into the Docker container. This can be achieved via the -v flag of the docker run command.

docker run -v /some/local/dir:/some/dir/in/container my/mariadb

This means that the data in the container folder /some/dir/in/container is mapped to a local folder on the host system, so when you restart the container the data is still available. There are other best practices that can be used, such as data volumes and the --volumes-from flag. All this is described in the docker docs and the docker run reference.
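
For the data-volume pattern mentioned above, a minimal sketch could look like this (the my/mariadb image name is reused from the example above; the container names and the /var/lib/mysql path, which is MariaDB's usual data directory, are illustrative):

# data-only container that simply owns the volume and exits immediately
docker run -v /var/lib/mysql --name mariadb-data busybox true

# the actual database container reuses that volume, so its data survives
# stopping, removing and re-creating the database container
docker run --volumes-from mariadb-data --name mariadb my/mariadb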

If you start a container with a web server (in your case Apache), the EXPOSE directive can be used to expose e.g. port 80 on the container. To make that port reachable from the host system, a port mapping is required via -p or -P. The -p flag can be used like this:

docker run -p 80:80 my/apache

The command above maps port 80 on the host to port 80 in the container. You can also bind to a specific host interface (such as 127.0.0.1) using the -p flag. More info on port mapping can be found in the docker docs and also under the Linking Containers section.
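
As an illustration (reusing the my/apache image name from above; the mariadb container name is an assumption based on the earlier example), binding to the loopback interface only and linking to a database container might look like this:

# publish container port 80 on the host's loopback interface only, host port 8080
docker run -p 127.0.0.1:8080:80 my/apache

# link the web server to a running container named "mariadb"; inside the
# Apache/PHP container the database is then reachable under the alias "db"
docker run --link mariadb:db -p 80:80 my/apache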


5 Comments

Thanks. Suppose I want to keep the MariaDB instance fully insulated - different users will be using the system - so I save & load the right MariaDB container as and when required, along with the Apache etc. containers? Do you see any issues there? Also, given how closely knit PHP is with a server, is it OK to put PHP & Apache in the same container or should even they be hosted in their own distinct containers?
I don't see any real issues with a multi-user environment and docker. One important aspect though is that exposing ports on the host should be avoided in that case (e.g. port 80 can only be mapped to one apache container). The users should probably just use the apache container's IP and port to reach the web server, and connect the web server and database using container linking. Furthermore, it seems reasonable to put PHP and Apache in the same container.
I am accepting your answer. However, if you wouldn't mind: could you clarify what you mean by the apache container's IP? Can I assign a separate IP to each container that is accessible to the outside world?
No, what I mean is that each docker container gets its own IP. The IP is internal and attached to the docker0 interface. So, if a user is logged in and has started an apache container, they can access it using the internal IP (while logged on to the host, e.g. via ssh). But the IP can not be accessed from the outside world. The only way to reach the container from the outside world is to do port forwarding.
Port forwarding - I had thought as much. Thanks.

Loading separate containers for each of the above and linking them together

This will lead to 3 Dockerfiles, each with an EXPOSE command, so that, when your containers are up on your computer, if you browse to http://localhost:1234 (just an example port) you will reach your first container (MariaDB + Apache + PHP), with http://localhost:2345 you will reach CentOS + ditto, and so on.
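
A minimal Dockerfile sketch for the Ubuntu-based web tier (the package names and the CMD are illustrative assumptions, not taken from the question):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2 php5 libapache2-mod-php5
# document that the web server listens on port 80; publish it with -p/-P at run time
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]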

Have a look at

https://docs.docker.com/reference/builder/#expose

and look at

docker inspect --format '{{ .NetworkSettings.IPAddress }}' container
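
For example, assuming a running container named apache (the name is illustrative), the two can be combined like this:

# grab the container's internal IP on the docker0 bridge and test port 80 from the host
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' apache)
curl "http://$IP/"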

1 Comment

Thank you for the answer. I guess my original statement re separate containers was not clear. What I meant was separate containers for distro, db, server... that are then linked together.
