Updated 7/12/2016: Applying a web server. See end of the post.
Updated 9/29/2016: Mounting Docker so you can edit container using IDE
This week I decided it was high time I learned docker. Below is how I wish a “getting started page” was laid out for me in retrospect; would have saved a lot of time….
At a high level, Docker is like a VM, only more light-weight and easier to install, manage, and customize than the alternatives. It is a great way to ensure everyone is deploying their project in the exact same way, and in the exact same environment. (The non-high-level version.)
Until recently, docker-machine was needed to run Docker on a Mac. Now you can just install the Docker OS X app and run the “Quick Start Terminal” to have your environment started properly (Update: the latest Mac version runs in the background and adds a Docker icon to your Mac menu bar). In short, if you don’t use docker-machine or the Quick Start Terminal, you will get a “Cannot connect to the Docker daemon. Is the docker daemon running on this host?” error.
First off, here are some very useful commands that keep you aware of the state of Docker’s containers …
#> docker ps
#> docker ps -a
#> docker images
Now, let’s create some containers! A container is an instance of an image that is either in a running or stopped state.
To create a running container that is based on a standard Ubuntu image:
#> docker run -it --name my-container ubuntu
This command will pull the image (if needed) and run the docker container. Once the container is built it will be named “my-container” (based on the --name parameter) and viewable using:
#> docker ps
(Shows all running containers.)
#> docker ps -a
(Shows all containers whether they are running or not.)
If you ever want to interact with your Docker container in a shell, you will need to include the “-t” param. It ensures a TTY is set up for interaction. To detach from a container while keeping it running, hit CTRL+P then CTRL+Q. Otherwise the container will stop upon exit.
The -i parameter starts the container “attached”. This means you will immediately be able to use terminal from within the running container. If you do not include the “-t” with the “-i” you will not be able to interact with the attached container in shell.
Alternatively, if you use the -d parameter instead of the -i parameter, your container will be created in “detached” mode, meaning it will run in the background. As long as you include “-t” in your “run” command, you will be able to attach to your container’s terminal at any time.
An example of running a container in detached mode:
#> docker run -td --name my-container-2 ubuntu
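To get back into a detached container later, you can attach to it. A quick sketch (using the container created above, and assuming the Docker daemon is running):

```shell
# Re-attach to the detached container's terminal
# (detach again with CTRL+P then CTRL+Q)
docker attach my-container-2

# Or open a separate shell inside the running container,
# leaving the container's main process alone
docker exec -it my-container-2 /bin/bash
```

The “docker exec” route is handy because exiting that shell does not stop the container.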
Next, let’s see how containers react to a stop command.
Create both containers above and run the “docker ps” and “docker ps -a” commands to see the running containers. Then, stop one of the containers using:
#> docker stop [container id]
… and then run “docker ps” and “docker ps -a” again. Do this over various permutations of the above run commands and parameters; you’ll get the hang of it.
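As a sketch of the full lifecycle (using the container names created above, and assuming the Docker daemon is running):

```shell
# Stop the running container; it still exists, it just isn't running
docker stop my-container-2

# "docker ps" no longer lists it, but "docker ps -a" still does
docker ps
docker ps -a

# Start it back up, or remove it for good once stopped
docker start my-container-2
docker stop my-container-2
docker rm my-container-2
```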
Now that you have a container created based on a standard ubuntu image, let’s see if you can create a custom image of your own.
A Dockerfile is used to define how the image should be pre-configured once built. Here you can make sure you have all your required packages and structure set up – like a light-weight puppet file. The simplest example of a Dockerfile is a single line like this:
FROM ubuntu:latest
This says: build my custom image starting from the latest Ubuntu image. Save that one-liner in a file named “Dockerfile” in your Present Working Directory. Pulling the latest Ubuntu image is its only configuration requirement.
To create our custom image based on that Dockerfile:
#> docker build -t my-image .
Here we are asking docker to build an image and give it a name (using the -t parameter) “my-image”. The last parameter “.” tells Docker where the Dockerfile is located – in this case the PWD.
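As an illustration, a slightly fuller Dockerfile might pre-install some packages at build time (the package choices below are my own example, not part of the original one-liner):

```dockerfile
# Start from the latest Ubuntu image
FROM ubuntu:latest

# Pre-install whatever packages your project needs
RUN apt-get update && apt-get install -y curl vim
```

Each RUN line is baked into the image, so every container you start from it already has those packages.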
Now you can run …
#> docker images
… to see all the images which should now include “ubuntu” and your newly created “my-image”.
Just as we used Ubuntu as the base image in the beginning of this tutorial, we can now use our custom “my-image” image to create our new running containers.
#> docker run -it --name my-container-3 my-image
UPDATE: Applying a Web Server
When learning on your own, finding an answer has more to do with knowing the right question than anything else. For a while I kept looking up ways to connect my server’s apache config to the running docker container. I readily found info on mapping ports (for example, “-p 8080:80”), but wanted to know how to point my server’s inbound traffic of 8080 to the container’s localhost port 80’s traffic. This was entirely the wrong way of looking at it.
Docker creates a sort of IP tunnel between your server and the container. There is no need to create any hosts (or vhosts) on your server, or to even setup apache on your server for that matter, to establish the connection to your running container.
That may have seemed obvious to everyone else, but it finally clicked for me today.
This step-by-step tutorial finally nailed it for me:
In short, you will create a container, install apache2 within that container, and run apache2 within that container (by mapping your server’s inbound port to the container’s inbound port), and voila – done!
Note: Be sure to use “EXPOSE” in your Dockerfile to open up the port you will be using in your run command. Without it you will have connection issues. For example, in your Dockerfile, include:
EXPOSE 8000
And then in your run command use:
#> docker run -it -p 8000:8000 [image-name]
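Putting those pieces together, a minimal apache2 setup might look like the sketch below (the image name, port choices, and Dockerfile contents are my own illustration, not the linked tutorial’s exact steps; it assumes a running Docker daemon):

```shell
# A Dockerfile that installs apache2 and exposes port 80
cat > Dockerfile <<'EOF'
FROM ubuntu:latest
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
EOF

# Build the image, then map the host's port 8080 to the container's port 80
docker build -t my-apache-image .
docker run -it -p 8080:80 my-apache-image /bin/bash

# Inside the container, run apache in the foreground:
#   apachectl -D FOREGROUND
# Then browse to http://localhost:8080 on the host.
```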
Yet another important note: If you decide to run your web server in dev mode, make sure that you bind your container IP as well as your port. Some dev web servers (like Django’s) bind to 127.0.0.1 by default, while Docker expects the service to listen on 0.0.0.0. So, in the case of spinning up a Django dev server in your container, be sure to specify:
#> ./manage.py runserver 0.0.0.0:8000
UPDATE: Mounting Docker to Host to edit Container using IDE
Having to rebuild your docker container every time you want to deploy locally is a pain. I just learned today that you can mount local host folders as a volume inside of your docker container. Once you have built your image, simply run your docker container using the -v param like so:
#> docker run -it -v /Users/myuser/repos/my-project:/tmp [image-name] /bin/bash
Where “/Users/myuser/repos/my-project” is the folder on your local machine you want to be available inside of your container, and “/tmp” is the directory where you can access that volume from within the running container.
Once that is done, just edit the files locally in “/Users/myuser/repos/my-project” and it will be in perfect sync with your docker code!
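Combining the volume mount with the earlier port mapping, a single run command gives you live editing plus a reachable dev server (the path and image name are from my setup; substitute your own):

```shell
# Mount the local project at /tmp and expose the dev-server port
docker run -it \
  -v /Users/myuser/repos/my-project:/tmp \
  -p 8000:8000 \
  my-image /bin/bash

# Inside the container, run the dev server bound to all interfaces:
#   cd /tmp && ./manage.py runserver 0.0.0.0:8000
```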