Serge Koba

Web, blockchain, mobile and IoT developer

Web Team Lead at MobiDev who likes learning new stuff non-stop. Main languages: PHP and Ruby. Doesn't hesitate to play with blockchain, mobile and IoT. We live as long as we learn something.

Docker in Production using Rancher

30th of December, 2016 00:47

Many developers I've talked to use Docker only for local development and are suspicious about it when it comes to production. Today I hope to make some of them feel more confident about Docker and share my experience of deploying simple web applications to production with it.

By the way, this blog itself uses Docker both locally and in production. Fork me

For container orchestration I use Rancher.

In short, this article covers:

  • building Docker images of your web app;
  • installing Rancher on a server;
  • adding Docker hosts;
  • creating a new Rancher Stack and deploying the application.


Building Docker images

The gist of a Docker-based deploy is that downloading a new image of your web application from Docker Hub should be almost enough to deploy it.

Docker Hub is a centralized repository for Docker images. You can find a huge number of production-ready images there. Need a Redis image? Or an image that launches Tomcat? Just search for it on Docker Hub, start it, and you have a working application or service.
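For example, a ready-to-use Redis service is literally one command away (the container name here is just an arbitrary example):

docker run -d --name my-redis -p 6379:6379 redis:latest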

In addition to official images of popular software, anyone can store their own private images there. Docker images are built and configured via a Dockerfile.

A Dockerfile is something like a Makefile, but instead of compile instructions it contains container build instructions. You just write your own Dockerfile and Docker creates an image of your container based on it. A Dockerfile lets a developer create a repeatable environment.

What should you do inside a Dockerfile? Install the required software (for example Ruby, Node.js, npm), copy the application code and configuration files into the image, and install libraries (rubygems, composer, npm, bower).

What should you NOT do inside a Dockerfile? Don't run any commands for seeding or migrating the database: the image should stay environment-agnostic, and migrations run against a concrete database after deploy.
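To make this concrete, here is a minimal, hypothetical sketch of such a Dockerfile for a small Rack application like this blog; it is not the blog's actual Dockerfile (that one is linked in the next paragraph):

# Hypothetical minimal Dockerfile for a small Rack app (not the blog's real one)
FROM ruby:2.3.1

# Install gems first so this layer stays cached while the code changes
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the application code and configuration into the image
COPY . .

# Note: no database seeding or migrations here
CMD ["rackup", "--port", "3000", "--host", "0.0.0.0"]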

When your Dockerfile is ready (as an example we can use the Blog's Dockerfile), you can build your image and upload it to Docker Hub. You can read more about creating a Dockerfile in the previous post. First sign up on Docker Hub and create your personal repository there; I called mine "blog", and my username is "sergkoba".

#!/bin/bash
echo "First time you should login to DockerHub"
docker login
echo "Go to project's folder"
cd PATH_TO_YOUR_PROJECT
echo "Build Docker image"
docker build -f Dockerfile -t sergkoba/blog:latest .
echo "Push image to DockerHub"
docker push sergkoba/blog:latest

The -t flag sets the image name and tag ("sergkoba/blog:latest"), a human-readable reference for the image. The -f flag could be skipped in this case; it provides the path to the Dockerfile, and ./Dockerfile is the default.

From this point on, our image is uploaded to Docker Hub and ready to use.
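As an optional sanity check, you can pull the image on any Docker host and run it by hand; it won't find its database here, but it confirms that the image downloads and starts:

docker pull sergkoba/blog:latest
docker run --rm -p 3001:3001 sergkoba/blog:latest rackup --port 3001 --host 0.0.0.0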


Installing Rancher

To use Rancher you need a server with Docker and at least 1 GB of RAM. The last requirement is strict: you can't install Rancher on a server with less than 1 GB of RAM.

To install Rancher, connect to your server via SSH and run the following command:

$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server

It downloads the Rancher image (rancher/server) and starts it on port 8080. In a minute or two you can visit your host's URL on port 8080 and see the Rancher welcome page.
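If you are impatient, you can watch the server boot and check that the UI is answering (replace YOUR_SERVER_IP with your host's address; Ctrl+C stops following the logs):

sudo docker logs -f $(sudo docker ps -q --filter ancestor=rancher/server)
curl -I http://YOUR_SERVER_IP:8080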

By default Rancher starts in "public" mode, so right after you see the welcome screen, go to Admin -> Accounts in the menu and create a Rancher account for secured access.

A few words about terminology. A Stack is a group of containers united by some attribute. For example, the Stack "blog" contains 4 containers:

  • admin - the blog's control panel;
  • bloglb - load balancer;
  • db - PostgreSQL database service;
  • front - the web part of the blog visible to users.

So this is a so-called "User Stack", in other words a group of services created by the user (you). Rancher also creates its own "Infrastructure Stacks", which are required for communication between hosts, health monitoring, task scheduling, etc.

Your list of "infrastructure" stacks can differ; Rancher adds them after you create a new host. By default Rancher also creates a user stack called "default", which you can delete.


Adding a new host

To let Rancher create stacks and start containers, you need to add at least one host. Go to Infrastructure -> Hosts -> Add Host. There are several options to choose from: manual host addition (Custom) or via a hosting provider's API, for example DigitalOcean or Amazon.

If you choose Custom, you'll need to connect to the host via SSH and run the Rancher Agent there. The command for that is provided by Rancher after you select Custom, so read the screen carefully. After that the Rancher Agent pings the server to report that a new host has appeared, and in a minute or so you should see the new host in your host list.
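For reference, the generated command looks roughly like this; the agent version and the registration token will be different for you, so always copy the exact command from the Add Host screen rather than this sketch:

sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.1 http://YOUR_RANCHER_SERVER:8080/v1/scripts/REGISTRATION_TOKEN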

To be honest, I was too lazy to set up a host manually, so I selected DigitalOcean and let Rancher install Docker, the Rancher Agent and everything else. The only things you need to provide in this case are your DigitalOcean API token and the configuration of the new host (it depends on what applications you are going to deploy, but for the blog the minimal 512 MB RAM server is enough). The rest is done entirely by Rancher.

Here is my host, on which I have deployed the "blog". You can see that Rancher added the "infrastructure" stacks for monitoring the host's state.


Deploy application (Docker Compose)

Now we could create a new stack via the nice UI and add several services (containers) to it. But I suggest automating this part of the work. For local Docker development I use Docker Compose, a tool for defining Docker-based applications that consist of many containers (you can read about Docker Compose in detail in the previous post about Docker). Docker Compose is configured with a docker-compose.yml file.

The simplified docker-compose.yml for the blog looks like this:

version: '2'
services:
  bloglb:
    ports:
      - 80:80/tcp
    labels:
      io.rancher.container.create_agent: 'true'
      io.rancher.container.agent.role: environmentAdmin
    image: rancher/lb-service-haproxy:v0.4.6
  db:
    image: postgres:latest
    volumes:
      - /var/lib/postgresql/data
    environment:
      POSTGRES_DB: blog
  admin:
    labels:
      io.rancher.container.pull_image: always
    image: sergkoba/blog:latest-admin
    command: rackup --port 3000 --host 0.0.0.0
    links:
      - db
    environment:
      RACK_ENV: production
  front:
    labels:
      io.rancher.container.pull_image: always
    image: sergkoba/blog:latest
    command: rackup --port 3001 --host 0.0.0.0
    links:
      - db
    environment:
      RACK_ENV: production

This docker-compose.yml is already pre-configured for production: it uses the images from Docker Hub (sergkoba/blog:latest and sergkoba/blog:latest-admin). In the local environment you'll want Docker to build the images from the local Dockerfile and to synchronize code changes with the container without rebuilding the image each time (i.e. mount the app code as a volume).

To solve this we can create an additional docker-compose.dev.yml configuration file:

version: '2'
services:
  db:
    ports:
      - "5437:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: development
  admin-bundler-cache:
    image: ruby:2.3.1
    command: echo 'Data Container for Ruby 2.3.1 bundled gems'
    volumes:
      - /usr/local/bundle
  admin:
    build: admin
    command: rerun -- rackup --port 3000 --host 0.0.0.0
    volumes:
      - ./admin:/app
    volumes_from:
      - admin-bundler-cache
    ports:
      - "3000:3000"
    environment:
      RACK_ENV: development
    stdin_open: true
    tty: true
  front-bundler-cache:
    image: ruby:2.3.1
    command: echo 'Data Container for Ruby 2.3.1 bundled gems'
    volumes:
      - /usr/local/bundle
  front:
    build: front
    command: rerun -- rackup --port 3001 --host 0.0.0.0
    volumes:
      - ./front:/app
    volumes_from:
      - front-bundler-cache
    ports:
      - "3001:3001"
    environment:
      RACK_ENV: development
    stdin_open: true
    tty: true

Now we can start our Docker application locally with the following command:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d

During startup the settings from docker-compose.yml and docker-compose.dev.yml are merged, and as a result we get containers with all the development "tunings". The docker-compose.yml file stays "clean" (without local development settings) and ready for production. For convenience I created an alias in my .bashrc:

alias dcg="docker-compose -f docker-compose.yml -f docker-compose.dev.yml"

Now you can run your app like this

dcg up -d
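The usual docker-compose subcommands work through the alias as well, which is handy day to day:

dcg config        # show the merged (base + dev) configuration
dcg logs -f front # tail the front container's output
dcg ps            # list the stack's containers
dcg down          # stop and remove the development containers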


Deploy application (Rancher CLI)

The gist of using the Rancher CLI is controlling your application's stack via the docker-compose.yml file. For that purpose the Rancher CLI uses an additional file, rancher-compose.yml, in which you set the number of containers and the load balancer settings:

version: '2'
services:
  bloglb:
    scale: 1
    lb_config:
      certs: []
      port_rules:
        - hostname: '1devblog.org'
          path: ''
          priority: 1
          protocol: http
          service: front
          source_port: 80
          target_port: 3001
        - hostname: 'admin.1devblog.org'
          path: ''
          priority: 2
          protocol: http
          service: admin
          source_port: 80
          target_port: 3000
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  db:
    scale: 1
  front:
    scale: 1
  admin:
    scale: 1

Before installing the Rancher CLI you should generate API keys on the Rancher server; using these keys the Rancher CLI can access your stacks. To do this, go to API -> Add Account API Key in the Rancher server UI, generate the keys and copy them somewhere.

To install the Rancher CLI it's enough to download this executable and place it somewhere in your PATH, for example in /home/USER_NAME/bin. Next you should run:

rancher config

When the Rancher CLI prompts, enter your Rancher server URL and the generated API keys.
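If you'd rather avoid the interactive prompts (for example on a CI machine), the CLI can, as far as I know, also pick up the same settings from environment variables; double-check the exact variable names against your CLI version's documentation:

export RANCHER_URL=http://YOUR_RANCHER_SERVER:8080
export RANCHER_ACCESS_KEY=YOUR_ACCESS_KEY
export RANCHER_SECRET_KEY=YOUR_SECRET_KEY

Now run the following in the project's folder: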

rancher up -d

This tells the Rancher CLI to create a new user stack named after the project's folder, create the containers from rancher-compose.yml, configure them according to docker-compose.yml and launch them. From this point your application should be up and running. To update the containers, the following Rancher CLI command is used:

rancher up -d -u -c

It checks whether containers were changed in docker-compose.yml or rancher-compose.yml and updates the changed ones (the -u flag); after a successful update it replaces the old containers with the new ones (the -c flag). During the update the old versions of the containers keep working, which is why we can expect close to zero-downtime deploys.

Don't forget to add these labels in docker-compose.yml:

labels:
  io.rancher.container.pull_image: always

This tells Rancher that during an update it should always pull fresh images from Docker Hub. Another way to achieve the same is to add the -p flag to the update command:

rancher up -d -u -c -p


Conclusion

It should be mentioned that all these operations can also be done via Rancher's UI. To update a service in a stack you can click the up-arrow icon.

Overall, I see the following deploy procedure for Docker applications using Rancher:

  1. Run tests.
  2. Build and push the application image(s) to Docker Hub.
  3. Update the stack in Rancher.
  4. Run database migrations and other required post-deploy actions.

These 4 actions can be split into 2 groups: "Build" (1, 2) and "Deploy" (3, 4). Ideally, the "Build" group is triggered automatically after each push to the repository, and "Deploy" is launched manually when you need to deploy your app.
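For illustration, a minimal sketch of the two groups as shell commands could look like this; the test command and the migration step are my assumptions about this blog, not something prescribed by Rancher:

# "Build" group - usually triggered by CI on every push
bundle exec rake test                   # 1. run the tests (assumed test command)
docker build -t sergkoba/blog:latest .  # 2. build the image...
docker push sergkoba/blog:latest        #    ...and push it to Docker Hub

# "Deploy" group - launched when you want to release
rancher up -d -u -c -p                  # 3. update the stack in Rancher
# 4. run database migrations and other post-deploy actions here (placeholder)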

Any of these actions can be run in fully automated mode from your CI system (Semaphore CI, Codeship, Bamboo, Jenkins).
