adventures into the land of the command line

my dorky continuous delivery solution with git and travis ci

I’m actually pretty excited about this, even though I know everyone is using containers nowadays. I’ll get to that soon!

So my stack is:

flask
gunicorn
supervisor
nginx

The workflow or pipeline is:
1. Write some code locally
2. Run unit tests locally
3. Check the app in a browser locally
4. Commit your changes to git
5. Push your changes
6. Travis ci will check out your changes and run the unit tests
7. Travis ci will send a webhook notification to the app
8. The app will check the POST data from travis ci
9. If the message contains success, the app will execute a bash script
10. The bash script will do a git pull
11. The bash script will send a HUP signal to gunicorn processes
12. Supervisor will see that the gunicorn processes have hung up
13. Supervisor will spawn new ones containing the new code which has just been pulled from git, with no outage.

From git push to a successful deployment takes about 1 minute, and as long as travis builds successfully, your staging or prod app will have the changes when you reload it.

So these are the ‘extra’ bits in my python web application that are required for all of this to happen.

The folder structure looks something like this:

├── deploy.sh
├── myapp.wsgi
├── .git
├── .gitignore
├── index.py
├── README.md
├── requirements.txt
├── static
│   ├── index-style.css
├── templates
│   └── index.html
├── tests.py
└── .travis.yml

myapp.wsgi, index.py, static/ and templates/ make up the basic app. The .git folder and .gitignore file are required for git. tests.py contains the unit tests. .travis.yml tells travis ci what to do on a new commit. requirements.txt contains the build dependencies which travis ci will use. deploy.sh is a bash script containing two commands.
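For reference, a minimal requirements.txt for this stack might look like the following — a sketch based on the stack listed above and the eventlet worker that shows up in the gunicorn logs later; you’d normally pin versions:

```
flask
gunicorn
eventlet
```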

The contents of the .travis.yml:

sudo: required
language: python

python:
  - '2.7'

compiler:
  - gcc

install:
  - pip install -r requirements.txt

script:
  - python tests.py

notifications:
  webhooks:
    urls:
      - https://www.myapp.com/random_string
    on_success: always
    on_failure: never

This tells travis ci what environment to set up for your app, which dependencies to install, where your tests are, and how to send a notification when the build has finished. With the webhooks instruction, travis will send an HTTP POST to the url you specify if the build was successful. Travis build notification documentation is here. You can read about the format and content of the POST there.
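The Authorization header on that webhook is just a sha256 digest of your repo slug concatenated with your Travis token, which is what the app checks below. Here’s the computation on its own — a sketch with made-up values (Python 3’s hashlib wants bytes, hence the encode):

```python
from hashlib import sha256

# Hypothetical values -- substitute your own repo slug and Travis token.
repo_slug = 'my_git_username/my_git_repo'
travis_token = 'my_travis_auth_token'

# Travis sends sha256(repo_slug + token) as the Authorization header,
# so the app can recompute the digest and compare.
expected_auth = sha256((repo_slug + travis_token).encode('utf8')).hexdigest()
print(expected_auth)  # a 64-character hex string
```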

In the app itself, you need to create an endpoint or route that matches the webhook url you’ve told travis to POST to after a successful build. So in index.py, add a new view and some extra libraries:

import json, logging, urllib
from hashlib import sha256
from subprocess import call
from flask import request, Response
.
.
.
@app.route('/random_string', methods=['POST'])
def random_string():
    response = {}
    if request.method == 'POST':
        if request.headers['Authorization'] == sha256('my_git_username/my_git_repo' + my_travis_auth_token).hexdigest():
            response = request.get_data()
            response = urllib.unquote(response).decode('utf8')
            response = response.replace("payload=", "")
            response = json.loads(response)
            if response['result'] == 0:
                if response['result_message'] in ('Passed', 'Fixed'):
                    call("./deploy.sh")
                    resp = Response('{"Status":"Deployed"}', status=200, mimetype='application/json')
                    return resp
                else:
                    logging.error('Build Error: CI build result message was not successful.', extra={'Build Result Message': response['result_message']})
                    resp = Response('{"Status":"Broken"}', status=500, mimetype='application/json')
                    return resp
            else:
                logging.error('Build Error: CI build result was not successful.', extra={'Build Result': response['result']})
                resp = Response('{"Status":"Broken"}', status=500, mimetype='application/json')
                return resp
        else:
            logging.error('Auth Error: Webhook POST request was not from Travis CI.', extra={'Auth Header': request.headers['Authorization']})
            resp = Response('{"Status":"Broken"}', status=500, mimetype='application/json')
            return resp
    else:
        logging.error('HTTP Status Error: Method not allowed.', extra={'HTTP Method': request.method})
        resp = Response('{"Status":"Broken"}', status=500, mimetype='application/json')
        return resp

This view only accepts POST requests. If the Authorization header matches the digest expected from travis, it loads the data and looks at the result value and message; if they indicate a passing build, it runs the bash script. Otherwise it logs an error and returns an error response.
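The payload handling in the middle of that view can be exercised in isolation. Here’s a sketch against a hand-rolled, heavily abridged payload (real Travis payloads carry many more fields); note this uses Python 3’s urllib.parse where the view above uses Python 2’s urllib.unquote:

```python
import json
from urllib.parse import unquote

def parse_travis_payload(body):
    """URL-decode the body, strip the payload= prefix, and parse the JSON."""
    return json.loads(unquote(body).replace('payload=', '', 1))

def build_passed(payload):
    """True when the build result means a deploy should happen."""
    return payload['result'] == 0 and payload['result_message'] in ('Passed', 'Fixed')

# Made-up, abridged body carrying just the two fields the view inspects:
fake_body = 'payload=%7B%22result%22%3A%200%2C%20%22result_message%22%3A%20%22Passed%22%7D'
print(build_passed(parse_travis_payload(fake_body)))  # True
```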

The bash script contains:

#!/bin/bash

git pull
ps -ef | grep [g]unicorn | awk '{ print $2 }' | xargs kill -HUP

This does a git pull in the current directory, then finds the running gunicorn processes and sends each one a hangup (HUP) signal. (The [g]unicorn pattern is a common trick to stop grep from matching its own process in the ps output.)
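For illustration, here is the same pid-hunting the grep/awk pipeline does, sketched in Python against canned ps output (pids borrowed from the logs below). The actual kill is commented out so the snippet is safe to run anywhere:

```python
import os
import signal

def gunicorn_pids(ps_text):
    # Column 2 of `ps -ef` output is the pid; match any line mentioning gunicorn.
    return [int(line.split()[1])
            for line in ps_text.splitlines() if 'gunicorn' in line]

# Canned `ps -ef` output so the parsing is visible:
sample = (
    'root  3701  3693  0 01:33 pts/0  00:00:01 gunicorn master\n'
    'root  3706  3701  0 01:34 pts/0  00:00:01 gunicorn worker\n'
)
print(gunicorn_pids(sample))  # [3701, 3706]

# In a real deploy script you would run something like:
# import subprocess
# for pid in gunicorn_pids(subprocess.check_output(['ps', '-ef']).decode()):
#     os.kill(pid, signal.SIGHUP)
```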

When the script is triggered to run, this is what gunicorn’s logs report is happening:

[2016-08-22 01:33:30 +0000] [3701] [INFO] Listening at: http://0.0.0.0:5000 (3701)
[2016-08-22 01:33:30 +0000] [3701] [INFO] Using worker: eventlet
[2016-08-22 01:33:30 +0000] [3706] [INFO] Booting worker with pid: 3706
[2016-08-22 01:33:32 +0000] [3701] [INFO] Handling signal: hup
[2016-08-22 01:33:32 +0000] [3701] [INFO] Hang up: Master
[2016-08-22 01:33:32 +0000] [3717] [INFO] Booting worker with pid: 3717

3701 is the master process, and 3706 was a child. 3706 was hung up and a new worker, 3717, was spawned immediately after. As the log shows, the master handles the HUP signal itself and reloads its workers with the freshly pulled code; supervisor’s job is to watch the master and spawn a new one if it ever dies, so the running processes always match its configuration. You can see the processes being reloaded:

Before:

# ps -ef | grep gunicorn
root      3701  3693  0 01:33 pts/0    00:00:01 /usr/bin/python /usr/bin/gunicorn -w 1 -k eventlet -b 0.0.0.0:5000 index:app
root      3706  3701  0 01:34 pts/0    00:00:01 /usr/bin/python /usr/bin/gunicorn -w 1 -k eventlet -b 0.0.0.0:5000 index:app

After:

# ps -ef | grep gunicorn
root      3701  3693  0 01:33 pts/0    00:00:01 /usr/bin/python /usr/bin/gunicorn -w 1 -k eventlet -b 0.0.0.0:5000 index:app
root      3717  3701  0 01:34 pts/0    00:00:01 /usr/bin/python /usr/bin/gunicorn -w 1 -k eventlet -b 0.0.0.0:5000 index:app
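For completeness, the supervisor side might look something like this — a sketch where the program name and directory are assumptions, but the command line matches the ps output above:

```
[program:gunicorn]
command=/usr/bin/gunicorn -w 1 -k eventlet -b 0.0.0.0:5000 index:app
directory=/root/fromearth
autostart=true
autorestart=true
```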

You can read about how to set up supervisor with gunicorn here. Reload your browser and your staging or prod app will have all the new changes. And that is how my dorky solution works. Next time, containers!