Set up your local Kubernetes environment
Add this entry to your /etc/hosts file:
IMPORTANT - note that the IP address may be 192.168.99.101, 192.168.99.102, etc., depending on how many times you have recreated your environment locally. Run minikube ip to check the current value and use that.
192.168.99.100 dev.myapp.com
Minikube requirements (one of):
- virtualbox (at least version 5)
- vmwarefusion
- kvm (driver installation)
- xhyve (driver installation)
Install kubectl, helm & minikube:
IMPORTANT - please make sure you are using minikube version 0.24.1
### Mac

# kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# kubernetes-helm
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.7.2-darwin-amd64.tar.gz
tar -zxvf helm-v2.7.2-darwin-amd64.tar.gz
sudo mv darwin-amd64/helm /usr/local/bin/helm

# minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/minikube

### Linux

# kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# kubernetes-helm
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.7.2-linux-amd64.tar.gz
tar -zxvf helm-v2.7.2-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm

# minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.24.1/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/minikube
Start minikube, sync the project directory’s files into the minikube VM and load the dashboard:
IMPORTANT - please make sure you use kubernetes version v1.8.0
minikube start --memory 4096 --kubernetes-version v1.8.0

rsync -av --numeric-ids --stats -e "ssh -i $(minikube ssh-key)" --rsync-path="sudo rsync" /my/project/dir/ docker@$(minikube ip):/myprojectdir/

minikube dashboard
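To verify the files landed inside the VM, you can list the target directory over ssh (using the /myprojectdir/ target from the command above):

minikube ssh "ls -la /myprojectdir/"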
Add an nginx ingress controller to your local Kubernetes cluster, as well as heapster:
minikube addons enable ingress
minikube addons enable heapster
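To confirm that both addons are enabled:

minikube addons list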
Install Tiller (Helm's server-side component) into your local Kubernetes cluster:
helm init
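helm init installs Tiller as a deployment in the kube-system namespace. Before deploying anything, it's worth checking that the Tiller pod is running and that helm reports both client and server versions (the name=tiller label is what helm init applies to the deployment):

kubectl get pods -n kube-system -l name=tiller
helm version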
You just saw this rsync command above:
rsync -av --numeric-ids --stats -e "ssh -i $(minikube ssh-key)" --rsync-path="sudo rsync" /my/project/dir/ docker@$(minikube ip):/myprojectdir/
The workflow here is to sync files into the minikube VM rather than mount them from the host, because on a Mac, mounting is far too slow with Docker, especially for Node projects. Once the files have been synced into the minikube VM, we can mount them into the Docker containers that run in the minikube Kubernetes cluster.
Let’s take a Node project as an example.
Development Workflow
We will do the work on our host, using an IDE/text editor on the host along with node/npm on the host. We will use the minikube Docker daemon, and we will rsync changed files into the minikube VM.
Change to the root of your app project directory on your host:
cd /my/node/project/dir/
Build this example Node application using the npm installed on your host:
npm i; npm run build
Re-sync any changed files into the minikube vm:
IMPORTANT - remember to do this each time you rebuild; it could be wrapped in a package.json script (I haven’t built any hot-reload functionality yet). A sketch of such a wrapper follows below.
rsync -av --numeric-ids --stats -e "ssh -i $(minikube ssh-key)" --rsync-path="sudo rsync" /my/project/dir/ docker@$(minikube ip):/myprojectdir/
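A minimal sketch of such a wrapper, assuming the same paths as above (sync.sh and the variable names are hypothetical):

#!/usr/bin/env bash
# sync.sh - hypothetical helper to push local changes into the minikube VM.
set -euo pipefail

SRC="/my/project/dir/"   # project directory on the host
DEST="/myprojectdir/"    # target directory inside the minikube VM

# Same rsync invocation as above: ssh in with minikube's key and run
# rsync as root on the VM side so it can write to DEST.
rsync -av --numeric-ids --stats \
  -e "ssh -i $(minikube ssh-key)" \
  --rsync-path="sudo rsync" \
  "$SRC" "docker@$(minikube ip):$DEST"

You could then add something like "sync": "./sync.sh" to the scripts section of package.json and run npm run sync after each build.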
Use minikube’s docker daemon:
eval $(minikube docker-env)
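If you want to see what this sets before applying it, run the command on its own; it prints the export statements (DOCKER_HOST and friends) that point your local docker client at the daemon inside the VM:

minikube docker-env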
Build your dev docker image inside the minikube VM:
docker build -t my-node-app -f Dockerfile.dev .
The Dockerfile might look something like this:
FROM node:8.9 as builder
# The real build runs on the host (npm run build); this stage only records
# the node/npm versions the project expects.
RUN node -v && npm -v

FROM nginx:1.12
# No app files are copied in: in dev they are mounted into /var/www at
# runtime via the persistent volume defined in the helm chart below.
COPY ./nginx.conf /etc/nginx/nginx.conf
…and the nginx config might look something like:
worker_processes auto;

events {
  worker_connections 1024;
}

error_log /dev/stdout debug;

http {
  include mime.types;
  sendfile off;
  server_tokens off;

  server {
    listen 8080;
    access_log /dev/stdout;
    error_log /dev/stdout;
    client_max_body_size 0;

    location / {
      root /var/www;
      try_files $uri $uri/ =404;
      index index.html;
    }
  }
}
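One thing worth noting: because the image was built against the minikube VM’s Docker daemon, it is already visible to the cluster and nothing needs to be pushed to a registry (the imagePullPolicy: IfNotPresent in the values further down relies on this). A quick check that the image is present:

docker images | grep my-node-app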
Deploy the Node app to your minikube Kubernetes environment:
helm upgrade --install --values helm/myapp/values/dev.yaml --namespace dev dev helm/myapp
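To check that the release went out and the pod came up (standard helm/kubectl commands; the release name and namespace match the command above):

helm ls
kubectl -n dev get pods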
The missing piece you might be wondering about is how the Docker container serves the built HTML/CSS/JS files. It’s in the helm chart, using a persistent volume, a persistent volume claim and a volume mount:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-node-app
  namespace: {{ .Release.Namespace }}
  labels:
    env: {{ .Release.Namespace }}
    role: my-node-app
    app: my-node-app
spec:
  replicas: {{ .Values.nodeApp.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        env: {{ .Release.Namespace }}
        role: my-node-app
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: {{ .Values.nodeApp.imageName }}:{{ .Values.nodeApp.dockerImageVersion }}
          {{ if eq .Values.env "dev" }}
          volumeMounts:
            - mountPath: /var/www
              name: app-volume
          {{ end }}
          imagePullPolicy: {{ .Values.imagePullPolicy }}
          resources:
            requests:
              memory: {{ .Values.nodeApp.memory.requests }}
              cpu: {{ .Values.nodeApp.cpu.requests }}
            limits:
              memory: {{ .Values.nodeApp.memory.limit }}
              cpu: {{ .Values.nodeApp.cpu.limit }}
          ports:
            - containerPort: {{ .Values.nodeApp.port }}
              name: http
      {{ if eq .Values.env "dev" }}
      volumes:
        - name: app-volume
          persistentVolumeClaim:
            claimName: app-claim
      {{ end }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-volume
spec:
  storageClassName: app-manual
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 250Mi
  hostPath:
    path: /myprojectdir/web
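The claimName above refers to a PersistentVolumeClaim that isn’t shown in the snippet; a minimal sketch of what it might look like, matching the storageClassName and capacity of the PersistentVolume (the app-claim name comes from the Deployment above):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-claim
  namespace: {{ .Release.Namespace }}
spec:
  storageClassName: app-manual   # must match the PV so the claim binds to it
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi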
Values:
env: dev
imagePullPolicy: IfNotPresent
nodeApp:
  replicaCount: 1
  cpu:
    requests: 100m
    limit: 200m
  memory:
    requests: 100Mi
    limit: 200Mi
  port: 8080
  imageName: my-node-app
  dockerImageVersion: latest
Now every time we make a change to our application, we can rebuild it, re-sync it and redeploy it with helm (which mounts the files into the running pod). I admit this isn’t exactly seamless, but it is possible to work like this in a k8s-style environment.
A large upside (from an ops perspective) is that you can then use the same helm chart with different values files to provision separate environments. You could even package the whole thing as a helm chart stored in ChartMuseum for one-line installation of an entire environment.
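For example, a hypothetical helm/myapp/values/prod.yaml (all names and numbers here are illustrative) might differ only in env, replica count, resources and image tag:

env: prod                      # anything but "dev" skips the hostPath volume in the template
imagePullPolicy: IfNotPresent
nodeApp:
  replicaCount: 3
  cpu:
    requests: 200m
    limit: 500m
  memory:
    requests: 200Mi
    limit: 500Mi
  port: 8080
  imageName: my-node-app
  dockerImageVersion: "1.0.0"  # pin a real tag outside dev

Note that because the template only adds the volume mount when env is dev, a non-dev image would need the built files baked in (e.g. a COPY step in a production Dockerfile).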
Each environment will be built the same way, all of them will use the same tooling ecosystem, and your infrastructure changes can be tracked and committed to a git repository. If you want your dev environment to be as close as possible to your qa, staging, prod or any other environments you are running, this might be a viable skeleton of an option.
Resetting Everything
To remove the app from your minikube Kubernetes cluster:
helm del --purge dev
To destroy everything and reset completely:
minikube stop
minikube delete
rm -rf ~/.minikube
rm -f ~/.kube/config