
helm, the k8s package manager

so i’ve been using kubernetes for a few months now, and one of my favourite parts of the sprawling project is helm. it’s kind of like a package manager, but given the purpose of k8s it can be a package manager for an entire environment: not just one specific application like nginx or rabbitmq, but an ENTIRE project application environment. you write it all down in a collection of template-able yaml files and save them as “charts”, which you can then store in a chart repository (served by something like chartmuseum). charts can even declare dependencies on other charts.

charts, charts, charts…

helm is made up of two parts: the client (helm) and the server (tiller)

background

install helm (mac):

$ sudo curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.7.2-darwin-amd64.tar.gz; sudo tar -zxvf helm-v2.7.2-darwin-amd64.tar.gz; sudo mv darwin-amd64/helm /usr/local/bin/helm

install helm (linux):

$ sudo curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.7.2-linux-amd64.tar.gz; sudo tar -zxvf helm-v2.7.2-linux-amd64.tar.gz; sudo mv linux-amd64/helm /usr/local/bin/helm

install tiller onto your k8s cluster:

$ helm init
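
helm init deploys tiller into the kube-system namespace; you can confirm it came up with:

$ kubectl get pods --namespace kube-system | grep tiller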

from k8s 1.6 onwards RBAC is enabled, which requires service accounts, roles carrying sets of permissions, and bindings between service accounts and those roles. if the steps above didn’t yield much for you, it’s likely because there’s no service account, role, or clusterrolebinding for tiller yet.
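
a quick way to check what already exists:

$ kubectl get serviceaccount tiller --namespace kube-system
$ kubectl get clusterrolebinding | grep -i tiller

if those come back empty, create the pieces as follows.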

create a service account for tiller:

$ kubectl create serviceaccount tiller --namespace kube-system

bind the service account for tiller to the default k8s cluster superuser role:

$ vim tiller-clusterrolebinding.yaml

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: tiller-clusterrolebinding
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io

$ kubectl create -f tiller-clusterrolebinding.yaml

update the existing tiller deployment:

$ helm init --service-account tiller --upgrade

test the new permissions:

$ helm ls

the helm and tiller versions must be the same or you’ll see:

Error: incompatible versions client[v2.7.2] server[v2.6.2]

$ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}

our versions are different… so upgrade tiller to match the client:

$ helm init --upgrade
Creating /Users/dudemandude/.helm
Creating /Users/dudemandude/.helm/repository
Creating /Users/dudemandude/.helm/repository/cache
Creating /Users/dudemandude/.helm/repository/local
Creating /Users/dudemandude/.helm/plugins
Creating /Users/dudemandude/.helm/starters
Creating /Users/dudemandude/.helm/cache/archive
Creating /Users/dudemandude/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/dudemandude/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!

$ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

good.

community charts
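
helm init already configured the stable repo; refresh it and search it to find charts to install:

$ helm repo update
$ helm search nginx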

nginx ingress example:

$ helm install stable/nginx-ingress --name this-is-just-nginx-actually --set rbac.create=true --set controller.publishService.enabled=true

TLS with kube-lego for letsencrypt example:

$ helm install stable/kube-lego --name kubelego --set rbac.create=true --set config.LEGO_URL=https://acme-staging.api.letsencrypt.org/directory
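
with the ingress controller and kube-lego both running, any ingress that sets the tls-acme annotation will have a certificate requested for it automatically. a rough sketch of such an ingress, with a hypothetical hostname and backend service:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: my-app                          # hypothetical ingress
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"      # tells kube-lego to request a cert
    spec:
      tls:
      - hosts:
        - my-app.example.com                # hypothetical hostname
        secretName: my-app-tls              # kube-lego stores the cert in this secret
      rules:
      - host: my-app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: my-app           # hypothetical backend service
              servicePort: 80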

consul example:

$ helm install stable/consul
NAME:   solid-snake
LAST DEPLOYED: Thu Jan 11 18:13:32 2018
NAMESPACE: default
STATUS: DEPLOYED
.
.
.

$ helm status solid-snake

(shows the same output as after the install command finishes)

watch all cluster members come up:

$ kubectl get pods --namespace=default -w

test cluster health using Helm test:

$ helm test solid-snake
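
helm test works by running any pods in the release annotated as test hooks (the consul chart ships with one). a minimal sketch of what such a test looks like in a chart’s templates/ directory, with an illustrative image and check:

    apiVersion: v1
    kind: Pod
    metadata:
      name: "{{ .Release.Name }}-smoke-test"
      annotations:
        "helm.sh/hook": test-success    # run by `helm test`; passes if the pod exits 0
    spec:
      restartPolicy: Never
      containers:
      - name: smoke-test
        image: alpine:3.7                                            # illustrative image
        command: ["nc", "-z", "{{ .Release.Name }}-consul", "8500"]  # illustrative port check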

manually confirm consul cluster is healthy:

$ kubectl exec solid-snake-consul-0 --namespace=default -- consul members | grep server

tear it all down:

$ helm delete solid-snake
$ helm status solid-snake
LAST DEPLOYED: Thu Jan 11 18:13:32 2018
NAMESPACE: default
STATUS: DELETED

a plain delete keeps the release record around (which is why helm status can still report on it); add --purge to remove the record completely and free up the name:

$ helm del --purge solid-snake

custom charts

first we need to understand the folder layout of a helm chart directory:

wordpress/
  Chart.yaml          # A YAML file containing information about the chart
  LICENSE             # OPTIONAL: A plain text file containing the license for the chart
  README.md           # OPTIONAL: A human-readable README file
  requirements.yaml   # OPTIONAL: A YAML file listing dependencies for the chart
  values.yaml         # The default configuration values for this chart
  charts/             # OPTIONAL: A directory containing any charts upon which this chart depends.
  templates/          # OPTIONAL: A directory of templates that, when combined with values,
                      # will generate valid Kubernetes manifest files.
  templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes

Helm reserves use of the charts/ and templates/ directories, and of the listed file names. While the charts/ and templates/ directories are optional, there must be at least one chart dependency or template file for the chart to be valid. (more info in the official helm charts documentation)
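
Chart.yaml names and versions the chart, and requirements.yaml is where dependencies on other charts are declared. a minimal sketch of each (the dependency name, version, and repo are just placeholders):

    # Chart.yaml
    apiVersion: v1
    name: my-chart                # hypothetical chart
    description: an example chart
    version: 0.1.0

    # requirements.yaml
    dependencies:
    - name: consul                # hypothetical dependency
      version: 0.4.1
      repository: https://kubernetes-charts.storage.googleapis.com

running helm dependency update then fetches the declared charts into charts/.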

create the directory structure:

$ helm create do-it
Creating do-it

(helm create scaffolds some default templates; the tree below shows the chart after they were swapped for the statefulset, service, and secret this example actually uses)

$ tree
.
└── do-it
    ├── Chart.yaml
    ├── charts
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── secret.yaml
    │   ├── service.yaml
    │   └── statefulset.yaml
    └── values.yaml

3 directories, 7 files
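
the templates read their configuration from values.yaml through the built-in {{ .Values }} object (capital V). a rough sketch of how the two connect, using made-up messagebus keys:

    # values.yaml
    messagebus:
      replicas: 2            # made-up keys, referenced by the template below
      vhost: some-vhost

    # templates/statefulset.yaml (fragment)
    spec:
      replicas: {{ .Values.messagebus.replicas }}
      template:
        spec:
          containers:
          - name: rabbitmq
            image: rabbitmq:3.6
            env:
            - name: RABBITMQ_DEFAULT_VHOST
              value: {{ .Values.messagebus.vhost | quote }}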

check your syntax:

$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures

a failure might look like this:

==> Linting .
[INFO] Chart.yaml: icon is recommended
[INFO] values.yaml: file does not exist
[ERROR] templates/: render error in "do-it/templates/statefulset.yaml": template: do-it/templates/statefulset.yaml:42:52: executing "do-it/templates/statefulset.yaml" at <.values.messagebus.v...>: can't evaluate field vhost in type interface {}

Error: 1 chart(s) linted, 1 chart(s) failed
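
that particular error means the template walked a values path that isn’t defined: either add the missing key to values.yaml or fix the reference in the template (and watch the capitalisation: the built-in object is .Values, not .values). for example:

    # values.yaml
    messagebus:
      vhost: some-vhost    # hypothetical value for the missing key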

try a dry run to catch any errors the linter doesn’t pick up:

$ helm install . --name do-it --values values.yaml --dry-run
NAME:   do-it
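
adding --debug to the dry run also prints the fully rendered manifests, which makes it easy to eyeball exactly what would be sent to the cluster:

$ helm install . --name do-it --values values.yaml --dry-run --debug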

now let’s do it for realsies (upgrade --install will install the release if it doesn’t exist, or upgrade it if it does):

$ helm upgrade --install do-it . --values values.yaml
NAME:   do-it
LAST DEPLOYED: Fri Jan 12 14:15:16 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME      TYPE    DATA  AGE
rabbitmq  Opaque  1     10s

==> v1/Service
NAME                 TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                                                        AGE
rabbitmq-management  NodePort      10.0.150.120  <none>       15672:31885/TCP                                                10s
rabbitmq             LoadBalancer  10.0.144.214  <pending>    5672:32181/TCP,4369:30712/TCP,25672:31386/TCP,15674:31736/TCP  9s

==> v1beta1/StatefulSet
NAME      DESIRED  CURRENT  AGE
rabbitmq  2        1        9s

it’s deployed to the default namespace, but what if we want to deploy to a specific namespace? first delete everything:

$ helm del --purge do-it
release "do-it" deleted

make sure the yaml templates include this wherever a namespace should be set:

namespace: {{ .Release.Namespace }}
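
for example, near the top of each manifest template:

    metadata:
      name: rabbitmq
      namespace: {{ .Release.Namespace }}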

then specify the namespace option in the install command:

$ helm upgrade --install do-it . --values values.yaml --namespace some-namespace
NAME:   do-it
LAST DEPLOYED: Fri Jan 12 14:21:41 2018
NAMESPACE: some-namespace
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME      TYPE    DATA  AGE
rabbitmq  Opaque  1     9s

==> v1/Service
NAME                 TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)                                                        AGE
rabbitmq-management  NodePort      10.0.158.138  <none>       15672:32416/TCP                                                9s
rabbitmq             LoadBalancer  10.0.35.110   <pending>    5672:31763/TCP,4369:31439/TCP,25672:31945/TCP,15674:32242/TCP  9s

==> v1beta1/StatefulSet
NAME      DESIRED  CURRENT  AGE
rabbitmq  2        1        9s
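
each upgrade is recorded as a numbered revision, so a bad deploy can be inspected and rolled back:

$ helm history do-it
$ helm rollback do-it 1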

so good.