7 Reasons Why You Shouldn't Use Helm in Production

If you are running Kubernetes in a production environment, running Helm can be quite dangerous. Read this post to find out why.

Helm is billed as “the package manager for Kubernetes”. The goal was to provide a high-level package management-like experience for Kubernetes. This was a goal for all the major containerisation platforms. For example, Apache Mesos has Mesos Frameworks. And given the standardisation on package management at an OS level (yum, apt-get, brew, choco, etc.) and an application level (npm, pip, gem, etc.), this makes total sense, right?

Maybe not.

Value Proposition

Firstly, let’s consider what the value proposition of Helm actually is. Whilst thinking about this, I am talking from the perspective of an engineer who largely works on operational deployments for private clients (i.e. not an open source software vendor). Helm allows us to:

  1. install applications
  2. manage the lifecycle of those applications
  3. customise applications through templating

We can already achieve 1 with plain k8s manifests. 2 is tricky, because many k8s components don’t fit the standard “application” lifecycle: RBAC, PVCs, Namespaces, ResourceQuotas, etc. And for 3, we can template in various ways, most of which are much simpler than Helm’s.

Helm also inadvertently encourages dynamic, manual templating. Remember, the goal is to have everything as code (I won’t go into detail here). Manual templating prevents you from having a static, version-controlled statement of what your system should be, which affects your ability to test, recover and ensure that test and production environments are equivalent.

Issues with Helm

So I think the value proposition versus standard k8s manifests is already on shaky ground. To summarise, here are seven reasons why Helm might be a bad choice:

  1. Tiller defaults to storing application secrets inside configmaps (i.e. in plaintext). It is possible to override Tiller to use k8s secrets instead, but that option is still in beta.

  2. RBAC policies are per Tiller pod, not per user. Any user who has access to Tiller has access to everything Tiller has access to. This means you need a separate Helm installation per role/team/etc., which adds significant complexity. See here and here.

  3. Tiller pods are accessible to every other pod in the cluster by default.

  4. Helm only adds value when you install community-maintained charts. For your own applications, you need to write the yaml anyway.

  5. It leads to too much logic in the templates, which is hard for inexperienced k8s users to follow.

  6. GitOps/Infrastructure as Code best practices are violated, because what you version control is the source before it has been templated, so you can’t get truly repeatable builds. E.g. consider templating test and prod environments separately: the separate templating is likely to yield differences between the two environments (bad!).

  7. If you really do need to template (because you are supporting multiple clusters, for example), consider kustomize or another templating tool. You get the benefit of templating, but keep the benefit of GitOps.
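As a sketch of what the kustomize approach looks like, a per-environment overlay references a shared base of plain manifests and patches it — every file here is version controlled, and the paths and file names are illustrative:

```yaml
# overlays/prod/kustomization.yaml (illustrative layout)
resources:
  - ../../base           # the shared, version-controlled manifests
patchesStrategicMerge:
  - replica-count.yaml   # prod-only tweak, also version controlled
```

Rendering it (`kustomize build overlays/prod`) produces static yaml that can be reviewed and applied like any other manifest.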

Note that none of this prevents you from leveraging Helm. You can still use Helm as a static generator, producing plain yaml manifests.
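For example (the chart path and values file here are hypothetical), Helm 2’s `helm template` renders a chart to static yaml locally, without ever talking to Tiller:

```shell
# Render the chart locally to plain manifests; no Tiller involved.
helm template ./mychart --values values-prod.yaml > rendered/prod.yaml
# Commit rendered/prod.yaml, then apply it like any other manifest:
# kubectl apply -f rendered/prod.yaml
```

This keeps the rendered state in version control, so you regain the GitOps properties discussed above.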

Also, this neglects what Helm is good at. If you are a developer of a public project and want your users to easily install your k8s application, then a simple helm package command is very attractive. However, there are other ways to provide a one-liner to your users. You could run a script from a hosted url (like Docker) or provide a one-liner for a single concatenated manifest (like weave scope).
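Both of those alternatives are a single command for the user. A sketch, with placeholder URLs (these are not real endpoints — do not run them as-is):

```shell
# Docker-style: run a hosted install script (placeholder URL).
curl -fsSL https://example.com/install.sh | sh
# weave-scope-style: apply one concatenated manifest (placeholder URL).
kubectl apply -f https://example.com/my-app.yaml
```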

Further Reading

Other resources you might be interested in: