KubeCon + CloudNativeCon 2019 Barcelona

The end of May 2019 saw 7700 people, myself included, visit Barcelona to attend ‘KubeCon + CloudNativeCon Europe 2019’. That’s a lot of people and a significant increase over previous editions (Copenhagen 2018: 4300, Berlin 2017: 1500, according to this Dutch source). Next year’s edition will be ‘around the corner’ in Amsterdam and is projected to attract at least 10000 visitors. This says a lot about the increased adoption of Kubernetes and all associated technologies over the past years.

So, what’s the current state of the Cloud-Native ecosystem?

Impressions

Well, for starters, talking about ‘current state’ can’t reasonably be done without specifying a pretty narrow timeframe, as a lot is moving at a very fast pace. Two quotes mentioned in the keynotes summarize the ecosystem quite well, in my opinion:

“A platform to build platforms”

In Lego terminology you could see Kubernetes as a baseplate. You pick your color (GKE, AKS, EKS, Kops, On-premise, …) and then need to add things on top, as just a baseplate is of very little use. What to build on top is up to you. What Lego do you already have? What do you want it to do? New Lego boxes will be launched regularly, but you won’t be able to buy them all. Do you need a new one? Does it nicely supplement the Lego you already have? Is it actually an upgrade of what you already have?

There's your cluster. Now the building begins.

“Culture eats strategy for breakfast”

A thriving community that grows, engages, participates and contributes will propel innovation at a more rapid pace than any (single-vendor) strategy would be able to accomplish. I suppose that’s what it boils down to. Meaning it’s not a bad thing per se that Istio and Linkerd both offer service mesh capabilities. Or that there’s an abundance of Ingress options. Or that Helm v2 installs a cluster component (Tiller) based on a simple security model that was superseded by the later-introduced RBAC security model.

Some topics

Service Mesh

Service meshes are a hot topic, and KubeCon witnessed the introduction of the Service Mesh Interface (SMI).
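
To give an idea of what SMI looks like: below is a minimal sketch of a TrafficSplit resource, splitting traffic between two versions of a service independently of which mesh implements it. The service names are made up, and since the spec was brand new at the time, field names may well have changed since.

    apiVersion: split.smi-spec.io/v1alpha1
    kind: TrafficSplit
    metadata:
      name: website-rollout
    spec:
      service: website            # the 'root' service clients talk to
      backends:
        - service: website-v1
          weight: 900m            # roughly 90% of traffic
        - service: website-v2
          weight: 100m            # roughly 10%, e.g. a canary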

On the other hand, with Istio having been around for quite a while (v1 was announced in July 2018), I was surprised by the small number of hands that went up in response to the question “who is using a service mesh in production?” at the ‘service mesh breakfast’. It seems that for a lot of people the benefit vs. complexity trade-off is not there yet. Or service meshes aren’t at the top of the wish list. Or people are waiting for more reports from early adopters before getting their feet wet themselves. Or a combination of all of the above. Might just be my perspective though…

Loki

Already introduced at KubeCon Seattle, Loki looked very interesting. The ability to handle high volumes of logs in a lightweight manner, integrating well with Grafana and reusing the auto-discovery and labeling approach of Prometheus, simply sounds very good. Still beta, though.
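
As a rough sketch of what that Prometheus-style approach looks like in practice: Promtail, Loki’s agent, reuses the Prometheus service discovery and relabeling configuration to attach labels to log streams. Something along these lines (heavily abbreviated and purely illustrative, other config sections omitted):

    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod                 # discover pods via the Kubernetes API
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace]
            target_label: namespace
          - source_labels: [__meta_kubernetes_pod_label_app]
            target_label: app         # query logs later with e.g. {app="myapp"}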

Helm v3

Helm 3 alpha is out now, and the biggest change compared to v2 is the removal of Tiller from the server side. Keeping Tiller out of a cluster was already possible, but in v3 this works out of the box, without needing to handle Tiller in some way anymore. Another notable change is that release info is stored in the namespace of the release itself, allowing same-named releases to exist in different namespaces. For more info it’s worth checking out the ‘Charting our future’ series on the Helm blog.
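
As an illustrative sketch of that change (names are made up, and exact naming may differ between alpha releases): where v2’s Tiller by default kept release state in ConfigMaps in its own namespace, v3 stores it as a Secret in the release’s namespace, which is why ‘myapp’ can exist in both ‘team-a’ and ‘team-b’.

    # Sketch of where Helm v3 keeps release state: a Secret next to the release.
    apiVersion: v1
    kind: Secret
    metadata:
      name: sh.helm.release.v1.myapp.v1   # release 'myapp', revision 1
      namespace: team-a                   # lives in the release's own namespace
    type: helm.sh/release.v1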

Virtual-kubelet

I was aware of Virtual Kubelet, but missed the v1 announcement (so many tracks). Being able to run Kubernetes workloads without having to bother about any infrastructure, allowing infinite scale while paying for actual use, would be the holy grail of cloud computing.
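
From the workload’s point of view this is mostly a scheduling concern: the virtual node registers like any other node, and pods are steered onto it with a node selector and a toleration for its taint. A sketch could look roughly like this, though labels and taint keys differ per provider, so treat the selectors below as placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: burst-job
    spec:
      containers:
        - name: app
          image: nginx:1.17
      nodeSelector:
        type: virtual-kubelet             # schedule onto the virtual node
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists                # tolerate the virtual node's taint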

However, it looks like the “v1 - We’re ready” mainly applies to Azure (and perhaps other providers), as the warning in the AWS Fargate docs is pretty clear. This GitHub issue illustrates very well the type of implementation details hidden under the virtual-kubelet abstraction.

Cluster API

Cluster API is relatively new as well and was addressed in a keynote and a deep-dive talk. This API aims to simplify the creation, configuration, upgrade and teardown of clusters, avoiding the need for tools like Terraform, Ansible and the like.
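
The promise is that machines become declarative objects just like pods. A very rough sketch of a worker pool, with the caveat that the API groups and fields were still alpha at the time and vary per provider, so this is indicative only and heavily abbreviated:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: MachineDeployment
    metadata:
      name: workers
    spec:
      replicas: 3                     # scale nodes like you scale a Deployment
      template:
        spec:
          versions:
            kubelet: 1.14.2           # desired node version; changing it rolls
                                      # out replacement machines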

This could provide teams with the ability to adapt a cluster to their needs without needing to hand out the keys to a cloud account, for example when combined with setting up a service catalog.

Operators and Storage

The increased desire to run stateful applications in Kubernetes brings challenges in the areas of storage and of operating these stateful applications. The latter is what drives the innovation around operators, as illustrated by this list, which includes many operators for products like MySQL and PostgreSQL. For the former, KubeCon hosted various talks and even a Cloud Native Storage Day. Storage-related topics included CSI (Container Storage Interface), Ceph (storage provider) and Rook (storage orchestration).

The Kubernetes Secrets Store CSI Driver (KubeCon talk) is an interesting implementation of CSI: directly mounting secrets from HashiCorp Vault or Azure Key Vault as volumes.
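
A rough sketch of the idea: secrets stay in the external store and are mounted straight into the pod as a CSI volume, rather than first being copied into Kubernetes Secret objects. The exact attributes depend on the provider, the driver version and the cluster’s support for inline CSI volumes, so the example below is illustrative only.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-external-secrets
    spec:
      containers:
        - name: app
          image: nginx:1.17
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets    # secrets show up as files here
              readOnly: true
      volumes:
        - name: secrets
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              providerName: vault        # or e.g. 'azure'; provider-specific
                                         # connection settings omitted here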

Thoughts

The above list just touches on some of the topics on display at KubeCon. The many parallel tracks mean there’s a lot of content, but sadly also that you’ll inevitably miss talks you would have liked to attend because they are scheduled against each other.

At times the CNCF landscape feels like the JavaScript ecosystem, where for every problem multiple packages exist that tackle it in a particular way. There, too, ‘culture eats strategy for breakfast’ seems to apply. Likewise there are some strong technologies at its core (JS: Node.js, React, Webpack, etc.; CNCF: Kubernetes, Prometheus, etc.) and technology moves fast. (Obvious troll: even Go has its own left-pad.)

Staying up to date, and keeping your clusters up to date, requires upkeep. So much is certain. Just like the containers they run and the VMs they run on, clusters themselves benefit from being more cattle than pet, allowing blue-green-like upgrade processes instead of complex in-place upgrades. A challenge is keeping track of all the alpha- and beta-status projects, while at the same time keeping things as small as possible and not becoming distracted. KubeCon has been called ‘the conference for the Sagrada Familia of software’ for a reason.

What’s clear is that the CloudNativeCon part is in the conference title for a reason. “Kubernetes should become boring” was mentioned in a PodCTL podcast I recently listened to. The speaker’s expectation was that the core of Kubernetes will become boring (a ‘solved problem’) in the near future and that the focus of conferences will shift more and more to what’s built on top of and around Kubernetes.

I think becoming boring would imply (require, actually) that the basic mechanics and concepts of Kubernetes are known to a large part of developers in ‘DevOps organizations’ (whatever definition you like for that term). Similar to how they know Linux: a kubectl describe deployment myapp would be muscle memory, just like systemctl status myapp.

As long as scaling a cluster is not trivial, integrations with cloud vendor functionality require research and trade-offs (might be just AWS though) and it’s still very well possible to shoot yourself in the foot (Slides, highly recommended), Kubernetes is far from boring. At its core it might be a solved problem, but around that core there are quite some challenges that need to be solved in a fast-moving ecosystem.

I suppose as long as that core of solved problems keeps growing and Kubernetes knowledge becomes more and more ‘a given’, we’re heading in a good direction.

Please hold my coffee while I increase our cluster’s DNS resources…