The Top 10 Features in Kubernetes 1.12

By Kendrick Coleman, Open Source Technical Product Manager, VMware

Right out of the gate, Kubernetes 1.12 started hot, with the feature list growing every day. At its peak, a total of 64 features were being tracked, either as new features entering alpha or as features progressing through the stages to beta or stable. However, that ambition proved daunting for many features that still required code merges and documentation. Over the course of a few weeks, the tracking count decreased to 38, which still makes it one of the largest release targets to date.


For the greater Kubernetes community, the highlights of the past few releases focused on user interaction and new capabilities for the end user. Kubernetes 1.12, however, brings backend improvements such as better scheduler performance from an updated algorithm and the option for pods to pass information directly to CSI drivers. These backend enhancements continue to strengthen the core and give Kubernetes a solid foundation.


An interesting enhancement for developers is a better way to test out changes with the new Dry Run alpha feature. A dry-run request is submitted to the API server, where it is validated and “processed” but never persisted.
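As an illustrative sketch (the cluster address, token, and manifest file here are placeholders, not from the source), a client can exercise a server-side dry run by appending the `dryRun=All` query parameter to a write request:

```shell
# Placeholder host, token, and pod.json; dryRun=All asks the API server to
# run validation and admission for the request but skip persisting the
# object, so nothing is actually created.
curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data @pod.json \
  "https://cluster.example.com/api/v1/namespaces/default/pods?dryRun=All"
```

The response looks like a normal create response, which lets tooling verify that a manifest would be accepted without touching cluster state.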


Looking through the lens of a user, rather than a developer, there were notable enhancements focused on usability, storage, networking, security, and VMware functionality.


VMware Functionality

The VMware Cloud Provider has implemented an early phase of zone support, referred to as Phase 1. Analogous to AWS availability zones or regions, this inclusion allows you to run a single Kubernetes cluster across multiple failure zones, which in VMware vSphere terminology means one or more vSphere clusters.


Phase 1 introduces new zone and region fields in the vSphere configuration file. Labels are intended to identify attributes of objects that are relevant to users but do not directly affect the core system. These tags are queried by the kubelet VMs during startup, and the kubelet then auto-labels itself and propagates the labels to the API server. This feature maps vSphere more closely to Kubernetes topology concepts for a better user experience. If users don’t provide a [Labels] section, the behavior is the same as in previous versions.
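A minimal sketch of what the new section in the vSphere configuration file might look like, assuming the tag category names `k8s-region` and `k8s-zone` (both illustrative, not from the source):

```ini
# Hypothetical vsphere.conf fragment. The values name vSphere tag
# categories; tags in those categories attached to the VM's hierarchy are
# read at kubelet startup and applied as zone/region labels on the node.
[Labels]
region = "k8s-region"
zone = "k8s-zone"
```

With these labels in place, the scheduler can spread workloads across failure zones the same way it does with cloud-provider availability zones.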


The future of Kubernetes is extensibility. When third-party developers and vendors can add their own primitives to Kubernetes, they can tailor a custom experience for their end users and enable powerful integrations. Today, users interface with Kubernetes through the kubectl command-line utility, but it’s limited to its core commands and base functionality. In its 1.12 alpha debut, the plugin mechanism for kubectl allows third-party executables to be dropped into the user’s PATH without additional configuration. The plugin is invoked through kubectl and can parse any arguments or flags. For instance, the plugin /usr/bin/kubectl-vmware-storage could be invoked as kubectl vmware storage --flag1.
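A quick sketch of the mechanism: a plugin is just an executable named `kubectl-<subcommand>` somewhere on the PATH. The plugin name and output below are hypothetical, and the script is invoked directly here so it runs without kubectl installed:

```shell
# Create a toy plugin in a temp directory (the name and message are
# illustrative). kubectl discovers executables named kubectl-* on the PATH
# and forwards the remaining arguments and flags to them.
mkdir -p /tmp/kubectl-plugins
cat > /tmp/kubectl-plugins/kubectl-vmware-storage <<'EOF'
#!/bin/sh
echo "vmware storage plugin: $*"
EOF
chmod +x /tmp/kubectl-plugins/kubectl-vmware-storage
export PATH="/tmp/kubectl-plugins:$PATH"

# With kubectl installed, this would also run as: kubectl vmware storage --flag1
kubectl-vmware-storage --flag1
```

Because discovery is purely name-based, vendors can ship plugins as single binaries with no registration step.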

Today, the Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization. This approach is great for stateless applications that can scale out, but what about the types of applications that need to scale up? The Vertical Pod Autoscaler (VPA) is graduating to beta in Kubernetes 1.12, which automates the sizing of resource requests for containers. Once a VPA policy is applied, pods can be scheduled onto nodes where the appropriate resources are available, and running pods can be adjusted when CPU starvation or out-of-memory (OOM) events occur. The VPA is a long-awaited feature, especially for those responsible for stateful applications.
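As a sketch of what a VPA policy looks like: the exact API group, version, and field names have shifted across VPA's beta releases, so the object below is illustrative (the `my-app` names are placeholders), not a definitive manifest:

```yaml
# Hypothetical VPA object. targetRef points at the workload to manage;
# updateMode "Auto" lets the updater evict pods whose CPU/memory requests
# drift too far from the recommendation so they restart right-sized.
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
```

Setting `updateMode` to "Off" instead would surface recommendations without evicting anything, which is a common first step before trusting automatic resizing.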


Building a Kubernetes cluster is one of the biggest hurdles when it comes to learning Kubernetes. There are guides for doing it manually, such as Kubernetes the Hard Way, and lengthy documents covering the individual steps. There have also been many attempts to simplify the process with tools like Kubespray, kops, kube-up, and more.



A storage snapshot is a widely used and adopted feature among storage vendors. It allows persistent disks holding critical data to be backed up so the data can be restored, used for offline development, or replicated and migrated. The initial prototype for snapshots was implemented in Kubernetes 1.8 with in-tree drivers, but with a view toward the Container Storage Interface (CSI), the implementation shifted to keep the core APIs small and hand off operations to the volume controller. Therefore, this alpha feature is only supported by CSI volume drivers.
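A minimal sketch of requesting a snapshot under the alpha API, assuming a CSI driver is installed and a snapshot class named `csi-snapshot-class` exists (the class and PVC names are placeholders):

```yaml
# Hypothetical VolumeSnapshot using the 1.12-era alpha API group. The
# source points at an existing PersistentVolumeClaim; the external
# snapshotter sidecar asks the CSI driver to cut the snapshot.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-data-snapshot
spec:
  snapshotClassName: csi-snapshot-class
  source:
    kind: PersistentVolumeClaim
    name: my-data-pvc
```

The resulting snapshot object can later be referenced as a data source when provisioning a new PVC, which is how restore and clone workflows are built on top of it.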
