The cloud report asked five people from five different companies five questions about container technology:
Why are you using container technology?
Markus Eisele: Cloud-native application development helps build and run applications that take full advantage of the cloud computing model, based upon four key tenets: a service-based architecture, API-based communication, DevOps processes, and last but not least a container-based infrastructure. The container is the key factor. Cloud-native applications rely on containers for a common operational model across technology environments and true application portability across different environments and infrastructure, including public, private, and hybrid. Container technology uses operating system virtualization capabilities to divide available compute resources among multiple applications while ensuring applications are secure and isolated from each other. The low overhead and high density of containers, which allow many of them to be hosted inside the same virtual machine or physical server, make them ideal for delivering cloud-native applications.
Julian Hansert: Container technology is to IT what the steam engine was to the industrial revolution: hidden in the background but enabling a host of powerful developments, from CI/CD, DevOps and accelerated release cycles to automated operations and hybrid and multi-cloud computing. In short: container technology brings about the future of IT.
Hendrik Land: Container technology makes us independent of the underlying infrastructure, including the operating system. That is the foundation for a true hybrid cloud world, where I can consistently deploy my app in virtually any environment. The less time we have to spend on the details of the specific environment we currently deploy to, the more time we can spend on making the app better. But beyond that obvious benefit, the immutable nature of containers forces us to adopt radically new lifecycle concepts for apps and supporting infrastructure. We no longer maintain what are often called snowflakes – deployments that are supposed to be identical but differ in small details, typically due to manual setups or configuration drift over the long lifetime of a deployment. The container approach of replacing instead of patching ensures that everything is always in a well-defined state. From Infrastructure as Code to GitOps, container technology is really driving and enabling these new approaches.
Simon Pearce: Container technology is playing an increasingly important role in the DevOps sector. Because they run in the context of the host operating system, containers are particularly portable and resource-efficient – they do not need to boot their own operating system, as would be the case with virtual machines. Container technology makes it possible to focus more on development by removing much of the operational overhead. It is a generic way to deploy applications anywhere, including hybrid deployments. For good reason, Kubernetes is becoming the standard for container orchestration: it enables you to build containerized applications via CI/CD pipelines and ensures stable, high-performance application operations through a range of powerful features that enable automated repair and scaling as well as deployment rollouts and rollbacks.
Brent Schroeder: Enterprises are under pressure to respond faster to customer expectations and economic challenges. Containers, and a Kubernetes platform that orchestrates their deployment and operation, enable enterprises to deliver applications faster across a hybrid ecosystem, improving customer experiences.
What is the best invention of the last 12 months?
Markus Eisele: A major step towards full-fledged, more secure containers was taken with Buildah and Podman. These two projects allow containers to be built and run without a heavyweight and potentially insecure daemon process. This goes along with the standardization of the container definition through the Open Container Initiative (OCI), which provides standards and helps prevent lock-in to specific vendors. Buildah makes it possible to create containers without using Docker, which means that users can build Docker- and OCI-compliant container images with Buildah without the need to run a container runtime daemon. In addition to building and operating containers, Buildah offers one more key advantage: it is a command line tool. This means that developers can integrate it into existing pipelines for application creation with much greater ease.
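As a rough illustration of the daemonless, pipeline-friendly workflow described here, the following is a minimal sketch of a build-and-push step driven from Python via subprocess. The image name, registry and Dockerfile location are placeholders for illustration only, not details from the report.

```python
# Hypothetical CI pipeline step: build and push an OCI image with Buildah,
# with no container runtime daemon required. Image/registry are placeholders.
import subprocess

IMAGE = "example.registry.local/myapp:1.0"  # placeholder registry/image

def run(cmd):
    """Run a command and fail the pipeline step if it returns non-zero."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build an OCI-compliant image from the Dockerfile in the current directory.
run(["buildah", "bud", "-t", IMAGE, "."])

# Push the image to a registry (credentials/TLS setup omitted for brevity).
run(["buildah", "push", IMAGE, f"docker://{IMAGE}"])
```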
Julian Hansert: It’s not exactly an invention of the last 12 months but definitely one of the hottest topics in the cloud-native ecosystem this year: Kubernetes Operators. Kubernetes Operators extend the operational automation to legacy software, allowing you to manage applications just like a managed cloud service. I am pretty sure that Operators will push enterprise cloud-native adoption a big step forward.
Hendrik Land: Since I’m mostly concerned with stateful apps and their persistence and data management needs in Kubernetes, my vote goes to the release of the Container Storage Interface (CSI). It is an open standard from the Cloud Native Computing Foundation (CNCF) that is adopted by Kubernetes as well as other platforms such as Cloud Foundry. While you can usually recover from the loss of any piece of infrastructure that supports your app, losing data can threaten the existence of your project or business. So the persistence layer is crucial, as is the availability of an open standard for it, so we don’t lose the infrastructure independence that Kubernetes has just given us. With CSI, developers and Kubernetes users can dynamically provision storage resources in the same declarative way as they already do for compute, memory or networking. The CSI standard is rapidly evolving. Beyond provisioning storage resources, it now also allows for data management operations such as point-in-time snapshots or clones of a dataset. And that paves the way for robust backup and recovery solutions.
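To illustrate the declarative provisioning described above, here is a minimal sketch using the official Kubernetes Python client. The StorageClass name, namespace and requested size are assumptions for illustration; any CSI-backed StorageClass available in the cluster could be used instead.

```python
# Minimal sketch: dynamically provision storage by declaring a
# PersistentVolumeClaim against a CSI-backed StorageClass.
# The StorageClass name "csi-example" and the namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-example",   # StorageClass served by a CSI driver
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}    # desired capacity
        ),
    ),
)

# The CSI driver behind the StorageClass creates the volume on demand.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```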
Simon Pearce: There have been many good inventions around container technology, so it is difficult to pick only one. My top three are:
HashiCorp Vault makes it possible to store, secure, and monitor access to tokens, passwords, certificates, and encryption keys in order to protect secrets and other sensitive data. That means maximum security, with a roles and permissions model built in (a small usage sketch follows this list).
Service Mesh is a dedicated infrastructure layer for facilitating service-to-service communication between microservices. The benefits include observability into communication, secure connections, and automated retries and backoffs for failed requests.
KubeOne was developed by our partner Kubermatic and is an open source Kubernetes cluster lifecycle management tool. It takes care of installing, configuring, upgrading and maintaining highly available (HA) Kubernetes clusters, follows Kubernetes best practices, and comes with a simple and declarative API based on the Kubernetes Cluster API.
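Following up on the Vault item above, here is a minimal sketch of the store-and-read workflow using the hvac Python client (the client library, address, token and secret path are assumptions for illustration; in a real setup the token would come from an auth method rather than being hard-coded):

```python
# Minimal sketch: store and read a secret in HashiCorp Vault's KV v2 engine
# using the hvac client. Address, token and path are placeholders.
import hvac

vault = hvac.Client(url="https://vault.example.local:8200", token="s.example-token")

# Write a secret (e.g. database credentials) to the KV v2 secrets engine.
vault.secrets.kv.v2.create_or_update_secret(
    path="myapp/db",
    secret={"username": "app", "password": "change-me"},
)

# Read it back; access is governed by the policies attached to the token.
result = vault.secrets.kv.v2.read_secret_version(path="myapp/db")
print(result["data"]["data"]["username"])
```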
Brent Schroeder: Kubernetes and Cloud Foundry integrations enabling PaaS developer productivity with the power and portability of Kubernetes orchestration. These capabilities are reflected in the Cloud Foundry projects Quarks, Eirini, KubeCF, and Stratos, which together create a Kubernetes-native application platform for developers.
What is your security approach?
Markus Eisele: Container security is the protection of the integrity of containers. This includes everything from the applications they hold to the infrastructure they rely on. Container security needs to be integrated and continuous. In general, continuous container security for the enterprise is about securing the container pipeline and the application, securing the container deployment environments and infrastructure, and integrating with enterprise security tools while meeting or enhancing existing security policies. Regarding the host system, it must be ensured that no unauthorized access is possible between the resources used. It must also be guaranteed that container images are only provided from trustworthy sources. On the application or container side, security measures must be taken, for example for the base images, the build process or the deployment.
Julian Hansert: We do have some big enterprises in our customer base. For them it is essential to have governance, security and control in one central place. Without a centralized approach things have the tendency to grow wild and create loopholes that put security at risk.
Hendrik Land: Security needs to be a constant process and it has to be integrated from the ground up rather than being an afterthought. Security is often unpopular as it adds complexity and cost – not necessarily in hard dollars but in terms of time, usability and other important aspects. But as painful as it might be, we can no longer afford to have insecure defaults or add security later. We can also no longer have apps that rely on perimeter security, i.e. that expect some external system to protect them. Hence the popularity of DevSecOps, where security practices are directly embedded in the DevOps process. And DevOps also helps us in other ways, from consistency in setups to automation that allows us to introduce “no admin on the machine” policies.
Simon Pearce: We have identified and developed a range of measures and services that we use to protect our infrastructure against external attacks. Our company and our data centers are located in Germany, which is particularly relevant with regard to the General Data Protection Regulation (GDPR). We implement secure, exclusive access to the instances of your setup through a VPN, to which applications such as external exchange tools and ERP systems can be connected. Secure operations require services and applications to be accessible at all times; redundancy at various levels provides a high level of reliability. This is implemented via two geographically separate data centers, which are connected through our redundant fiber optic ring. We also distribute backups across the data centers to eliminate the risk of data loss. Furthermore, we have a catalog of measures to prevent and avert DDoS attacks, and we proactively inform our users about security issues we discover. We also have an automated way of updating and patching software components.
Brent Schroeder: Security is built in from the OS to the container engine to management, with network isolation plus control of privileges, secrets, and storage security. Workloads are managed in a secure supply chain, from images to registries to the build process. Authentication and rights for users and operators leverage Role-Based Access Control (RBAC).
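As a rough illustration of Role-Based Access Control in Kubernetes, here is a minimal sketch with the Kubernetes Python client that creates a read-only Role for pods. The role name, namespace and verbs are illustrative assumptions, not part of any specific product configuration.

```python
# Minimal RBAC sketch: a Role that grants read-only access to pods.
# Names and namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],                 # "" is the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],  # read-only verbs
        )
    ],
)

# Create the Role; a RoleBinding (not shown) would attach it to a user,
# group or service account to actually grant the access.
client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace="default", body=role
)
```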
What is your favorite product when it comes to container technology?
Markus Eisele: Red Hat OpenShift offers a consistent hybrid cloud foundation for building and scaling containerized applications. Users benefit from streamlined platform installation and upgrades. Red Hat OpenShift comes with a nine-year enterprise support lifecycle from one of the leading Kubernetes contributors. But it does not stop there. It goes all the way to developer-friendly workflows, including built-in CI/CD pipelines and our source-to-image capability, which enables users to go straight from application code to container. And Red Hat OpenShift also extends to new technologies – including serverless applications with Knative, cloud services through the Red Hat OpenShift cloud service broker, and streamlined service communication with Istio and service mesh.
Julian Hansert: This might be a boring answer, but my favorite product is Kubernetes. Without Kubernetes, the cloud-native ecosystem would not be where it is today but lagging at least three years behind. Kubernetes has helped container technologies break through and really pushed cloud-native adoption. Not surprising that half of our team loves their Kubernetes socks and shirts.
Hendrik Land: In the broader sense, that definitely has to be Kubernetes. It is amazing how it has taken the world by storm and quickly become the de facto standard for everyone running containers at scale. As the saying goes, Kubernetes is the operating system of cloud-native. It is THE common platform available to us in any cloud or in our own data center: one open standard upon which everyone can deliver their own innovation. Speaking of innovation, in a more selfish sense my absolute favorite product is our Trident solution. It provides a common persistence layer to container environments in any cloud and on-premises. This complements the application mobility enabled by Kubernetes with a data fabric that makes data available to the app wherever and whenever it is needed. We really have to think about the app and its data together; one is of little use without the other. The initial focus of container technology and Kubernetes on stateless apps was a mistake. While an individual microservice might be stateless, an application stack almost always has a need for state and persistence. And Kubernetes is definitely ready for that. With Trident, we provided the very first dynamic storage provisioner in Kubernetes, and we have been leading that space ever since. Trident is our contribution to evolving Kubernetes beyond the stateless mantra of the early days.
Simon Pearce: One of my favorite products is Knative, an open source community project that adds components to Kubernetes for deploying, running, and managing serverless, cloud-native applications, making it possible to take advantage of both container technology and serverless architectures. While container technology is already well known and highly portable, serverless provides efficiency and automation. It allows developers to focus on their code without worrying about building and deploying. At SysEleven we’re using Knative in our data collection project, and since we’re expecting a variable load of data, Knative’s features make it easy to scale our services up and down as needed.
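As a rough illustration of how such a load-driven Knative Service might be declared, here is a minimal sketch using the Kubernetes Python client’s CustomObjectsApi. The service name, container image and scaling bounds are assumptions for illustration, not details of the SysEleven project.

```python
# Minimal sketch: deploy a Knative Service that scales with load (down to zero
# when idle) via the Kubernetes API. Image, names and bounds are placeholders.
from kubernetes import client, config

config.load_kube_config()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "data-collector"},
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    # Let Knative scale between 0 and 10 replicas with demand.
                    "autoscaling.knative.dev/minScale": "0",
                    "autoscaling.knative.dev/maxScale": "10",
                }
            },
            "spec": {
                "containers": [{"image": "example.registry.local/collector:1.0"}]
            },
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```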
Brent Schroeder: Stratos: an extensible multi-cloud UI for Kubernetes and Cloud Foundry. It deploys, manages, and monitors applications and services across multiple clusters and clouds, empowering developers and operators alike to work across a diverse ecosystem.
CaaS vs. PaaS: which solution are you using or providing, and why?
Markus Eisele: Red Hat OpenShift can be both, depending on the business focus of a company. The best Dev experience comes from all the integrated parts that take care of the software development processes and make up a successful Platform as a Service (PaaS) offering, and the best Ops experience for containers is an integrated Container as a Service (CaaS) platform. The term CaaS describes not much more than the deployment of a container execution environment in a public cloud. Red Hat offers OpenShift on Azure together with Microsoft, and OpenShift Dedicated is available on AWS – both as fully Red Hat-managed solutions. The advantage for users is that they no longer need to worry about administration or operations.
Julian Hansert: Kubermatic Kubernetes Platform automates thousands of Kubernetes clusters across hybrid and multi-cloud, on-prem and edge environments with unparalleled density and resilience. With this platform we provide an infrastructure-agnostic PaaS solution. IT teams need one central self-service platform across different clouds and regions to really benefit from the promises that Kubernetes and cloud-native technologies hold.
Hendrik Land: Our integration is typically at the CaaS layer, which then enables PaaS, serverless and everything else that builds on that foundation. While app developers benefit from our solutions, they might not always recognize them if they use PaaS. Persistence and data management should be easy for developers, so in a sense we have achieved our goal if they don’t even see our solutions and it automagically just works. At the same time, we have to educate and support developers. While the ease of use of PaaS solutions is a big benefit, it also brings the risk of “easily” making a bad design decision. When it comes to data, this can be very hard to adjust later, especially if lots of data has already been generated; it cannot easily be moved around due to its sheer volume. And often data is your most valuable asset, so you don’t want to take any risks either. So, developing a good data strategy is important, no matter whether it is CaaS, PaaS, SaaS or any other model.
Simon Pearce: With MetaKube we provide a Kubernetes solution that combines the benefits of managed operations, e.g. lifecycle management, with “as a service” components such as backup & recovery. Our multi-cloud integration guarantees even more flexibility: clusters can be created on the SysEleven OpenStack Cloud, AWS and Azure. Furthermore, MetaKube takes care of automated security patching, upgrades, troubleshooting and proactive monitoring, and provides automated Day 2 operations with a variety of cloud-native tools, e.g. Prometheus, Vault, Linkerd and NGINX.
Brent Schroeder: SUSE doesn’t believe in one-size-fits-all. Our flexible solution comprises CaaS and PaaS layers. SUSE CaaS Platform is a Kubernetes-based container management foundation for infrastructure and application administrators, and SUSE Cloud Application Platform adds a high-productivity developer experience to Kubernetes.
Side note: The answers are in alphabetical order of the respondents.
The respondents (in alphabetical order)
Markus Eisele
Markus Eisele is a Java Champion, author, speaker at national and international conferences, co-founder of JavaLand and a well-known personality in the Enterprise Java community. He leads Developer Adoption for Red Hat in EMEA.
Julian Hansert
Julian Hansert is a co-founder and COO at Kubermatic (formerly Loodse), an enterprise software company focused on building solutions to automate Kubernetes and cloud-native operations across all infrastructures. With Kubermatic, he wants to empower IT teams everywhere to focus on their core expertise: writing groundbreaking applications, not operations.
Hendrik Land
As a Solution Architect for DevOps at NetApp, Hendrik covers a broad set of topics, from container technology to automation and configuration management – always with a healthy appetite for learning new technologies. Whenever time permits, he looks into solutions for application mobility, delivered by infrastructure agnostic platforms such as Kubernetes and automated by Infrastructure as Code and GitOps principles.
Simon Pearce
Simon Pearce is the Product Owner and MetaKube Team Lead at SysEleven; MetaKube is SysEleven’s answer to managed Kubernetes. He has over 15 years of experience in the web hosting industry, with a focus on building distributed systems on public and private clouds. He is responsible for the Kubernetes service team at SysEleven, working on improving the experience of running managed Kubernetes clusters on various cloud platforms.
Brent Schroeder
As SUSE CTO, Brent Schroeder is responsible for shaping SUSE’s technology and portfolio strategy. He drives the technology relationship with numerous industry partners, participates in open source communities, and evangelizes the SUSE vision with customers, press and analysts.