Dynamic Systems and Their Testing

Before software is handed over, it is tested. Before you run it, you want to be sure it's doing what it's supposed to. But the more agile and automated the process becomes, and the faster software is deployed, the less room there is in the process for testing. This article shows how testing can be planned and integrated into the agile process from the beginning, so that high-quality results can be delivered.
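One way testing stays integrated from the beginning is to write automated checks next to the code so they run on every commit. A minimal sketch, assuming a hypothetical application function `parse_order` (not from the article):

```python
# A unit test written alongside the code it verifies, so a CI pipeline
# can run it automatically on every commit -- testing is part of the
# process, not a phase bolted on at the end.

def parse_order(raw: str) -> dict:
    """Parse a 'sku:quantity' order line into a dict."""
    sku, qty = raw.split(":")
    return {"sku": sku, "quantity": int(qty)}

def test_parse_order():
    assert parse_order("A42:3") == {"sku": "A42", "quantity": 3}

test_parse_order()  # a test runner such as pytest would discover this in CI
```

Because the test is cheap to run, it can gate every deployment instead of competing with it for time.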

Software systems are usually constructed so that their function is spread over a set of coupled components. That set of software components forms a logically dependent, but technically isolated, composite, which in its entirety forms the application. Such a modular construction of software systems has been an accepted industry standard for some years now and has grown to new complexity, some might say extremes, with the advent of microservice architectures.

Adding to the complexity of (multi-tiered) software, application software relies on a number of backend systems behind the application's "actual backend", i.e. auxiliary software (message brokers, backup, logging, monitoring) and infrastructure components (the stuff the application actually runs on, viz. computers, networks, storage systems); and it is, by and large, undisputed that application software cannot run without them(1). For these, distribution of components is de facto mandatory.


The Arrival of Headless Content Management

The content management software space has been strongly affected by the tectonic technological shifts of the last decade: the arrival of smart devices, the rise of JavaScript, the ever-growing demand for developer talent, the arrival of 4G and, now, 5G. These factors have set in motion a large migration away from monolithic software toward headless content management systems built on the principles of microservice architecture. CMSes are the communication highways of companies, and staying with legacy technology means opting for massive traffic jams.


The past – web only

Massive .NET, Java, and PHP-based systems have dominated since the time of the dotcom bubble. Open-source heavyweights WordPress, Joomla, and Drupal have democratized the creation of simple websites, making it accessible even to non-tech users. But this was over 15 years ago – before the arrival of the smartphone and before "the cloud" became the standard for application hosting. Traditional content management systems are web-only monoliths. Rigid and inflexible, they can hardly be adapted for modern digital content channels such as mobile, IoT, or AR. Even small, unambitious technical changes lead to long implementation times.

“We have to duplicate our content in the CMS for the web, and the CMS for the mobile app.” – a frequent complaint by marketing professionals. “We are held back by frontend template constraints and only fix bugs and release new customer-facing features in 4-6 month cycles.” – a developer team’s daily struggles.
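The headless idea behind these complaints can be sketched in a few lines: content lives once, as structured data behind an API, and each channel renders it itself. A hypothetical illustration (the payload and render functions are invented for this sketch, not from any particular CMS):

```python
import json

# In a headless CMS, content is stored once as structured data and
# delivered over an API; the web site and the mobile app each apply
# their own presentation to the same payload.
article_json = '{"title": "Spring Sale", "body": "Save 20% this week."}'
content = json.loads(article_json)

def render_web(c: dict) -> str:
    """Web frontend: HTML template applied to the content."""
    return f"<h1>{c['title']}</h1><p>{c['body']}</p>"

def render_mobile_push(c: dict) -> str:
    """Mobile app: a push-notification string from the same content."""
    return f"{c['title']}: {c['body']}"

print(render_web(content))
print(render_mobile_push(content))
```

One source of content, two channels – no duplication in a second CMS, and no frontend template constraining the backend.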


The Death of the Developer

Cloud, especially when understood as a PaaS (Platform-as-a-Service) approach, changes a lot for developers, both technologically and in terms of mindset. In fact, it changes so much that we as developers need to reinvent ourselves – and actually need to bury our old incarnations. This new series will cover all of these aspects, starting with a high-level overview of the growing complexities and challenges developers are confronted with.

When moving into a cloud environment, we could try to execute as we did in the past. This is embraced by lift-and-shift approaches, which basically recreate environments existing in traditional datacenters in the cloud.

Although this is a viable first step, it should be considered the first step only, since the opportunities and challenges of cloud environments demand completely new approaches to software, operations, and integration. Once adjusted to these approaches, the total cost of ownership (TCO) of a piece of software drops dramatically while quality of service increases, and time-to-market shortens significantly.

To understand these correlations, we need to see the bigger picture first.


Digital Transformation in Medical Technology

Interview with Jörg Haist, Dentsply Sirona

With the merger of Dentsply and Sirona in 2016, the company has become the world's largest manufacturer of dental products and technologies in the field of dentistry and dental technology. Over the past three years, Dentsply Sirona has managed the multiple transformation of merging over 16,000 employees in over 40 countries into one single company while realizing an unprecedented digitization journey that centers on connecting practitioners, patients' needs, and equipment. Julia Hahn and Felix Evert discussed with Jörg Haist, Vice President of Platform Management Equipment & Instruments, how Dentsply Sirona leveraged the opportunities of digital transformation in medical equipment.

IDS 2019 – Dentsply Sirona



Building the Clouds

Google Cloud has a lot of impressive things to discover – one of the most interesting and least known is its integrated build system, Cloud Build. Let's set it up for hosting a Git repository, building a microservice, and storing the generated artifacts in a Docker registry inside Google Cloud.
First of all, there are some prerequisites to fulfill before using Google Cloud Build:

– A Google Cloud account with billing enabled needs to be present

– A project has to be created inside Google Cloud – our sample project is called cloud-report-project

– The Google Cloud SDK has to be installed on your local machine

Once all of these steps are taken, we need a microservice to build. In this article, we use a Spring Boot-based microservice written in Java (you can find the source code at [1]), but you can of course use a language of your liking – Google Cloud Build does not actually care about the programming language; it relies on builders supported on the platform.

How does Google Cloud Build work?

Before building anything, we should understand the basic idea behind Google Cloud Build, since it differs from the approaches AWS and Azure take. Essentially, Google Cloud Build does exactly what the name suggests: it imports sources, builds them, creates container images, stores them and – if requested to – deploys them inside Google's own cloud environment.
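Those steps are declared in a `cloudbuild.yaml` file at the root of the repository. A minimal sketch for the flow described above, using the article's cloud-report-project project (the service name `demo-service` is an assumption for illustration):

```yaml
# cloudbuild.yaml -- each step runs in a builder container.
steps:
  # Build the Java sources with the Maven builder.
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['package']
  # Package the result into a container image with the Docker builder.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/cloud-report-project/demo-service', '.']
# Images listed here are pushed to the project's container registry.
images:
  - 'gcr.io/cloud-report-project/demo-service'
```

Note how the language only appears in the choice of builder: swapping `mvn` for another supported builder is all it takes to build a different stack.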




The essential Ingredient for every Enterprise

Less than five years old, Kubernetes has already become the de facto container management system worldwide. In 2018, Forrester's cloud predictions declared Kubernetes the victor in the "war for container orchestration dominance." Since then, its popularity has only continued to grow, and CIOs across every sector now consider it to be the gold standard for container management, in particular when it comes to supporting DevOps within their business.


This popularity is no surprise given the benefits of the technology. Kubernetes groups application containers into logical "packages" for simple, fast management and discovery. It also automates the deployment and scaling of containerised applications. Not quite a true platform in itself, Kubernetes can be combined with additional elements to provide the ease of use of Platform-as-a-Service for developers, with the adaptability of Infrastructure-as-a-Service to make it easier to move workloads across infrastructure providers.

Modern enterprises use containers as an increasingly important part of their business. But there are still those implementing container technology without Kubernetes. CIOs should consider the following three key reasons to embrace the industry standard…
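The "logical packages" and automated scaling mentioned above are expressed declaratively. A minimal sketch of a Kubernetes Deployment (the application name and image are assumptions for illustration):

```yaml
# A Deployment groups identical application containers and keeps the
# declared number of copies running, restarting or rescheduling them
# as needed -- deployment and scaling are automated, not scripted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3              # Kubernetes maintains three running copies
  selector:
    matchLabels:
      app: demo-app        # label that logically "packages" the containers
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0
          ports:
            - containerPort: 8080
```

Because the same manifest runs on any conformant cluster, it is also what makes moving workloads across infrastructure providers practical.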


Rook: more than Ceph

Rook allows you to run Ceph and other storage backends in Kubernetes with ease. Storage, especially block and filesystem storage, can be consumed through Kubernetes-native ways. This allows users of a Kubernetes cluster to consume storage as easily as in "any" other standard Kubernetes cluster out there, allowing users to "switch" between Kubernetes offerings to run their containerized applications. Looking at storage backends such as MinIO and CockroachDB, this can also potentially reduce costs if you use Rook to simply run CockroachDB yourself instead of through your cloud provider.

Data and Persistence
Don't we all love the comfort of the cloud? Simple backup and sharing of pictures, for example. Let's ignore for now the privacy concerns of using a company for that instead of, e.g., self-hosting, which would be a whole other topic. I love being able to take pictures of my cats, the landscape, and my food, and to share them. Sharing a picture with the world or just your family is only a few clicks away. Best of all, even my mother can do it.
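Consuming Rook-provisioned storage in the Kubernetes-native way mentioned above is just an ordinary PersistentVolumeClaim. A minimal sketch, assuming a Rook-Ceph operator has been installed and exposes a block StorageClass (the class name `rook-ceph-block` follows Rook's examples but depends on your setup):

```yaml
# An application asks for storage the standard Kubernetes way; the
# Rook-managed Ceph cluster provisions the underlying block device.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce        # block storage, mounted by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block   # StorageClass provided by Rook-Ceph
```

Since the claim itself is vanilla Kubernetes, the same application manifests work unchanged on a cluster whose storage comes from a cloud provider instead.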


How to approach Cloud properly

The idea of relying on lift-and-shift approaches when considering the move into cloud environments is simple. It is also plain wrong and (financially) risky, and it will most likely be a disappointment in regard to scaling, availability, and performance. Instead, one must think of cloud completely differently to actually gain an advantage. Let me explain, and let me show you the pillars required for a successful cloud experience.

Lift & Shift into a cloud environment

Cloud is often understood as a way of thinking about infrastructure: automated, operated by a cloud provider, easy to set up. While this is true, and while it could definitely be the right decision to move to public or private cloud providers, it gives the wrong impression: if you move into a cloud environment only by executing lift-and-shift approaches, you might perhaps save some money on the infrastructure side and perhaps some time on the provisioning side – but you simply exchange one datacenter operator (yourself or your current one) for another, very generic, one (Microsoft, Amazon, Google, Digital Ocean, etc., figure 1). In fact, quite often you do not even save money, since the overwhelming number of offerings and the reduced amount of customization can cause a lack of transparency and might even lead to higher operational costs, as cloud environments are typically operated on an infrastructure level only, leaving management and operations in your hands.

If you only execute a lift-and-shift approach, your software and middleware will not substantially benefit from what cloud actually has to offer: automated scaling, fail-over functionality, zero-downtime deployments, and so on. You might mimic these functionalities by bringing in more infrastructure – but at what cost? Or you could use proprietary offerings from these cloud providers, which will tie you to them and trap you inside their ecosystem.

This kind of vendor lock-in is to be considered a major risk for any enterprise and project, and should therefore be avoided. So, there must be a better way…
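To make "automated scaling" concrete without tying yourself to one provider's proprietary offering, the cloud-native route is a declarative autoscaler on a portable platform such as Kubernetes. A minimal sketch (the target Deployment name `demo-app` and the thresholds are assumptions for illustration):

```yaml
# Instead of mimicking elasticity by buying more infrastructure,
# declare the scaling policy and let the platform add or remove
# copies of the application as load changes.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2           # always enough copies for fail-over
  maxReplicas: 10          # hard ceiling on cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70 % average CPU
```

Because this is standard Kubernetes rather than a provider-specific service, the same policy travels with the application between clouds – the opposite of lock-in.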

Agility in the Cloud

Interview with Felix Evert

Cloud computing creates great possibilities which need to be used wisely. Agility is one way to use them well. I spoke with Felix Evert, Head of Enterprise and Agile Consulting at Cloudical Deutschland GmbH, about the opportunities an organization can realize by taking the agile journey.

Training result: Gone with the wind?

What is the desired result of training? At the least, we expect attendees to learn something new that they use in their job. As a consequence, the new knowledge must be transferred. This "occurs whenever the effects of prior learning", in the learning area, "influence the performance of the later activity", in the practical area (Holding 1965/1991). But there often seem to be invisible hurdles on that track: less than 10 % of training is transferred into practice. This article identifies solution approaches for your training.

The 10/90 rule

Corporates spend billions of euros training their employees every year. Some think the more expensive the training, the more significant the effect. Others feel that if there is training at all, it should be as cheap as possible, because it does not affect anything anyway. Maybe they already know that less than 10 % of their expenditure results in transfer to the job. Or to put it another way: 90 % flies out of the window – money as well as learnings. For any company, training transfer matters a lot. And while training is essential for every attendee, transfer plays an even more significant role. It is the key to change. One single person. A team. A department. A whole organization.

Design Thinking

Interview with Torben Lohmüller

Dark Horse from Berlin is an innovation agency offering workshops on New Work in its academy. Dark Horse was founded in 2009 by 30 graduates of the D-School at the Hasso-Plattner-Institut in Potsdam. Based on the experience of how important collaboration and working at eye level are, especially in interdisciplinary teams, they have taken new paths in collaboration from the very beginning. The experiences of the first years can be read in their book "Thank God it's Monday", published in 2014. Dark Horse Innovation enables organizations to utilize the market potential of the digital age. "We create user-centered products and services and transform structures, processes and minds to empower our clients to be more innovative." Torben Lohmüller has only been a team member at Dark Horse for a relatively short time, but he is convinced of their approaches and is committed to teaching in the workshops offered. He talked to me about the changes in the world of work and New Work.