Executive Voices

Four principles to simplify your connected world

The world – and especially the IT world – is becoming undeniably complex. Fortunately, solutions are emerging that help simplify this complexity. While they all solve different problems, they are based on similar underlying principles.

Bernd Gross

If there is one sentence that you have heard too many times, it is that you need to adapt to a world of growing complexity — or else. Growing complexity is deeply woven into our very existence, sociologically, technologically, business-wise. Trivial, in a way.  

Information Technology is an interesting case in this respect. There is no denying the extraordinary growth of complexity, driven by factors like hardware advances, the Internet, and in particular, the Internet of Things. There is less awareness of the growing ability to deal with it. In recent years, trends like microservice architectures, the DevOps paradigm, and API management have enjoyed great success. What is their common secret? They show a way to deal with complexity!

“Growing complexity is deeply woven into our very existence…there is less awareness of the growing ability to deal with it.”

Each of these trends solves a specific problem, but they are all based on underlying principles that can be applied to tackling complexity in general. Let’s have a look at the four most important ones.

Principle one: Do not boil the ocean

The first principle is not to boil the ocean. Sounds trivial? Then why is ocean boiling still behind the poor cost-benefit ratio of so many IT projects? Many years ago, Service Oriented Architecture (SOA) taught us great (and still valid!) lessons about defining high-quality business services. But the idea of wrapping all existing applications in services and hoping for re-use benefits hardly ever worked. Hadoop-based data lakes came later, but did not fare any better, for similar reasons. Things live where they live for a reason, and they are in the format that they are in for a reason. We need to embrace things where they live and, if possible, work with them from there. Of course, there are still many reasons to wrap functionality in services or to move data around. But doing so must follow clear and defined objectives.

Principle two: Embrace change and learn from DevOps

The second principle is to embrace not only where things live, but also the fact that they change. The world of software engineering struggled with change for decades. Agile methods were a first breakthrough, but only DevOps taught us a complete and convincing way to deal with change in software engineering. Agility cannot end with developers handing over a software release for test and rollout; it has to be ingrained in the entire development and operations cycle.

This paradigm can be applied to other objects as well. For example, what if some master data changes? Updating the system of record is quick, but the real challenge is typically to make the change effective in all affected target systems in an efficient manner. Not surprisingly, DevOps principles are being applied to more and more kinds of objects today. DataOps and ModelOps (for machine learning models) are prime examples.
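To make the propagation challenge concrete, here is a minimal, hypothetical sketch in Python: a change in the system of record is pushed automatically to every registered target system instead of being re-keyed by hand. All names (SystemOfRecord, TargetSystem, the record keys) are invented for illustration and do not refer to any specific product.

```python
# Hypothetical sketch: propagating a master-data change from the system of
# record to all affected target systems in one automated step.

class TargetSystem:
    """Any downstream system (CRM, billing, data warehouse, ...) consuming master data."""
    def __init__(self, name):
        self.name = name
        self.records = {}

    def apply_change(self, key, value):
        self.records[key] = value
        print(f"{self.name}: updated {key} -> {value}")


class SystemOfRecord:
    """Single source of truth; notifies every registered target on change."""
    def __init__(self):
        self.records = {}
        self.targets = []

    def register(self, target):
        self.targets.append(target)

    def update(self, key, value):
        self.records[key] = value            # the quick part: update the source
        for target in self.targets:          # the hard part: make it effective everywhere
            target.apply_change(key, value)


sor = SystemOfRecord()
sor.register(TargetSystem("CRM"))
sor.register(TargetSystem("Billing"))
sor.update("customer-4711/address", "New Street 1, Berlin")
```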

Principle three: Take advantage of the power of policies

The third principle is: Where possible, don’t use commands (imperative) but policies (declarative). A command executes a function, like ‘start an application’. A declaration, or policy, describes a state, like ‘two instances of the application have to be running at all times’. To implement the latter, a controller must supervise the actual state at all times and take action whenever it detects a deviation from the declared state. It is not viable to deal with an exploding number of moving parts in an imperative fashion; policies provide exactly the kind of efficiency gain needed.
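To illustrate the difference, here is a minimal, hypothetical reconciliation loop in Python. The policy is plain data (desired_instances: 2); the controller compares the desired state with the actual state and acts only when they diverge. The names and the in-memory "environment" are invented for this sketch and do not stand for any particular product.

```python
import time

# Declarative policy: describe the desired state, not the commands to reach it.
policy = {"app": "order-service", "desired_instances": 2}

running_instances = []  # stand-in for the real runtime environment

def start_instance(app):
    running_instances.append(app)
    print(f"started {app} (now running: {len(running_instances)})")

def stop_instance(app):
    running_instances.pop()
    print(f"stopped {app} (now running: {len(running_instances)})")

def reconcile(policy):
    """Controller: compare actual state with the declared state and close the gap."""
    actual = len(running_instances)
    desired = policy["desired_instances"]
    if actual < desired:
        for _ in range(desired - actual):
            start_instance(policy["app"])
    elif actual > desired:
        for _ in range(actual - desired):
            stop_instance(policy["app"])
    # if actual == desired: nothing to do

# A real controller would run this loop forever; three iterations suffice here.
for _ in range(3):
    reconcile(policy)
    time.sleep(0.1)
```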

An ‘imperative mindset’ has dominated IT for decades, with the notable exception of security systems: for half a century they have been the ‘controller’ enforcing access rights that are declared in the form of access policies. API management has also been around for quite a while; there, policies for logging, routing, load balancing, and many other purposes are enforced by an API gateway.
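The same pattern can be sketched for API management: policies are data attached to an API, and the gateway enforces them on every call. The following is only an illustrative sketch under that assumption; the route, policy names, and status codes are invented and do not reflect the configuration model of any particular gateway product.

```python
# Hypothetical sketch: an API gateway enforcing declarative policies per route.

policies = {
    "/orders": {"require_auth": True, "log_requests": True, "rate_limit_per_min": 60},
}

request_counts = {}

def handle_request(path, token=None):
    policy = policies.get(path, {})
    if policy.get("require_auth") and token is None:
        return 401, "missing credentials"
    if policy.get("log_requests"):
        print(f"request to {path}")
    limit = policy.get("rate_limit_per_min")
    if limit is not None:
        request_counts[path] = request_counts.get(path, 0) + 1
        if request_counts[path] > limit:
            return 429, "rate limit exceeded"
    return 200, "forwarded to backend"

print(handle_request("/orders"))                  # rejected: no token
print(handle_request("/orders", token="abc123"))  # logged and forwarded
```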

As with DevOps, the IT world is still far from using the full potential of policies. Kubernetes, for example, enjoys huge success because it manages the resources around containers (pods, nodes, endpoints, …) in a declarative fashion. But beyond that, its Custom Resource Definitions (CRDs) can be used for any other resources, like virtual machines or even hardware. It is exciting to watch Kubernetes grow into a declarative controller for ‘everything cloud’.

Principle four: Unleash business knowledge

The final principle: Enable business power users to express their domain knowledge directly in IT systems. Ideally, their input is immediately consumable by applications. For that to work, IT systems must lower the hurdle for them by offering intuitive, non-programmatic, and (again!) declarative ways to express business facts and rules.
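As a hedged illustration of what ‘immediately consumable’ can mean: a business rule is captured as plain data (which a power user could maintain in a form or spreadsheet-like UI) and evaluated directly by the application, so changing the rule needs no code change. The rule, field names, and operators below are invented for this sketch.

```python
# Hypothetical sketch: a business rule kept as data, editable by a power user,
# and evaluated directly by the application.

rule = {
    "name": "free_shipping",
    "if": {"field": "order_total", "operator": ">=", "value": 100},
    "then": {"shipping_cost": 0},
}

OPERATORS = {
    ">=": lambda a, b: a >= b,
    "<":  lambda a, b: a < b,
    "==": lambda a, b: a == b,
}

def apply_rule(rule, order):
    """Apply the rule's 'then' part if its 'if' condition holds for the order."""
    cond = rule["if"]
    if OPERATORS[cond["operator"]](order[cond["field"]], cond["value"]):
        order.update(rule["then"])
    return order

print(apply_rule(rule, {"order_total": 120, "shipping_cost": 5}))
# -> {'order_total': 120, 'shipping_cost': 0}
```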

The current ‘tsunami’ of cloud apps needs integration solutions that are built with these principles in mind: integration tools that can be consumed as Software as a Service, are power-user friendly, and allow you to build lightweight and agile solutions. Tooling like this enables you to operate your business while keeping systems and data where they are, instead of fighting a hopeless centralization battle. And yes, no integration tooling is complete if it isn’t supported by a mature API management suite that brings the power of policies.

Growing complexity is real, but so is our growing understanding of successful ways to deal with it. Simplify the connected world – with the right paradigms and principles on your side. If you are looking for tooling that follows these principles, take a look at Software AG.

This article was co-written by Software AG’s Bernd Gross, CTO, and Burkhard Hilchenbach, Lead Architect, Hybrid IT.