The fight continues in 2017 for many IT departments trying to attain their rightful status within their organizations. Often they are seen as just another cost factor and treated as a scapegoat when things don’t run perfectly; when the many complex, interconnected IT systems do run smoothly, it’s taken for granted. Even in the 21st century, it’s tough for IT to get respect. For this reason, among many others, IT departments all over the world are having a hard time recruiting enough personnel to maintain their systems. Inadequate maintenance then leads to unnecessary outages and other problems, as staff are often overwhelmed by an inordinate workload and the fast pace of technological innovation.
Another factor plays a role here: IT systems that have grown organically over time. When a lot of time and money is invested in specific adaptations, these systems often become interconnected in a myriad of ways. Ironically, the very interfaces originally intended to increase the value of the connected systems create interdependencies that are highly specific to the applications they were built for and unusable to others. Updating or replacing software and systems can become a gargantuan task, with administrators faced with pruning a complex monolith that has grown over many years. The expense and difficulty of such a challenge often lead managers to kick the can down the road, putting off important work and prolonging the life of the system. If critical security updates are skipped, the software eventually becomes obsolete and vulnerable to attack.
Once started, this dynamic comes to permeate the entire IT department, affecting every team. The monitoring function is particularly susceptible to these effects: overburdened staff often view monitoring as a tedious and uninteresting task, which easily leads to neglect. New devices might not be added to the monitoring until staff find time, or the job is done incompletely, covering only the most basic checks.
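To make “the most basic checks” concrete: often the first (and sometimes only) thing configured for a new device is a plain TCP connect test. A minimal sketch in Python follows; the function name, host, port, and timeout values are illustrative and not taken from any particular monitoring tool.

```python
# A sketch of the kind of "most basic check" described above:
# a plain TCP connect test against a host and port.
import socket


def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the address and attempts the handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False
```

A check like this says nothing about whether the service behind the port is actually healthy, which is exactly why stopping here counts as incomplete monitoring.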
How do I prepare my IT environment for the future?
What solutions present themselves? If, for example, you could reduce complexity, the effort required for necessary tasks would decrease, making the administrators’ job less onerous. And one way to simplify the job is to employ so-called microservices.
While it’s never wise to generalize, using microservices sensibly can enable organizations to take a first step toward a more secure IT environment, to reduce interdependencies and to reduce the workload of their IT administrators.
Get rid of those monoliths and prosper!
Monoliths are cumbersome, don’t scale well, make your life unnecessarily complex, and make you their servant – the classic “all in one” solution.
Microservices, on the other hand, are small, simple, and handy: every job has its own service. Since they require staff to learn a wider range of tools, the initial effort can actually be higher, but the long-term advantages outweigh this small extra investment in time. When problems occur with any of your implemented solutions, in an environment based on microservices they can simply be replaced. Microservices also scale much better, since you can usually run as many instances as needed to get the job done, acting as a cluster that spreads out the workload. And with a view toward “continuous integration”, microservices offer the advantage of being easier to develop, test, and deploy. New employees and developers can be brought up to speed more easily, increasing productivity.
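As an illustration of “every job has its own service”, here is a minimal sketch of such a service using only Python’s standard library: it answers exactly one question over HTTP (is it healthy?) and does nothing else. The endpoint path, port, and names are assumptions made for this example, not part of any standard or product.

```python
# A minimal single-purpose microservice: one job, one service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the example quiet


def serve(port: int = 8080) -> None:
    """Run the service until interrupted (blocking call)."""
    HTTPServer(("127.0.0.1", port), HealthHandler).serve_forever()
```

In production, a service like this would typically run as many identical instances behind a load balancer – exactly the clustering behavior described above – and because it is so small, replacing or rewriting it carries little risk.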
Beyond all these advantages, however, microservices give administrators the flexibility to update to modern software at any time, since old technologies can be replaced by new ones quickly. Every application and job can be developed in the programming language best suited to the task.
The conclusion is clear: an architecture based on microservices offers several compelling advantages over the classic model. The trend toward cloud-compatible software is already evident, and since scalability means systems are constantly being provisioned and decommissioned, monitoring has to change to take this into account. One of the ways administrators can handle the problem is described in Part 2 of this blog series: Why it makes sense to migrate from Nagios to Sensu now.
Subscribe to our blog and never miss another article!