Server monitoring vs. hybrid cloud monitoring – what changes?

Only a few years ago, it was simple client/server structures that needed to be monitored; today it is mainly cloud concepts and cutting-edge hybrid IT environments that pose new monitoring challenges. According to IDC, 85 percent of companies use a multi-cloud environment, and more than half of them use at least five different public cloud services – and are struggling with the increasing complexity and dynamics. The networking of machines (IoT) is also driving this development. Given these facts, it is not enough to treat clouds as just another data source for central IT monitoring – classic systems are reaching their limits. Cloud service monitoring places a series of new requirements on monitoring systems, some of which are explained below.

Monitoring agile workloads

One of the great advantages of cloud hosting is that the underlying infrastructure – processing power, storage, or network capacity (Infrastructure as a Service) – can be scaled automatically. The reason is that many companies see daily or weekly peaks for their applications; after a mail campaign, for example, a web shop can usually expect a stampede. Additional servers are spun up for such peaks and must be monitored without manual configuration. A monitoring solution must therefore scale along with the provisioned IT resources: the number of instances may change constantly, and every single one must be monitored.
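
By way of illustration, here is a minimal sketch of such automatic instance discovery, assuming an AWS environment and the boto3 SDK; the registration step is a hypothetical hook into whatever monitoring tool is in use, not a real API:

```python
import boto3

# Hedged sketch: discover all currently running EC2 instances so that
# each one can be covered by a monitoring sensor. Assumes AWS
# credentials are configured in the environment.
ec2 = boto3.client("ec2")

def running_instances():
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            yield inst["InstanceId"], inst.get("PrivateIpAddress")

for instance_id, ip in running_instances():
    # In a real setup, this would register the instance with the
    # monitoring tool; printing stands in for that hypothetical hook.
    print(f"ensure sensor for {instance_id} at {ip}")
```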

Monitoring Platform as a Service (PaaS)

PaaS is a type of service that provides a programming model and developer tools to create and run cloud-based applications. However, these application platforms, such as Azure App Services or Google App Engine, can pose a monitoring problem. With Azure, for example, there is no full access to the underlying server, and the Windows performance counters are not available either. Via the Azure Kudu console, however, an event viewer, IIS logs, running processes, and other information can be accessed. In such a scenario, a dedicated WebJob can act as a monitoring agent to ensure that the instances are monitored.
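
To illustrate the Kudu route, the following hedged sketch polls the Kudu REST API for the processes running in the App Service sandbox; the site name and the credential environment variables are assumptions for this example, not part of any official agent:

```python
import os
import requests

SITE = "my-webapp"  # hypothetical App Service name
KUDU = f"https://{SITE}.scm.azurewebsites.net/api"
# Deployment credentials, assumed to be provided via the environment.
AUTH = (os.environ["KUDU_USER"], os.environ["KUDU_PASSWORD"])

def running_processes():
    """Query Kudu's process API for what is running in the sandbox."""
    resp = requests.get(f"{KUDU}/processes", auth=AUTH, timeout=10)
    resp.raise_for_status()
    return [(proc["id"], proc["name"]) for proc in resp.json()]

for pid, name in running_processes():
    print(pid, name)
```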

Monitoring microservices and containers

Microservices are one of the most effective approaches to building modular applications. The idea behind them: if applications are designed as sets of services, those services can be developed, tested, and scaled independently of one another, and the network of microservices then constitutes the overall system. This approach brings a series of advantages; however, in a microservice architecture it can be particularly challenging to pinpoint the cause of errors or performance bottlenecks. A single user transaction can touch multiple services. Cluster services can hit their network I/O limits. A long chain of service calls can cause a backlog in the system, leading to long waiting times or cascading errors. Microservices typically run in containers, so companies have to collect monitoring metrics not only at the VM level but also at the container level. This can be done, for example, with Kubernetes, which collects statistics on the CPU, RAM, file system, and network resources of each container.
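
As a minimal sketch of collecting such container-level metrics, the following assumes a cluster with the metrics-server add-on and the official Kubernetes Python client; it reads the same metrics.k8s.io API that backs kubectl top:

```python
from kubernetes import client, config

# Sketch: list per-container CPU and memory usage via the Kubernetes
# Metrics API. Assumes a local kubeconfig and a running metrics-server.
config.load_kube_config()
api = client.CustomObjectsApi()

pod_metrics = api.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="pods"
)
for pod in pod_metrics["items"]:
    for container in pod["containers"]:
        usage = container["usage"]  # e.g. {"cpu": "3m", "memory": "24Mi"}
        print(pod["metadata"]["name"], container["name"],
              usage["cpu"], usage["memory"])
```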

Container and Docker monitoring

Docker monitoring is an essential requirement for efficiently monitoring a "containerized world". Docker is an open-source technology for deploying applications in an automated manner, dynamically organized in containers. Docker itself provides rudimentary information on containers; however, comprehensive data aggregation and display features are required to monitor such applications. Relevant metrics include, for example, CPU, memory, disk I/O, and network I/O; events, i.e. alarms from the Docker containers; and services, e.g. the number of defined containers. Managing the container environment is much easier with orchestration software such as Kubernetes or OpenShift. These tools also expose a variety of metrics that must be included in the monitoring of the IT landscape.
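
For illustration, here is a short sketch using the Docker SDK for Python that reads one stats snapshot per running container; how these counters are aggregated and displayed is left to the monitoring integration:

```python
import docker

# Sketch: take a single stats snapshot (CPU, memory, network counters)
# for every running container via the local Docker daemon.
client = docker.from_env()

for container in client.containers.list():
    stats = container.stats(stream=False)  # one JSON snapshot, no stream
    mem_bytes = stats["memory_stats"].get("usage", 0)
    cpu_ns = stats["cpu_stats"]["cpu_usage"]["total_usage"]
    networks = stats.get("networks", {})
    print(container.name,
          f"mem={mem_bytes}B",
          f"cpu_total={cpu_ns}ns",
          f"interfaces={list(networks)}")
```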

E2E digital experience monitoring

An increasing number of digital services run outside company firewalls, so demand for solutions that monitor end-user experience will keep growing. These solutions handle both active and passive monitoring of a large number of service infrastructures outside the IT perimeter – with a detailed view of the problems that affect user experience and business results. Synthetic monitoring technologies, which simulate user interaction, are therefore gaining ground in ever more complex digital services and in ever more dynamic, distributed, and heterogeneous environments.
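
A hedged sketch of such a synthetic check: it replays a short, scripted user journey and raises an alert when a step fails or exceeds a latency budget. The URLs and the threshold are made-up assumptions:

```python
import time
import requests

# Hypothetical two-step user journey for a web shop.
STEPS = [
    ("homepage", "https://shop.example.com/"),
    ("search", "https://shop.example.com/search?q=monitor"),
]
LATENCY_BUDGET_S = 2.0  # assumed acceptable response time per step

def run_journey():
    with requests.Session() as session:  # keeps cookies, like a real user
        for name, url in STEPS:
            start = time.monotonic()
            resp = session.get(url, timeout=10)
            elapsed = time.monotonic() - start
            ok = resp.ok and elapsed <= LATENCY_BUDGET_S
            status = "OK" if ok else "ALERT"
            print(f"{name}: HTTP {resp.status_code} in {elapsed:.2f}s -> {status}")

run_journey()
```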

Predictive monitoring in times of IoT & AI

The volume of data to be analyzed for monitoring is growing massively in the IoT age, adding to the mission-critical importance of IT for the digitization of business models. This brings new challenges for monitoring, e.g. analyzing and correlating large log files. Generally speaking, it is no longer sufficient to respond to malfunctions quickly and remedy them rapidly; the goal must be to detect imminent malfunctions early and head them off. For this purpose, classic monitoring approaches based on averages and threshold values should be extended by a method that conveys more information. A successful approach is the statistical method of probability density, which describes a probability distribution within a given interval. Instead of a single mean value, it returns a density showing how data traffic and data volumes are distributed. In addition to an improved view of the IT, the systems involved also need to train themselves using machine learning; when learned patterns recur, they can respond extremely fast. Artificial intelligence for monitoring is still in its infancy, but the first attempts are very encouraging, and demand in companies is high.
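
To make the density idea concrete, here is a small sketch with made-up response times: the mean suggests a single typical value, while a kernel density estimate reveals a second, slower mode that a simple threshold on the mean would hide:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Made-up sample: mostly fast responses plus a slow second mode.
rng = np.random.default_rng(42)
response_ms = np.concatenate([
    rng.normal(120, 15, 900),  # normal traffic around 120 ms
    rng.normal(480, 40, 100),  # a slower mode, e.g. cache misses
])

density = gaussian_kde(response_ms)  # estimate the probability density
grid = np.linspace(response_ms.min(), response_ms.max(), 200)

print(f"mean = {response_ms.mean():.0f} ms")  # ~156 ms, misleading
peak = grid[np.argmax(density(grid))]
print(f"density peak = {peak:.0f} ms")        # near 120 ms, the typical case
```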

Business Service Monitoring of the future

New technologies such as cloud, microservices, and containers are forcing organizations to redefine their IT monitoring strategies. The digital transformation in companies requires a new generation of monitoring systems that are capable of monitoring complex, heterogeneous hybrid infrastructures both flexibly and proactively; they should also take into account real-user experience, i.e. the behavior of services from the point of view of users. In the future, AI-driven systems will be able to read and analyze huge amounts of log file data in real time, so that they can detect problems at an early stage.
