
GN Anomaly Detection

Services related to corporate assets

Early detection of anomalies affecting any business asset is of utmost importance to avoid disruption of the connected business processes.

The Anomaly Detection service is an advanced tool that supports companies, in a standardized way, in the continuous collection and monitoring of anomalies from complex data and contexts that could not otherwise be managed by human personnel.

The idea was born as a project presented by GN Techonomy, and subsequently awarded, at the Call 2021 SI4.0 - Innovative Solutions 4.0 (https://www.gntechonomy.com/news/gn-techonomy-vince-bando-si4-0---di-regione-lombardia-in-collaboration-con-csmt-polo-tecnologico/). The recognition of the project's relevance to the needs of the Lombard industrial fabric quickly led to the first experiments in monitoring, through IoT technologies, the performance of the machinery of a prominent GN Techonomy customer.

The experimental project confirmed both the potential and the urgency of the solution: the market's limited offering, its lack of architectural modernity, and the obsolescence of its detection techniques, ill suited in particular to heterogeneous, large-scale scenarios such as Internet of Things applications, led our qualified technical staff to design a state-of-the-art anomaly detection strategy and to develop a large-scale, fully in-house software solution.

The Anomaly Detection service is an Automated Machine Learning tool that ensures the continuous monitoring of time series, that is, sets of time-ordered data values. Its purpose is to detect and promptly report any unexpected or unusual phenomenon, using algorithms able to distinguish such situations from the expected trend of the time series under consideration.
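For illustration only, here is a minimal Python sketch of this idea; the rolling z-score used below is our own assumption for the example, not the service's actual algorithm:

    import statistics

    def detect_anomalies(series, window=30, threshold=3.0):
        """Flag points that deviate strongly from the recent trend.
        Illustrative only: a plain rolling z-score, not the detection
        strategy used by the actual service."""
        anomalies = []
        for i in range(window, len(series)):
            history = series[i - window:i]
            mean = statistics.fmean(history)
            std = statistics.stdev(history) or 1e-9  # avoid division by zero
            z = abs(series[i] - mean) / std
            if z > threshold:
                anomalies.append((i, series[i]))
        return anomalies

    # A flat signal with one unexpected spike at index 70
    signal = [10.0] * 100
    signal[70] = 25.0
    print(detect_anomalies(signal))  # -> [(70, 25.0)]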

To do this, we have implemented an automated, multilevel "no-code" approach: an adaptive system supervises what is acquired and selects the best detection strategy according to reliability and accuracy metrics. In the same way that a human observer would weigh several factors before declaring an anomaly (outliers, drifts over time, temporary instability, context...), GN Techonomy's approach likewise draws on different information, both point-wise and contextual, to recalibrate its detection.
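The following hypothetical Python sketch conveys the spirit of that metric-driven selection: candidate detectors are scored on a reference window and the best one is kept. The F1 metric and the labeled window are stand-ins of ours; a genuinely no-code system would presumably rely on unsupervised reliability metrics instead:

    def f1(predicted, actual):
        """F1 score over two sets of flagged indices (illustrative metric)."""
        p, a = set(predicted), set(actual)
        tp = len(p & a)
        if tp == 0:
            return 0.0
        precision, recall = tp / len(p), tp / len(a)
        return 2 * precision * recall / (precision + recall)

    def select_detector(candidates, reference_series, known_anomalies):
        """Run every candidate on a reference window; keep the best scorer.
        'candidates' maps a name to a function returning flagged indices."""
        scores = {name: f1(detect(reference_series), known_anomalies)
                  for name, detect in candidates.items()}
        return max(scores, key=scores.get)

    # Two toy candidates: a sane threshold and an over-sensitive one
    candidates = {
        "fixed-threshold": lambda s: [i for i, v in enumerate(s) if v > 20],
        "over-sensitive":  lambda s: [i for i, v in enumerate(s) if v > 9],
    }
    series = [10.0] * 50
    series[30] = 25.0
    print(select_detector(candidates, series, known_anomalies=[30]))  # fixed-threshold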

All of this happens in real time, without human intervention, and for any signal; the algorithms embedded in the system represent the state of the art and are constantly updated and extended to take advantage of the most modern anomaly detection techniques.

The user does not have to worry about applying specific configurations or tuning for each monitored signal; nor is any expertise in the field required, because the solution adapts independently to whatever it monitors.

The flow of collected information is automated in its entirety: the system learns the "normal" trend of the data and identifies, in real time, the points where the monitored signal falls outside an acceptable deviation range.
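A minimal sketch of this learn-and-flag loop, assuming a simple exponentially weighted band (the AdaptiveBand class and its parameters are our illustration, not the product's actual model):

    class AdaptiveBand:
        """Learn the 'normal' range of a stream online and flag samples
        outside mean +/- k * deviation. Illustrative only."""

        def __init__(self, alpha=0.05, k=4.0, warmup=30):
            self.alpha, self.k, self.warmup = alpha, k, warmup
            self.mean, self.dev, self.n = None, 0.0, 0

        def update(self, x):
            self.n += 1
            if self.mean is None:            # first sample: initialize
                self.mean = x
                return False
            deviation = abs(x - self.mean)
            in_warmup = self.n <= self.warmup
            is_anomaly = not in_warmup and deviation > self.k * max(self.dev, 1e-9)
            if in_warmup or not is_anomaly:  # recalibrate on normal data only
                self.mean += self.alpha * (x - self.mean)
                self.dev += self.alpha * (deviation - self.dev)
            return is_anomaly

    band = AdaptiveBand()
    stream = [10.0, 10.2, 9.9, 10.1] * 20 + [30.0]
    flags = [band.update(x) for x in stream]
    print(flags[-1])  # True: 30.0 falls outside the learned band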

The tool's flexibility also shows in the hybrid, scalable execution paradigm it supports: it can run both on powerful cloud computing infrastructures and on field hardware with limited resources, lending itself to cloud computing and edge computing scenarios alike.
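Purely as an illustration of such a dual-target design, here is a hypothetical profile-selection snippet; the DEPLOY_TARGET variable, model names, and thresholds are all assumptions of ours:

    import os

    def pick_execution_profile():
        """Choose a detection profile from the execution environment.
        Hypothetical: none of these names reflect the real product."""
        cores = os.cpu_count() or 1
        if os.environ.get("DEPLOY_TARGET") == "edge" or cores <= 2:
            # Limited field hardware: one lightweight streaming model
            return {"models": ["adaptive-band"], "batch_size": 1}
        # Cloud: a fuller ensemble, processed in large batches
        return {"models": ["adaptive-band", "iqr", "forecast-residual"],
                "batch_size": 1024}

    print(pick_execution_profile())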

However powerful and expandable, cloud infrastructure is often a bottleneck in terms of performance, cost, and the resilience of the solution as a whole.

By decentralizing the computational load, the solution also becomes applicable to large installed bases: it reuses hardware already in place and has no single point of failure. Our dual "cloud and edge" approach lets you take advantage of both options.

Combining adaptability, scalability and resilience, the Anomaly Detection tool aims to provide several benefits:

  • Continuous monitoring: a virtual control room, constantly online and applied at scale. No anomaly goes unnoticed;
  • Process improvement: one of the large-scale effects of monitoring is an increase in the total quality of the reference process. Alerts can reduce costs and reaction times, as well as enable better planning of maintenance operations;
  • Simplicity: no statistical or machine learning skills are required; the system manages everything independently and is just a click away. No initial or periodic tuning is needed, since the solution adapts to the operating conditions of the monitored signal, carrying out continuous steady-state checks and recalibrations;
  • Minimized TCO: leverage existing hardware in edge computing cases or, in cloud computing cases, scale dynamically according to load. The generic approach avoids developing a specific detection algorithm for each signal/machine combination, saving cost and time;
  • Uniformity: data can be acquired directly in the field, as in the edge case, or taken from a data lake/historian, as in the cloud case. No matter which signal the detection is applied to or where the data source is located, the generated alerts always contain all the details of the incident, so that anomalies can be viewed in real time or stored in a transparent long-term archive;
  • Timeliness: alerts are sent selectively and immediately over the channels configured by the user (e.g. e-mail, SMS...). The alerting module allows complex routing logic, directing notifications to the right staff or specialists (see the sketch after this list);
  • Integrability: the APIs provided by the tool make it an integral part of the ecosystem and of end-to-end business flows. Anomalies become controllable components of business processes: very often they will not be avoidable, but their detection and study improve the efficiency and total quality of management processes.
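As mentioned under the Timeliness point, here is a hypothetical sketch of such routing logic; the rule fields, channel callables, and asset names are all illustrative, not the tool's real alerting API:

    def route_alert(alert, rules):
        """Forward an alert to every channel whose rule matches.
        Hypothetical sketch, not the tool's real alerting API."""
        for rule in rules:
            if (alert["severity"] >= rule["min_severity"]
                    and alert["asset"] in rule["assets"]):
                rule["send"](alert)

    rules = [
        {"min_severity": 1, "assets": {"press-01"},
         "send": lambda a: print("e-mail:", a["detail"])},
        {"min_severity": 3, "assets": {"press-01"},
         "send": lambda a: print("SMS to on-call:", a["detail"])},
    ]
    route_alert({"asset": "press-01", "severity": 3,
                 "detail": "signal out of band"}, rules)
    # Both rules match: the e-mail channel and the SMS escalation fire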