From system level to root cause
Easy access to and utilization of data throughout industrial processes are fundamental elements of Industry 4.0. High-tech companies have the opportunity to use the emerging data to improve their system life cycles and to define new business cases by transforming themselves into service-providing companies. At the same time, high-tech companies face previously unseen challenges, such as a significant increase in system complexity, customization, continuous evolution, and the diversity of operational environments.
Traditional system engineering approaches, which rely heavily on human experience, are limited in addressing these challenges. The emerging data of high-tech systems offer the opportunity to go beyond the limits of human experience and thus to enhance system engineering.
ESI empowers system engineering by exploiting data insights in a methodological manner to support the innovation of high-tech systems.
Our know-how starts with understanding companies’ various business goals and translating them into relevant data-driven application areas and requirements. For example, product customization can be grounded in actual system usage, as learnt from operational data. Next, we enhance system engineering with methodologies, realized as prototypes, that integrate the insights discovered from data and the knowledge of the systems into system-level reasoning. This is achieved by carrying out knowledge engineering to construct domain models, which structurally capture the knowledge scattered across documents, engineering code, and the minds of experts. In the context of data-driven applications, this knowledge engineering facilitates the effective analysis of operational data. The additional exploitation of data science techniques allows the discovery of operational models.
The integration of these knowledge-driven and data-driven approaches enables continuous system evolution and operational support through the reuse and strengthening of company knowledge.
We developed a demonstrator to explore and address the relevant challenges of putting this methodology into practice. It sketches out the landscape and integration principles of knowledge-assisted data analysis techniques applied to a variety of data streams available within the high-tech industry.
As an industrial showcase, our demonstrator presents the semi-automatic identification of the main root cause of a performance degradation in a factory (a system of systems), through a guided, deep-dive analysis down to the level of machine-specific components (in this case, a software task).
ESI uses the developed methodologies as a driver to improve company processes and to enrich the competencies needed to embed the results in the organization, thereby facilitating the innovation of product and service development.
System-of-systems data from a production line & process mining
A production line usually consists of multiple machines operating as a system of systems. These machines cooperatively process the same objects and generate operational data. Process mining is employed to analyse the operational data and derive the process flows of objects on the production line. Performance bottlenecks (e.g., object distributions across machines and/or at specific machines) can be identified by analysing the extracted flows (see the screenshot). Given the identified bottlenecks, next-step analyses, e.g., of specific machines, are recommended.
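A minimal sketch of the process-mining idea: from a per-object event log, a directly-follows graph between machines can be derived, and the mean waiting time on each transition hints at a bottleneck. The event log, machine names, and timestamps below are invented for illustration; a production tool (such as a process-mining suite) would work on real logs.

```python
from collections import Counter

# Hypothetical event log: (object_id, machine, timestamp) records
# from a production line; values are illustrative only.
events = [
    ("obj1", "M1", 0), ("obj1", "M2", 5), ("obj1", "M3", 9),
    ("obj2", "M1", 1), ("obj2", "M2", 12), ("obj2", "M3", 14),
    ("obj3", "M1", 2), ("obj3", "M2", 20), ("obj3", "M3", 21),
]

def directly_follows(events):
    """Count machine-to-machine transitions and their mean waiting times."""
    by_obj = {}
    for obj, machine, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_obj.setdefault(obj, []).append((machine, ts))
    edges = Counter()
    waits = {}
    for trace in by_obj.values():
        for (m1, t1), (m2, t2) in zip(trace, trace[1:]):
            edges[(m1, m2)] += 1
            waits.setdefault((m1, m2), []).append(t2 - t1)
    return edges, {e: sum(w) / len(w) for e, w in waits.items()}

edges, mean_wait = directly_follows(events)
# The transition with the largest mean waiting time hints at a bottleneck.
bottleneck = max(mean_wait, key=mean_wait.get)  # here: ("M1", "M2")
```

In a real setting, the extracted graph would be visualized so that engineers can see where objects queue up, as in the screenshot mentioned above.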
Machine data & anomaly detection
To diagnose a specific machine issue and narrow down the space of possible causes, we use the operational data generated by the machine’s components. Anomalies are identified in the operational data using unsupervised learning techniques, which can detect anomalies without labelling effort from domain experts. The anomaly detection algorithms used in this demonstrator are provided by Yazzoom.
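As a generic stand-in for such detectors (the demonstrator uses Yazzoom’s algorithms, which are not shown here), the sketch below trains an Isolation Forest, a common unsupervised technique, on unlabelled sensor readings and flags off-nominal samples. The sensor values are synthetic and the two features are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" operation: two sensor features (e.g. a temperature
# around 50 and a flow rate around 1.2); values are invented.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(200, 2))
anomalies = np.array([[80.0, 0.2], [20.0, 3.0]])  # clearly off-nominal
X = np.vstack([normal, anomalies])

# Fit on normal data only; no labels are required.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(X)           # +1 = normal, -1 = anomaly
flagged = np.where(labels == -1)[0]  # indices of detected anomalies
```

The flagged samples (here, the two injected outliers) would then be handed to the expert system described next for root-cause interpretation.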
Machine data & expert system
To help explore the huge amount of available data, system engineering knowledge is used. By structuring this knowledge and automating anomaly detection where possible, users are guided through the data during troubleshooting until the likely root causes are found.
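One simple way to structure such knowledge is as symptom-to-cause rules that an expert system evaluates against the observed (possibly automatically detected) symptoms. The rules and symptom names below are invented for illustration and do not come from a real machine.

```python
# Hypothetical knowledge base: each rule maps a set of required symptoms
# to a candidate root cause. Contents are illustrative only.
RULES = [
    ({"throughput_low", "queue_growing"}, "downstream module stalled"),
    ({"throughput_low", "temperature_high"}, "cooling degradation"),
    ({"latency_spikes"}, "software task overrun"),
]

def diagnose(symptoms):
    """Return root-cause hypotheses whose required symptoms are all observed."""
    return [cause for required, cause in RULES if required <= symptoms]

# Symptoms as they might be reported by automated anomaly detection:
hypotheses = diagnose({"throughput_low", "queue_growing", "latency_spikes"})
```

A real expert system would additionally guide the user to the data supporting each hypothesis; this sketch only shows the matching step.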
Fleet data & probabilistic reasoning
The information available from multiple similar machines can be exploited to prioritize the likely root causes of a single machine’s issue. In particular, the operational data of a fleet of machines provide evidence of whether a likely cause identified on one machine also leads to the same issue on other machines. Probabilistic reasoning is used to statistically rank the likely causes based on this fleet data and knowledge, and to recommend further deep-dive diagnosis of the machine (see the screenshot).
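The ranking step can be sketched as a simple Bayesian update: each candidate cause gets a prior, which is weighted by how often that cause co-occurred with the same issue across the fleet. The cause names, priors, and fleet counts below are invented; the demonstrator’s actual probabilistic model may differ.

```python
# Hypothetical priors over candidate root causes (from engineering knowledge).
priors = {"worn_bearing": 0.3, "sensor_drift": 0.5, "software_bug": 0.2}

# Hypothetical fleet evidence per cause:
# (machines where the cause was present AND the issue occurred,
#  machines where the cause was present)
fleet_evidence = {
    "worn_bearing": (8, 10),
    "sensor_drift": (2, 20),
    "software_bug": (9, 12),
}

def rank_causes(priors, evidence):
    """Posterior ∝ prior × P(issue | cause), estimated from fleet counts."""
    scores = {c: priors[c] * hits / total
              for c, (hits, total) in evidence.items()}
    z = sum(scores.values())
    return sorted(((c, s / z) for c, s in scores.items()),
                  key=lambda cs: -cs[1])

ranking = rank_causes(priors, fleet_evidence)
# The top-ranked cause is recommended for deep-dive diagnosis first.
```

Here the fleet evidence overturns the prior: "sensor_drift" was the most likely cause a priori, but it rarely led to the issue on other machines, so it drops in the ranking.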
Machine component data & metric temporal logic
The diagnosis of a machine component relies on its operational data, such as log events. Failures or errors of the component are identified by verifying its performance metrics, such as latency or throughput requirements. This demonstrator provides a glimpse of the TRACE tool developed by ESI. In particular, metric temporal logic (MTL) is used to formally verify the performance requirements against the component’s log data. If a requirement is not satisfied, the violations are visualized, followed by an appropriate recommendation for action. Note that the requirements are specified in a dedicated language that engineers can easily understand; the formal language (i.e., metric temporal logic) underlying the analysis remains hidden from the engineers.
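To illustrate the kind of requirement MTL can express, the sketch below checks a bounded-response property over a timestamped event log: "globally, every 'start' event is followed by a 'done' event within `bound` time units", i.e. □(start → ◇[0,bound] done). The log, event names, and checker are invented for illustration; TRACE’s actual specification language and log format are not shown here.

```python
def check_bounded_response(log, trigger, response, bound):
    """Return timestamps of trigger events that miss their deadline.

    log is a list of (timestamp, event_name) pairs; a trigger at time t
    is satisfied if some response occurs in the interval [t, t + bound].
    """
    violations = []
    for t, name in log:
        if name != trigger:
            continue
        satisfied = any(name2 == response and t <= t2 <= t + bound
                        for t2, name2 in log)
        if not satisfied:
            violations.append(t)
    return violations

# Hypothetical component log: the task started at t=10 only finishes
# at t=18, exceeding the (invented) 5-unit latency requirement.
log = [(0, "start"), (3, "done"), (10, "start"), (18, "done")]
violations = check_bounded_response(log, "start", "done", bound=5)
```

In the demonstrator, such violations would be visualized on the timeline of the log, pointing the engineer to the offending software task.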