Large Language Models for Systems Engineering

TNO-ESI and Philips use generative AI to tackle talent gaps in engineering


To counter the pressing issue of talent scarcity, TNO-ESI and Philips are exploring, in a strategic collaboration, how generative AI can improve processes, manage and share knowledge, facilitate the use of sophisticated toolkits, and provide human-centric solutions.

The high-tech industry is grappling with a severe talent shortage, a challenge exacerbated by the ongoing digital revolution. This shortfall poses not just a hurdle but a significant threat to the growth and sustainability of businesses worldwide. Korn Ferry’s global talent shortage report [1] highlights the widening gap of highly skilled tech talent in the Netherlands. In 2022, the European Labour Authority ranked the Netherlands as having the third-highest skills shortage in Europe. Demographic shifts, with baby boomers retiring and insufficient young talent entering the workforce, are further aggravating the situation.

In today’s socio-economic landscape, addressing this talent gap goes beyond merely recruiting and retaining high-tech personnel. The focus must shift toward seizing opportunities to boost efficiency, which requires enhancing digital capabilities, particularly in complex analysis and decision-making. According to a recent IBM report [2], more than three-quarters (77%) of C-suite executives believe that within the next three years, digital assistants—powered by traditional or generative AI—will play a pivotal role in supporting the workforce with complex, mission-critical decisions. Automation and AI are emerging as a powerful combination, not only improving job performance and productivity but also helping to address labor shortages and augment essential skills.

In a strategic collaboration, TNO-ESI and Philips are exploring the potential of generative AI for system and software engineering. Our primary objective in this initiative is to tackle the pressing issue of talent scarcity in the high-tech sector by leveraging cutting-edge Generative AI technologies. These tools are designed not to replace professionals but to empower them, enabling faster data-driven insights.

Two key projects in this program are being run in partnership with Philips.

DELPHI

The Corrective and Preventive Action (CAPA) process is crucial for maintaining quality in medical device manufacturing. Improper CAPA procedures consistently rank among the top reasons for FDA Observations and Warning Letters. Traditionally, CAPA inquiries are conducted by experienced domain experts, such as system designers and quality management professionals.

The DELPHI project aims to develop an AI-powered assistant to support the CAPA process. While the ultimate assistant should enhance efficiency by streamlining data analysis and aiding decision-making, a crucial challenge lies in establishing trust. Given CAPA's highly quality-oriented and regulated nature, confidence in the assistant’s ability to provide reliable, accurate recommendations while adhering to stringent regulatory standards is essential. Transparency, validation, and accountability are critical for its successful adoption.

In its initial phase, the DELPHI project focused on enhancing CAPA database searches, which involve comparing issue data across time periods to identify trends and patterns. This search process is vital throughout the CAPA stages, from initial investigation to root cause analysis. Conducting these searches requires deep cross-domain knowledge and strong analytical skills to translate brief issue descriptions into effective search criteria.

Our AI assistant simplifies this process using a semantic search powered by large language models. It intelligently analyzes issue reports, identifies related cases, and helps engineers efficiently navigate past incidents while ensuring they retain control over the process. This approach has the potential to enhance reliability and facilitate documentation.
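As a rough illustration of how such a semantic search ranks related cases (not the DELPHI implementation, whose models and data are not described here), issue reports can be embedded as vectors and ranked by similarity to a query. In this sketch a simple bag-of-words vector stands in for the LLM embedding model, and all report texts are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for an LLM embedding model: a bag-of-words count vector.
    # A real assistant would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_related(query: str, reports: list[str], top_k: int = 2) -> list[str]:
    # Rank past issue reports by semantic similarity to the new issue.
    q = embed(query)
    ranked = sorted(reports, key=lambda r: cosine(q, embed(r)), reverse=True)
    return ranked[:top_k]

reports = [
    "Sensor calibration drift detected after firmware update",
    "Display flicker reported during imaging procedure",
    "Calibration error in temperature sensor readings",
]
print(find_related("sensor calibration issue", reports))
```

The engineer remains in control: the assistant only surfaces candidate matches, and the engineer decides which past incidents are actually relevant.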

LLM4Legacy

Maintaining and understanding legacy code is a significant challenge, yet it remains essential for the operation of complex systems, particularly in the high-tech industry. Static analysis techniques are commonly used to gain insights from codebases. While reliable, these methods require deep parser-specific and domain-specific expertise, creating a steep learning curve that limits their widespread adoption in industrial settings. On the other hand, Large Language Models (LLMs) excel in general human-machine interaction and ease of use, but they often lack the precise domain-specific knowledge necessary for accurate, in-depth code analysis.

To address these limitations, TNO-ESI and Philips are collaborating to develop a hybrid approach that combines traditional static analysis with the capabilities of LLMs. By applying parser-based static analysis, detailed information from the code is extracted and stored in a graph database. LLMs are then enabled to interact with this graph, delivering natural language responses that are both accurate and comprehensive. This hybrid method showcases how LLM accuracy can be enhanced by leveraging traditional static analysis, while traditional software engineering techniques benefit from the abstraction capabilities and user-friendliness of LLMs. As a result, users can obtain insights that neither static analysis tools nor LLMs could independently provide.
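The pipeline can be sketched in miniature: a parser extracts facts from the code into a graph, and a query layer retrieves precise answers from that graph to ground a natural-language response. This is a minimal illustration only; the actual project uses a dedicated parser and graph database, and all function names below are invented. Here Python's own `ast` module plays the parser and a dictionary plays the graph:

```python
import ast
from collections import defaultdict

SOURCE = """
def load_image(path):
    return read_file(path)

def read_file(path):
    pass

def process(path):
    img = load_image(path)
    return img
"""

def build_call_graph(source: str) -> dict[str, set[str]]:
    # Static-analysis step: parse the code and record caller -> callees,
    # mimicking parser-based extraction into a graph database.
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for child in ast.walk(node):
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    graph[node.name].add(child.func.id)
    return dict(graph)

def callers_of(graph: dict[str, set[str]], target: str) -> set[str]:
    # Query step: the kind of precise fact an LLM could retrieve from
    # the graph and phrase as a natural-language answer.
    return {fn for fn, callees in graph.items() if target in callees}

graph = build_call_graph(SOURCE)
print(callers_of(graph, "load_image"))
```

The division of labor is the point: the graph guarantees that the facts are accurate, while the language model contributes abstraction and an accessible interface on top of them.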

More information

Join us at the Bits&Chips event on October 10th, 2024, for an insightful presentation by Nan Yang (TNO-ESI) on "Leveraging large language models for legacy software". Don’t miss this opportunity to explore exciting new ideas and innovations.

Go to this event page

References

1. The $8.5 Trillion Talent Shortage (kornferry.com)

2. Seizing the AI and automation opportunity: The moment is now (IBM)

Report

  • LLM4Legacy Study Report 2023, see link below

Go to this report