How to obtain trust in systems with AI components?

Beyond the technical complexity of integrating AI into high-tech systems, we have to address the complex task of making the resulting system acceptable to its end-users. This track discusses the possible routes and roadblocks of getting AI explained and accepted. The presentations are based on industrial use cases, including healthcare and automotive applications.

Moderation: Michael Borth

Chair: Robert-Jan de Pauw, Philips

How to obtain trust in systems with AI components?

AI has been compared to plastics and electricity by experts to convey its potential to transform and improve the lives of millions of people. As an example, research performed at Stanford (Sebastian Thrun) has shown that AI / deep learning consistently outperforms human physicians in diagnosing skin disease. A diagnosis that would normally cost thousands of dollars is reduced to the cost of a single computation (dollars or cents), making it accessible to a large part of the world population.

However, AI performance depends greatly on the availability of high-quality data. Many applications depend on data originating from and/or containing human subjects. In Europe, the GDPR places restrictions on how data is acquired, what it is used for, how long it can be retained, etc. Furthermore, recent events involving Facebook and Cambridge Analytica have increased scrutiny from authorities worldwide. Acquiring high-quality data can be a huge challenge.

In this talk, I want to discuss the acceptability of AI from the perspective of data acquisition and management. How can we maintain a high pace of innovation while ensuring that the means by which we create AI are acceptable? What types of systems and processes are needed that comply with regulations and are ethically sound, while sacrificing as little as possible in terms of data accessibility and speed of innovation? What could be our (the industry's) role?


Michael Siegel, OFFIS

How to engineer trust into AI-based systems – an automotive perspective

The proliferation of AI across all kinds of services and products is considered a game changer for many industries – and for society and individuals. Representatives from psychology, philosophy, sociology, legal science, political science, etc. are currently analyzing the impact of AI on society and individuals. What makes AI-based systems acceptable to society and the individual? In this talk we look at the different aspects that influence acceptance of AI, with a closer look from the perspective of the automotive industry. We address the question of whether it is possible to engineer trust into AI-based automotive systems and examine current basic research topics that address future acceptance of AI. We claim that we will see a paradigm change in systems engineering to build acceptable AI-based safety-critical systems.


Marc Steen, TNO

Human-Centered Artificial Intelligence

“Making systems that are acceptable to end-users.” Is that the best we can aspire to? We create systems that people “accept” in their role of “end-users”? In my talk, I will propose to reverse this: let’s start at the human side; let’s enable and support people to live meaningful and fulfilling lives; and let’s use technologies as tools to empower them. I will draw from Virtue Ethics (which deals with creating conditions for ‘the good life’) and the Capability Approach (which views technologies as tools to extend human capabilities).

