Can we trust software systems that learn?

Lugano, Piazza Riforma, 6900 Lugano

03.09.2019, 11:40 - 12:05




Lugano Living Lab, Città di Lugano

"Deep" artificial neural networks have seen an incredible development in recent years and have achieved high performance in performing traditionally difficult tasks for computers, such as the ability to drive a vehicle independently or to express themselves in natural language. However, the mode of operation of this type of software is radically different from that of traditional software, as the behavior of the software is learned from the data instead of being programmed. It is therefore natural to wonder if we can trust such software systems.

The European project Precrime is inspired by a recurring dream of science fiction: arresting criminals before they commit their crimes. The project's researchers aim to identify weak signals that predict unwanted behavior in an artificial neural network, so that they can intervene on the system and correct the error before it causes damage. Applications of this innovative software-testing technology range from robotics and self-driving cars to automated trading systems, virtual doctors, and customer-service chatbots.

Prof. Paolo Tonella, Software Institute at the USI Faculty of Informatics

USI - Università della Svizzera italiana
Faculty of Informatics
