As computer scientists, we usually put a lot of effort into delivering robust AI systems. Robustness is of course a must for successful AI. However, we must also evaluate whether the AI system is #trustworthy: apart from being robust, it should be lawful and ethical. These are the three components an AI system must meet in order to be trustworthy. They were presented already in 2019, when the High-Level Expert Group on AI published the Ethics Guidelines for Trustworthy Artificial Intelligence (https://lnkd.in/g4P2PFrb).
At the same time, it remains challenging in practice for individual teams to ensure that their AI systems respect ethical principles and guidelines. Thus, I’m really happy that I came across the Z-Inspection® method (https://z-inspection.org/). In recent days, Katarzyna Kaczmarek-Majer, CEO of the ITP Foundation, participated in the First World Z-Inspection® Conference (Ateneo Veneto, Venice, Italy, March 10–11, 2023). It was a truly inspiring meeting.
The foundation of the Z-Inspection® process is the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. This can only be achieved by interdisciplinary teams.
We are about to start a Trustworthy AI for Healthcare Lab in Poland. Stay tuned! Assessing trustworthy AI systems is always a process, and as with every journey, it needs to start with a first step. We are therefore now forming a healthcare use case that will be our first hands-on experience with Z-Inspection®. Thank you Roberto V. Zicari for the inspiration to start this journey! To read more on how to assess trustworthy AI in practice, you can download the report: