Do you trust artificial intelligence?
In the era of ChatGPT, the public debate about the trustworthiness of AI is growing. Trust in AI plays an especially important role in medical technology: here it is about medical support, not about whether to believe an AI-generated text post. A physician undergoes years of extremely demanding training, supervised by experienced colleagues, and must prove themselves on many patients before being allowed to treat independently. As the potential of AI grows, so do the possible uses of AI-assisted applications in medical technology, which take over more and more tasks previously performed by humans.
From a technical perspective, this raises the bar for AI development. That is a logical step: no patient wants to be treated by a physician who has little experience with real patients, and the same principle applies to AI. The performance of an AI model usually increases with the amount of training data. At the same time, the impact of a possible misdiagnosis by an AI must be taken into account and considered during development to minimize the risk of potential harm. These are just examples of current requirements for AI development, all aimed at bringing the trustworthiness of an AI application to a level acceptable to both physician and patient.
At Chimaera, we continuously track the latest standards in AI development and help our customers develop innovative AI-based products with high adoption. We provide individual software services for medical and industrial imaging. Our experts use the latest AI and machine learning technologies, e.g., to automate the quantitative analysis of image data or to extract image information for CAD planning. Transparent solutions following good scientific practice are our daily routine.
In her bachelor thesis at the Chair of IT Management at FAU, Jana Kade analyses the status quo of the trustworthiness of AI applications in a medical industry company against the background of current EU requirements and guidelines. To this end, she interviewed our CEO Dr.-Ing. Marcus Prümmer and Dr.-Ing. Nina Ebel from our AI systems sales team.
We enjoyed discussing challenges and limitations of AI with Jana and are pleased with her successful thesis. Chimaera supports young scientists and cooperates closely with FAU Erlangen-Nürnberg.
Dr.-Ing. Marcus Prümmer, CEO of Chimaera
Guidelines for trustworthy AI from the European Commission
- Human agency and oversight
AI systems should serve just societies by supporting human agency and upholding fundamental rights. Under no circumstances should they restrict, limit, or misdirect human autonomy.
- Technical robustness and safety
Trustworthy AI requires algorithms that are safe, reliable, and robust enough to handle errors or inconsistencies at all stages of the AI system's lifecycle.
- Privacy and data governance
Citizens should retain full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency
Traceability of AI systems must be ensured.
- Diversity, non-discrimination and fairness
AI systems should take into account the full range of human abilities, skills, and requirements and ensure accessibility.
- Societal and environmental well-being
AI systems should be used to promote positive social change, sustainability, and environmental protection.
- Accountability
Mechanisms should be established to ensure responsibility and accountability for AI systems and their results.
Aspects of trustworthiness of AI at Chimaera
Trustworthiness of AI should be based on a risk-based approach. First of all, data quality is essential for the development of reliable AI applications: the performance of the AI depends entirely on it and directly affects the reliability and non-discrimination of the results. Chimaera has developed an internal quality process to ensure a balanced and representative data selection. Further measures for trustworthy AI are reproducibility and accuracy of the results, as well as robustness of the system against external influences. Finally, there is transparency of the applied methods: making the black-box nature of AI understandable allows the user to trust the system.
Jana Kade concludes: "If it is possible to fulfill all components of trustworthiness for AI, the relationship between humans and AI can fundamentally improve and harmonize. Once the 'ice has been broken' here, there is nothing standing in the way of innovative collaboration and the progressive further development of artificial intelligence on the one hand and humans on the other."
At a Glance
State-of-the-Art von vertrauenswürdiger künstlicher Intelligenz – basierend auf einer qualitativen Analyse der Chimaera GmbH
(State of the art of trustworthy artificial intelligence – based on a qualitative analysis of Chimaera GmbH)
Bachelor thesis by Jana Kade
Chair of IT Management (Prof. Dr. Michael Amberg)
School of Business, Economics and Society at Friedrich-Alexander-Universität Erlangen-Nürnberg