Following a revised draft that drew feedback from more than 500 contributors, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) has published a set of guidelines for developing what it calls “trustworthy AI”. The report outlines three hallmarks of trustworthy artificial intelligence: it should be lawful, ethical, and robust from both a technical and a social perspective.
So, what does this mean for companies and organisations working on or with AI technologies? They might be interested to know that the guidelines focus primarily on the ethical aspects of AI and are concerned, in particular, with how advances in the field could affect vulnerable groups such as children, persons with disabilities, and marginalised or under-represented groups.
The report also indirectly addresses the problem of “algorithmic injustice”, stating that stakeholders should “acknowledge that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure”.
Crucially, it also calls for greater transparency in AI research and for opening the field to questions from the wider public: “Beyond developing a set of rules, ensuring trustworthy AI requires us to build and maintain an ethical culture and mindset through public debate, education and practical learning.”
[ ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai ]