News and Views
EU launches next steps in building trust in Artificial Intelligence
The European Commission has launched its next steps for building trust in artificial intelligence by taking forward the work of its High-Level Expert Group on AI.
The Vice-Chair of the expert group is Professor Barry O’Sullivan of the School of Computer Science at UCC, and today the Commission launched a pilot phase to ensure that the ethical guidelines for artificial intelligence (AI) development and use can be implemented in practice.
The ethical dimension of AI is not a luxury; it is a requirement
The European Union aims to increase public and private investment in AI to at least €20 billion annually over the next decade, to make more data available, to foster talent and to ensure trust.
Announcing the guidelines, the Vice-President for the Digital Single Market Andrus Ansip said: “I welcome the work undertaken by our independent experts. The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
AI can benefit a wide range of sectors, such as healthcare, energy consumption, car safety, farming, climate change and financial risk management. AI can also help to detect fraud and cybersecurity threats, and it enables law enforcement authorities to fight crime more efficiently. However, AI also brings new challenges for the future of work, and raises legal and ethical questions.
Commissioner for Digital Economy and Society Mariya Gabriel added at the launch: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements into practice and at the same time foster an international discussion on human-centric AI.”
Professor Barry O’Sullivan, Vice-Chair of the High-Level Expert Group on AI, stated: “The ethics guidelines for trustworthy AI are the result of a comprehensive and intensive process involving 52 leading experts over the past nine months. The outcome is unique in the world as it works from fundamental rights, to a set of four ethical principles, to seven key requirements for trustworthy AI, leading to a practical assessment list to operationalise the framework. We now begin the piloting phase of the assessment list across Europe.”
The Commission doubled its investments in AI in Horizon 2020
The Commission doubled its investments in AI under Horizon 2020 and plans to invest €1 billion annually from Horizon Europe and the Digital Europe Programme, notably in support of common data spaces in health, transport and manufacturing; large-scale experimentation facilities such as smart hospitals and infrastructure for automated vehicles; and a strategic research agenda.
Tweeting from the event, Prof. Barry O’Sullivan (@BarryOSullivan, 9 April 2019) wrote: “Mr Andrus Ansip @Ansip_EU, Vice President of the European Commission giving a great opening speech at #DigitalDay2019 #TrustworthyAI #AI. Our Ethics Guidelines for Trustworthy AI will be formally launched this morning.”
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus-building for human-centric AI.
Following its European strategy on AI, published in April 2018, the European Commission set up the High-Level Expert Group on AI, which consists of 52 independent experts representing academia, industry, and civil society. They published a first draft of the ethics guidelines in December 2018, followed by a stakeholder consultation and meetings with representatives from Member States to gather feedback.