This report is the third in EIT Digital's Policy Perspective series and addresses the important question of how Europe should deal with Artificial Intelligence. It provides business and policy decision makers with a scenario-based impact assessment instrument for AI policy development.
The report explores the impact of Artificial Intelligence both in general and in four application domains strategic for Europe: Health, Manufacturing, Climate, and Mobility. In each of these areas, it identifies general as well as sector-specific opportunities for, and concerns about, the further deployment of AI. It concludes with an assessment of the impact on innovation potential, fairness, trust, and growth opportunities.
- To ensure effective policy in the area of AI, it is necessary to take context (the sectors of application) into account.
- Some firm regulation seems needed, for example concerning the handling of machine data, product/service liability for autonomous systems and data protection in the health sector.
- General regulation or policy measures can be considered in relation to algorithm transparency and explainability.
- Regulation should be adaptable and flexible, while minimising and mitigating risks and safeguarding human rights and European values.
The report is the result of a combined effort by five EIT Knowledge and Innovation Communities - EIT Manufacturing, EIT Urban Mobility, EIT Health, EIT Climate-KIC, and EIT Digital as coordinator.
In addition to the above-mentioned recommendations, the following principles can help increase the positive impact of AI applications:
1. Privacy management supported by AI
Data sovereignty for subjects could be supported by developing an ecosystem in which all data transfers take place with guaranteed GDPR compliance, audited by the regulator. Personal data would remain under the control of the subject and be stored in a cloud; it could then be transferred under a consent agreement or contract between the subject and other parties. Consent management would be done with AI support: a personal AI data management guard that automatically agrees to a data exchange if it is standard GDPR-compliant and/or has been agreed to once by the subject for that situation. When the AI agent is in doubt, the subject is warned and grants or withholds consent, thereby teaching the agent for the future. R&I projects could be stimulated to develop these ideas.
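The decision loop of such a "personal AI data management guard" can be sketched in a few lines. All names here (`ConsentGuard`, `DataRequest`, and the toy decision rule) are illustrative assumptions, not part of the report:

```python
# Hypothetical sketch of the consent workflow described above:
# auto-approve standard GDPR-compliant requests or purposes the
# subject has already agreed to once; otherwise ask the subject,
# and learn from a positive answer.
from dataclasses import dataclass, field

@dataclass
class DataRequest:
    requester: str       # party asking for the data
    purpose: str         # declared purpose of the transfer
    gdpr_standard: bool  # request matches a standard GDPR-compliant template

@dataclass
class ConsentGuard:
    # purposes the subject has already approved once
    approved_purposes: set = field(default_factory=set)

    def decide(self, request: DataRequest, ask_subject) -> bool:
        if request.gdpr_standard or request.purpose in self.approved_purposes:
            return True  # automatic consent, no interruption
        # doubtful case: warn the subject and let them decide
        granted = ask_subject(request)
        if granted:
            # remember the decision: the agent is "taught" for next time
            self.approved_purposes.add(request.purpose)
        return granted
```

In this sketch, the first non-standard request for a given purpose interrupts the subject; once approved, later requests for the same purpose are handled automatically.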
2. Counterfactual checks and Algorithm explainability
This is an approach proposed by Wachter et al., who identify three goals: to inform an affected person and help them understand why a particular decision was reached; to give them grounds to contest adverse decisions; and to show them what would have to change to achieve a desired outcome under the current decision-making model. This would require regulation as well as policy and R&I actions.
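The idea of a counterfactual explanation in this sense is the smallest change to an input that flips a model's decision. A minimal sketch, using an invented toy loan-scoring rule in place of a real trained model:

```python
# Toy illustration of a counterfactual explanation: the decision rule,
# feature names, and thresholds below are invented for this sketch.

def approves(income: float, debt: float) -> bool:
    # stand-in for a trained decision-making model
    return income - 0.5 * debt >= 50.0

def counterfactual_income(income: float, debt: float, step: float = 1.0) -> float:
    """Smallest income (searched in `step` increments) that turns a
    rejection into an approval, holding debt fixed."""
    needed = income
    while not approves(needed, debt):
        needed += step
    return needed

# An applicant with income 40 and debt 20 is rejected; the counterfactual
# tells them what income would have been sufficient, serving all three
# goals: understanding, contesting, and achieving a better outcome.
```

A real counterfactual search would minimise a distance over all features at once rather than scan a single one, but the principle is the same.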
3. Sandbox-based regulation
Regulation should encourage a sandbox approach when deploying AI, similar to phase I to III clinical studies of medical drugs. In a sandbox, potential issues can be identified, and trust built, before widespread deployment. The rules for a sandbox methodology could differ per sector; in the Health sector, for instance, a system close to the current stepwise approval processes for medicines and equipment would be appropriate.
Check out the Makers & Shapers video on Artificial Intelligence, featuring prominent experts from business and politics like Philips CEO Frans van Houten, European Parliament Member Eva Kaili, Element AI CEO Jean-François Gagné, and Zelros CEO Christophe Bourguignat.
Author - Peter Strempel