20 Feb Second NOTIONES Whitepaper
In the context of the global race for artificial intelligence, the European Union aims to strengthen its competitive position with respect to its main competitors – the United States and China – by moving ahead of them in defining a comprehensive regulatory framework for trustworthy AI that could become a global standard. Indeed, acting as a tech regulator, the European Commission believes that the Artificial Intelligence Act will become an international point of reference for similar legislation, thanks to its balanced approach between the protection of fundamental rights and public security. Structured around a risk-based approach, the proposed regulation introduces obligations in proportion to the potential harmful impact of AI applications on humans, with riskier AI systems subject to stricter obligations.
AI systems are powerful tools at the disposal of security practitioners and citizens, but it is necessary to safeguard accountability and prevent misuse that could endanger national security and respect for human rights. To this end, Europol established the AP4AI Project in February 2022.
The EC proposal for the Artificial Intelligence Act of April 2021 does not address the issue of human training on AI. The proposal simply states that training needs to be adequate for the task, without establishing minimum technical requirements or setting up specific training structures. Therefore, this fundamental aspect will not be regulated in the European framework – meaning that the training of personnel responsible for supervising AI systems may be informally delegated to the structures and initiatives of the Member States.
The NATO approach was examined to explore possible parallels between defence and civil security with regard to the Artificial Intelligence preparedness of human operators. However, the related documents focus on the emergence of new technologies rather than on how military and civilian personnel use them.
The civil security community of NOTIONES expresses concern about this significant gap, especially since the future EU framework on Artificial Intelligence aims to become an international standard. It therefore advises promoting new tools, strategies and practices to improve the knowledge, expertise and capability of personnel in the supervision of AI systems used for Security, Intelligence and Law Enforcement. From a long-term perspective, it is crucial that the EU – in its search for strategic autonomy – continues to allocate public resources for the development of a leading “AI made in Europe” and fosters a European environment that stimulates private investment.
The white paper is now also available on Zenodo.