The discussions on the proposed adoption of a European Union Artificial Intelligence Act (AI Act) have elicited many concerns. On 8 December 2023, 70 civil society groups and 34 individual experts sent an urgent letter to the Council of EU Member States, the European Commission and the European Parliament, urging them "Do not trade away our rights!" in the final trilogue (negotiation) on the landmark Artificial Intelligence (AI) Act.
The European Union's ambition with this proposal presents an opportunity to further strengthen the protection of people's rights. As one of the first major regulatory attempts regarding AI, the AI Act will not only have an impact within the European Union and its Member States but will also influence other regulatory frameworks on AI around the world.
While it is true that the United Nations and the European Union share the same commitment to respecting, protecting and promoting human rights, international human rights law must be the guiding compass at a time when AI applications are becoming increasingly capable and are being deployed across all sectors, affecting everyone's lives. By firmly grounding new rules for the use of AI in human rights, the European Union can strengthen human rights protection in the face of the ever-increasing use of AI applications in our everyday lives.
Africa Tech for Development Initiative-Africa4dev has deemed it worthwhile to offer a human rights-based analysis of the EU AI Act, together with relevant recommendations on the way forward. These are analyzed under five (5) headings, discussed seriatim.
1. High Risk classifications
• The determination of risk under the AI Act should relate to the actual or foreseeable adverse impacts of an AI application on human rights, and should not be exclusively technical or safety-oriented.
• The AI Act must ensure that AI systems carrying significant risks for the enjoyment of human rights are classified as high-risk, with all the associated obligations for their providers and users.
• Companies should not be allowed to self-determine that their AI systems fall outside the high-risk category, and thereby opt out of the more stringent requirements applicable to high-risk systems.
• Such a model of self-assessment of risk would introduce considerable legal uncertainty, undercut enforcement and accountability, and thereby risk undermining the core benefits of the AI Act.
2. Stringent limits to the use of biometric surveillance and individualized crime prediction
• Africa4dev supports a ban on the use of biometric recognition tools and other systems that process people's biometric data to categorize them based on the color of their skin, gender, or other protected characteristics.
• Africa4dev supports bans on AI systems that seek to infer people's emotions, individualized crime prediction tools, and untargeted scraping tools used to build or expand facial recognition databases.
• Africa4dev agrees that such tools suffer from dangerous accuracy issues, often due to a lack of scientific grounding, and are deeply intrusive. They threaten to systematically undermine human rights, in particular due process and judicial guarantees.
3. Fundamental rights impact assessments
• Africa4dev expresses its strong support for the European Parliament's proposal for comprehensive fundamental rights impact assessments (FRIA). This is ideal in every sense and would help technology and AI systems advance human rights rather than undermine them.
• A meaningful FRIA should cover the entire AI life-cycle and should be based on: clear parameters for assessing the impact of AI on fundamental rights; transparency about the results of the impact assessments; participation of affected people; and involvement of independent public authorities in the impact assessment.
4. Technical standards
• The role of standard-setting organizations as envisaged in the drafts of the AI Act is complex. There needs to be a more unambiguous and precise role for standard-setting organizations.
5. Holistic approach to AI harms in all areas
• The exemption proposed by the Council of EU Member States for AI systems developed or used for national security purposes, together with the exceptions for law enforcement and border control, would carve out fields of application where AI is widely used, where the need for safeguards is particularly urgent, and where there is evidence that existing uses of AI systems disproportionately target individuals in already marginalized communities.
• Exempting and excepting these areas would create a substantial and extremely concerning gap in human rights protection under the AI Act.
Conclusion
A holistic approach to ensuring that human rights tenets are protected under the new EU AI Act is not only desirable but crucial. In the wake of human rights violations worldwide, we cannot afford to usher in an era in which AI is granted unlimited power over human rights, as this would amount to subordinating human rights to the whims and caprices of AI systems. The EU must not only lead by example in creating an AI Act but must do so in a manner that protects human rights, so that other countries and continents may imitate and adopt it.