The AI Act has been a long time coming. Whilst it is a landmark piece of legislation, it unfortunately fails to meet the bar on human rights protections, writes Laura Lazaro Cabrera.
Laura Lazaro Cabrera is the Counsel and Director of the Equity and Data Programme at the Centre for Democracy & Technology (CDT).
It can’t be denied that this is a historic moment, both in the EU and globally: the EU has agreed a law to govern artificial intelligence, the first of its kind in the world. It’s a long-awaited, hard-fought-over and lengthy piece of legislation. But for CDT Europe, it is a mixed bag when it comes to protecting human rights – one of its key aims, after all.
The AI Act’s significance is clear: it will become the global benchmark for AI regulation in what has become a race against the clock, as lawmakers grapple with a fast-moving technology with far-reaching impacts on our basic human rights.
AI is increasingly being used in areas of profound importance to people’s lives: deciding which school your child may go to, helping employers screen candidates, processing asylum cases… the list goes on.
Legislation is much needed, and the stakes are extremely high. When AI’s deployment goes to the heart of key human rights, such as the rights to privacy, freedom of assembly and freedom of expression, lawmakers have had to strike a difficult balance.
Those of us, such as CDT Europe, who have been advocating hard for human rights to be at the core of the AI Act had high hopes, but the final text gave away too much in the last-ditch negotiations.
Whilst we can rightly celebrate that privacy and other fundamental rights are foregrounded in the law, there are too many exemptions that could allow harmful AI to pose serious risks to citizens – and, all too often, to those in vulnerable situations.
One glaring failure, in our view, is that whilst the Act brings in important limitations on the use of AI by law enforcement, lawmakers did not heed our call, and that of other civil society organisations, for a total ban on untargeted facial recognition.
This goes to the heart of what kind of society we want to live in. The limitations on live facial recognition apply only to law enforcement use in publicly accessible spaces, and explicitly exclude borders, which are known sites of human rights abuse.
This is a law that is supposed to protect people’s most basic human rights, and yet through its exemptions it seems to allow the most nefarious kind of AI: AI that invades the privacy of the most marginalised and vulnerable groups.
As always, the devil is in the detail, and the many exemptions to otherwise laudable provisions in the Act threaten to undermine its purpose. One obvious example is the exemption for national security.
The scope for misuse here is significant: one could easily imagine a scenario in which law enforcement claims that a use of AI is in the interests of national security, and it thereby becomes exempt. Similarly, the Act’s ban on emotion recognition applies only in education and the workplace, allowing emotion recognition to be deployed elsewhere, such as at the border.
One big “win” for civil society was the Fundamental Rights Impact Assessments (FRIAs): deployers of high-risk AI will be obliged to conduct these assessments. But – and it’s a big “but” – the obligation does not always extend to the private sector, so only those deploying AI in the public sector, along with a narrow subset of private companies, will have to assess the risks to human rights – leaving many people unprotected.
Under the AI Act, for example, a company would have no obligation to carry out a FRIA when deploying AI in one of its warehouses to increase the pace of work, even if that presents risks to workers.
On top of this, it’s not actually clear whether these FRIAs will be more than a box-ticking exercise: nothing in the Act makes the deployment of a high-risk AI system conditional on the FRIA being reviewed or approved by the authorities. So once it has been carried out and reported on, the FRIA does not seem to have any meaningful impact on the roll-out of a high-risk AI system. It’s obvious why human rights advocates’ fears about harmful AI are not assuaged by this law.
As the dust settles and everyone turns towards the implementation of the Act, we face the difficult task of unpacking a complex, lengthy and unprecedented law. For us, the key to its success – and to overcoming its pitfalls – will be close coordination between those implementing the law, experts and civil society. That is the only way to ensure that, in practice, the Act is consistent with its own articulated goals: protecting fundamental rights, democracy and the rule of law.