For Katja Rausch, the fact that the giants of generative artificial intelligence have signed the European text is an act of ethics-washing: their systems have been deemed compliant by default. Photo: Nader Ghavami/Archives

The new chapter of the AI Act on generative artificial intelligence has finally been “well regarded” by the big players in the sector. But that’s a “twist,” says the director of the House of Ethics, Katja Rausch: it amounts to a favour from the AI Office of the European Commission. Read our hair-raising interview.

Paperjam: What do you think of the text? What are its strengths and weaknesses?

Katja Rausch: When Brussels began drafting and negotiating the AI Act five years ago, the legislators tried to come up with a general text that would strike the right balance between the benefits of AI-driven innovation and all the risks it would bring. The result is a so-called “risk-based” text, with AI systems categorised into four types of risk: minimal or no risk (permitted); “transparency” risk such as chatbots (permitted with information obligations); high risk (permitted with compliance obligations, e.g. the “conformity assessment procedure” required for CE marking); and unacceptable risk (prohibited, e.g. facial recognition, social scoring).

And then there was the shock of the Draghi report, “The Future of European Competitiveness,” which concluded that Europe was lagging behind in the digital revolution and even holding back innovation with its hundred-odd digital laws, as catalogued by Kai Zenner.

As with any piece of legislation, the positive aspects can easily turn into disadvantages depending on the economic context or one’s ethical imperatives. Nevertheless, one of the major and undoubtedly positive aspects of the AI Act is its initial vision. The regulators wanted to express a European vision by banking on a human-centric approach to AI, one incorporating democratic values, respect for human rights and fundamental freedoms.

In theory that’s a godsend for Europeans, but on the ground in 2025 it’s a completely different story. In an ultra-competitive context with governments all vying for leadership in AI and with the European Commission aspiring to make the EU the world leader in artificial intelligence (in response to competition from the United States and China), the “AI Continent” action plan was published on 9 April 2025 to recalibrate the AI Act. The 62 mentions of the word “innovation” set the tone: fundamental values and human rights have been subtracted in favour of a semantic field around productivity and competitiveness. Why is this? To free up innovation by simplifying the AI Act (translation: deregulation) so that it is “functional in practice” (translation: competitive). Soft (de)regulation? But regulation nonetheless.

Another positive side is its egalitarian approach. It is a text aimed at all players regardless of their size, as it is based on the risk that AI systems bring. But underneath this egalitarian regulatory umbrella lies the power and resources of the most powerful players, who prefer to pay hefty bills rather than bring their models into line. As for VSEs and SMEs, they feel overwhelmed (administratively and financially) by the multitude of reports, assessments and analyses, and get lost in the bureaucratic maze.

One of the major failings of the AI Act is its linear and non-systemic approach to AI as a disruptive technology. It was conceived in the traditional, compartmentalised, fragmented way that Anglo-Saxons describe as “siloed.” Systemic interconnections are lacking. A company can operate systems in risk categories one, two and three at once. The act lacks interconnection between systems and their developers, which would pin responsibility on the legal entities behind the most dangerous or damaging models. And it lacks interconnection between the very concepts tied to the notion of risk.

We cannot talk about risks as a generic category without nuancing between damage and violation, especially when we are talking about systemic risks to safety, health or human rights.

For systemic risks, damage is devastating and latent. And it is the notion of latency that is important: the invisible but structural damage. This damage is not immediately visible, as it develops in the societal water table, underground, unlike bias, discrimination, fraud, cyber attacks or mental violence, which are quantifiable, visible and measurable.

Can we add blocks to the act ad infinitum until the text becomes indigestible and impractical? Especially when thinking about agentic AI that runs on automation without human intervention or robotic physical AI?

In a deep tech context where the multiplicity of technologies requires a multi-layered interconnected and decentralised framework, the non-systemic approach of the AI Act is problematic.

The AI Act also affects the very notion of governance. Traditional governance is no longer possible with future technologies such as quantum or decentralised architectures. Dynamic, decentralised governance, better able to keep up with the pace and velocity of technological change, will be the future form.

A final failing concerns the types of AI not covered by the AI Act, namely autonomous weapons, autonomous vehicles or surveillance systems produced in Europe and sold outside Europe. It is hard to see how a manufacturer like Tesla, which sells its Tesla Autopilot (an advanced driver assistance system, or ADAS), can escape the AI Act. According to a recent study by the National Highway Traffic Safety Administration in the United States, the accident rate for Tesla cars is double the average and the number of deaths has reached 713, a record across all categories of car.

So, deregulation as a solution to accelerate innovation in Europe?

Certainly, but not blindly, as the most advanced forms of AI (generative AI, agentic AI, quantum AI) have ceased to be mere products and have transformed into systems.

Texts like the AI Act, while not perfect, must see the light of day to remind us that the most important thing for the progress and advancement of our societies must be to respect humans. That human rights and fundamental values must always be a priority. Not forgetting that these systems are first and foremost built to serve humankind, not the other way round.

What about the dance step of the big players who don’t want to take part in it… but do in the end?

Big Tech are champions of the regulatory and ethical twist. On 10 July 2025 (before the fateful date of 2 August, when the GPAI rules took effect), the newly-formed AI Office threw a lifeline to Big Tech and other generative AI providers: the General Purpose AI Code of Practice. It’s pompous rhetoric for a twist that hollows out the AI Act. All signatories to the code must commit in good faith to respecting copyright, transparency and security in the development of their models. Unsurprisingly, Big Tech were the first to sign: OpenAI, Microsoft, Anthropic, Google, Mistral AI, IBM, Humane Technology, Amazon…

Musk’s xAI signed only partially, and Meta refused to sign, explaining that Europe is currently on the wrong track. Funny, coming from the company behind Cambridge Analytica, the biggest data theft scandal, and a company (Facebook) dubbed a “digital gangster” in the 2019 report “Disinformation and ‘Fake News’” by the UK’s Digital, Culture, Media and Sport Committee.

So why did Big Tech sign? Quite simply because signatories to the code or harmonised (technical) standards will have a year’s reprieve before incurring penalties for non-compliance with the AI Act. They benefit from a “presumption of compliance” that their system complies with the AI Act, while being in flagrant violation in practice.

Not a victory for the collective, nor for innovation, but for lobbying. Virtue-signalling and ethics-washing in grand style. There is no code of ethics or code of good practice that Big Tech has not signed up to. Let’s just remember that the Holy See’s “Rome Call for AI Ethics,” meant to regulate artificial intelligence, was signed by the same players while they nonchalantly continued their unethical and unaccountable practices.

What about ethics in all this?

We always talk about ethics applied to systems, but ethics is first and foremost a matter for humans. Experts, regulators, Big Tech and governments all have demands, requirements, plans and recommendations. Yet citizens, the main stakeholders, seem absent, even muzzled. Everyone speaks on behalf of… acts for the good of…

The Computer and Communications Industry Association, active and powerful in Europe and the UK, writes: “Allowing competition to flourish in the AI market will be more beneficial to European consumers than additional regulation prematurely being imposed, which would only stifle innovation and hinder new entrants.” Not only Big Tech, but also governments are paternalising and deciding for their citizens how to manage an asset that does not belong to them: citizens’ data.

One study looked at the voice of the people, asking citizens in six countries (Brazil, Denmark, Japan, the Netherlands, South Africa and the US) whether they want regulated AI. The results are unequivocal across all countries: citizens want regulated AI. They want AI that respects, first, the protection of human rights (in all countries), followed by economic wellbeing and national security.

It’s not just Europeans who are asking for this, but Americans too. The fact is that there is a general feeling of frustration at being reduced to powerless spectators in this frantic political-technological farandole between Big Tech and governments.

However, without buy-in and adoption from users, no system will be productive or effective. The example of the elektronische Patientenakte (ePA, the electronic patient record) in Germany is instructive. Lacking adoption by family doctors, patients and the elderly (who feel excluded because it is too complicated), the ePA is currently being described as a flop by the German press. Or take Zuckerberg and his Metaverse: another resounding flop for lack of adoption by users.

How can ethics provide a solution? And what form of ethics? Beyond regulations, ethics is about the meaning of actions: the why before tackling the how. And it’s this level of understanding of what’s at stake, of the latitude of risks and benefits, that makes people feel integrated and empowered. A collective, decentralised ethics, not top-down or authoritarian but participatory, where those concerned can exchange views, are properly informed and included, and can benefit from technologies with foresight or refuse harmful systems.

On the one hand, more power to the vox populi and the agens populi; on the other, more attention to developers who responsibly build architectures that protect those who use them, not business machines that colonise and exploit: architectures that serve humans instead of using them.

Is this possible? Yes! Switzerland has just demonstrated it by launching its own model, in the style of GPT-4, in 8B- and 70B-parameter versions (not the best performing, but collectively the most ethical): free, with ethics built into its design, 100% powered by green energy and multilingual (over 1,000 languages). It’s a powerful message and a resounding example of what national sovereignty really means, and of how to build trust. A model that pursues human values and bets on the collective rather than on breakneck technological acceleration. All on a decentralised, open-source infrastructure with transparent data. A missed opportunity for many other European countries?

This article was originally published in French.