In February 2023, a post on X by Elon Musk, since deleted, went viral. Within a few hours, the American billionaire had racked up tens of millions of views, yet there was nothing particularly exceptional about the content. It took only a few days to discover that X’s algorithm had been manually modified to boost the visibility of Musk’s tweets: a field in X’s code (“author_is_elon”) provided for this special treatment.
This is no rumour: the information is in X’s own code, published on GitHub. This move, which is supposed to illustrate the transparency of X, formerly Twitter, raises more questions than it answers:
—Why is only this fragment published?
—What other hidden rules apply to other accounts?
—How are the community notes that are meant to correct misinformation actually moderated?
—And why do some voices, often moderate, seem invisible, while others, often radical, are omnipresent?
This is not an isolated case. Far from it. It illustrates a wider truth: platforms decide what we see, without our knowing how or why. Behind smooth, colourful interfaces, algorithms rule our attention. And despite promises of transparency, these systems remain black boxes: complex, biased and largely beyond public scrutiny. The European Union’s Digital Services Act (DSA) aims to impose rules. But where do we really stand?
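To make this concrete, here is a deliberately hypothetical Python sketch, not X’s actual code, of what a hard-coded author flag inside a ranking pipeline can look like; the account name, boost value and scoring logic are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    base_score: float  # relevance score from the normal ranking model

# Hypothetical hard-coded flag singling out one account inside the ranking code.
AUTHOR_BOOSTS = {"special_author": 1000.0}  # invented multiplier, illustration only

def rank(posts: list[Post]) -> list[Post]:
    """Order posts by score, applying any per-author multiplier."""
    def boosted(post: Post) -> float:
        return post.base_score * AUTHOR_BOOSTS.get(post.author, 1.0)
    return sorted(posts, key=boosted, reverse=True)

feed = rank([Post("alice", 0.9), Post("special_author", 0.002)])
print([p.author for p in feed])  # the flagged account comes first despite a far lower base score
```

Whether and how such a flag actually influences production ranking is precisely what a published fragment like this does not show.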
Algorithmic transparency: what are we really talking about?
Publishing a few lines of code on GitHub is not enough. Transparency, real transparency, implies at least three levels:
Public understanding: why did I see this post, this ad, this message?
Access for researchers: how does the machine actually work, and on what data?
Auditability for regulators: can they inspect, verify and sanction in cases of manipulation?
Today, none of these three levels is fully achieved. Platforms choose and control what they show.
What X (and others) show… and what they hide
The partial publication of X’s code has highlighted an uncomfortable reality: the internal rules are not the same for everyone. If the boss can boost his own posts with a line of code, who else benefits? And how are rankings, demotions and suspensions decided?
The logic remains the same on other platforms:
—Facebook and Instagram apply special internal classifications to influential accounts (the “cross-check” programme).
—TikTok has been accused of favouring certain creators via manual adjustments (“heating”).
—YouTube never specifies what triggers a demonetisation or “shadow banning.”
And when participatory verification systems come into play (like X’s community notes), we discover even more opaque algorithmic rules, where visible notes are filtered according to logic that only in-house engineers can explain.
What’s missing for true transparency
To get out of opacity, it’s not enough to open a skylight. You have to:
—allow regular independent audits
—give researchers real access, via secure data interfaces
—explain to users why particular content has been shown to them (or not)
—offer a real choice between algorithmic and chronological feed (a minimal sketch of these last two points follows below)
—document cases of automatic moderation and give an explicit right of appeal
These measures are not utopian. They already exist in other sensitive sectors (banking, health, education). All that’s missing is the political will to impose them on the digital sector.
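As a purely illustrative sketch of the last two points in the list above (explaining why an item was shown, and offering a chronological option), here is what this could look like at the level of a feed function; the field names and the scoring rule are invented, not any platform’s real model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    item_id: str
    published: datetime
    predicted_engagement: float  # output of some engagement model (assumed)
    followed_author: bool        # does this user follow the author?

def build_feed(items: list[Item], mode: str = "ranked") -> list[tuple[Item, str]]:
    """Return (item, explanation) pairs, in chronological or ranked order."""
    if mode == "chronological":
        ordered = sorted(items, key=lambda i: i.published, reverse=True)
        return [(i, "Shown because it is recent (chronological feed).") for i in ordered]
    # Ranked mode: an invented scoring rule standing in for the real model.
    def score(i: Item) -> float:
        return i.predicted_engagement + (0.5 if i.followed_author else 0.0)
    ordered = sorted(items, key=score, reverse=True)
    return [
        (
            i,
            f"Shown because predicted engagement is {i.predicted_engagement:.2f}"
            + (" and you follow the author." if i.followed_author else "."),
        )
        for i in ordered
    ]
```

The point is not the scoring rule itself, but that both the ordering mode and a plain-language reason for each item can be exposed to the user at negligible cost.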
The DSA, a European step forward that lacks ambition
The Digital Services Act, applicable to very large platforms since August 2023 and to all platforms since February 2024, is a major step forward. It provides for:
—access to data for accredited researchers
—a requirement for external audit
—regular transparency reports
—a duty to justify automated decisions
—a duty to offer a non-algorithmic feed
But:
—platforms drag their feet
—researchers struggle to access the promised data
—audits remain too rare and discreet
—and users are often unaware of their rights
The DSA is a good start, but without concrete means of control and sanction it remains partly theoretical.
The myth of transparency versus business
When they have to explain themselves, the platforms almost always put forward the same economic argument: publishing algorithms would harm innovation, security and competitiveness.
This is partly true:
—Malicious groups could game the algorithm (hashtags, formats, bots), which would make the problems worse.
—Competitors could copy parts of the ranking logic.
—Dubious practices could be revealed (preferential treatment, internal bias).
But this fear does not justify total opacity. Models of responsible transparency exist in other sectors, including banking, one of the most regulated in the world.
Let’s draw a parallel: banks use algorithms to grant or refuse credit. Here is what is already imposed on them, without any requirement to publish their full source code:
—They must explain what criteria they use: income, debt ratio, banking history.
—They must provide the customer with a clear justification in the event of refusal.
—They can be monitored by an independent regulator, which has access to the internal models and can check that there is no discrimination.
—They are required to test their models regularly to ensure that they do not produce indirect discriminatory effects.
No one is asking a bank to publish its scoring model line by line. But they are being asked to make it understandable, contestable, verifiable and regulated. Why can’t we do the same with the algorithms of digital platforms?
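To make the parallel concrete, here is a minimal Python sketch of how a lender can return understandable reasons for a refusal without disclosing its full scoring model; the criteria and thresholds are invented for illustration.

```python
def credit_decision(income: float, debt_ratio: float, missed_payments: int):
    """Return a decision plus human-readable reasons, without exposing the model itself."""
    reasons = []
    if debt_ratio > 0.35:
        reasons.append(f"Debt ratio {debt_ratio:.0%} exceeds the 35% threshold.")
    if income < 20_000:
        reasons.append("Declared income is below the minimum required.")
    if missed_payments > 0:
        reasons.append(f"{missed_payments} missed payment(s) in banking history.")
    decision = "refused" if reasons else "granted"
    return decision, reasons or ["All criteria met: income, debt ratio, banking history."]

decision, reasons = credit_decision(income=25_000, debt_ratio=0.42, missed_payments=1)
print(decision)    # refused
for r in reasons:  # the justification the customer is entitled to
    print("-", r)
```

The customer learns which criteria drove the decision and can contest them, while the model itself stays private; that is the standard the platforms could be held to.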
AI is making the problems worse
And all the while… algorithms are skewing the public sphere. The effects are already visible, massive and sometimes worrying:
—polarising content more visible than nuance
—extremist groups more coordinated and noisy
—moderate or contradictory content less well exposed
—arbitrary or inconsistent moderation decisions
—false impressions of consensus created by algorithmic bubbles
What we see is not reality, but what generates the most engagement. And in this logic, the loudest voices, often the most extreme, take up an inordinate amount of space.
A less publicised aspect: the merger of X and xAI into a new US holding company complicates matters further for Elon Musk’s platform. Social data feeds the models of Grok, its chatbot, which is integrated into X Premium and will soon be responsible for classifying, and even moderating, content. Yet no legal framework guarantees the transparency of these interactions: the DSA does not yet cover AI, which makes any external monitoring very difficult.
What to do, individually or collectively
As users, we should:
—Demand the chronological feed where it exists.
—Point out inconsistencies and ask for explanations.
—Use the personalisation options (where they exist).
—Read community notes critically.
As citizens we should:
—Question elected representatives, MPs, MEPs.
—Support NGOs that defend digital transparency.
—Call for the rigorous application of the DSA and its strengthening.
—Promote ethical, open source or cooperative alternatives.
The algorithm is political. What we see, what we believe, what we become: all of this is increasingly captured by opaque systems designed to maximise attention, not truth. And that is precisely why we should be demanding more of the regulator, or regulators.