The European Union plans to beef up its response to online disinformation, with the Commission saying today it will step up efforts to combat harmful but not illegal content — including by pushing for smaller digital services and adtech companies to sign up to voluntary rules aimed at tackling the spread of this type of manipulative and often malicious content.
EU lawmakers pointed to risks such as the threat to public health posed by the spread of harmful disinformation about COVID-19 vaccines as driving the need for tougher action.
Concerns about the impacts of online disinformation on democratic processes are another driver, they said.
A new more expansive code of practice on disinformation is now being prepared — and will, they hope, be finalized in September, to be ready for application at the start of next year.
The Commission’s gear change is a fairly public acceptance that the EU’s voluntary code of practice — an approach Brussels has taken since 2018 — has not worked out as hoped. And, well, we did warn them.
A push to get the adtech industry on board with demonetizing viral disinformation is certainly overdue.
It’s clear the online disinformation problem hasn’t gone away. Some reports have suggested problematic activity — like social media voter manipulation and computational propaganda — has been getting worse in recent years, rather than better.
However, getting visibility into the true scale of the disinformation problem remains a huge challenge, given those best placed to know (ad platforms) don’t freely open their systems to external researchers. And that’s something else the Commission would like to change.
Signatories to the EU’s current code of practice on disinformation are:
Google, Facebook, Twitter, Microsoft, TikTok, Mozilla, DOT Europe (formerly EDiMA), the World Federation of Advertisers (WFA) and its Belgian counterpart, the Union of Belgian Advertisers (UBA); the European Association of Communications Agencies (EACA), and its national members from France, Poland and the Czech Republic — respectively, Association des Agences Conseils en Communication (AACC), Stowarzyszenie Komunikacji Marketingowej/Ad Artis Art Foundation (SAR), and Asociace Komunikacnich Agentur (AKA); the Interactive Advertising Bureau (IAB Europe), Kreativitet & Kommunikation, and Goldbach Audience (Switzerland) AG.
EU lawmakers said they want to broaden participation by getting smaller platforms to join, as well as recruiting all the various players in the adtech space whose tools provide the means for monetizing online disinformation.
Commissioners said they want to see the code covering a “whole range” of actors in the online advertising industry (i.e. rather than the current handful).
It’s certainly notable how thin the adtech representation on that list is: beyond industry bodies such as IAB Europe, the individual adtech companies whose tools actually move ad money around are largely absent.
In its press release today the Commission also said it wants platforms and adtech players to exchange information on disinformation ads that have been refused by one of them — so there can be a more coordinated response to shut out bad actors.
As for those who are signed up already, the Commission’s report card on their performance was bleak.
Speaking during a press conference, internal market commissioner Thierry Breton said that only one of the five platform signatories to the code has “really” lived up to its commitments — which was presumably a reference to the first five tech giants in the above list (aka: Google, Facebook, Twitter, Microsoft and TikTok).
Breton demurred on doing an explicit name-and-shame of the four others — who he said have not “at all” done what was expected of them — saying it’s not the Commission’s place to do that.
Rather he said people should decide among themselves which of the platform giants that signed up to the code have failed to live up to their commitments. (Signatories since 2018 have pledged to take action to disrupt ad revenues of accounts and websites that spread disinformation; to enhance transparency around political and issue-based ads; tackle fake accounts and online bots; to empower consumers to report disinformation and access different news sources while improving the visibility and discoverability of authoritative content; and to empower the research community so outside experts can help monitor online disinformation through privacy-compliant access to platform data.)
Frankly it’s hard to imagine who from the above list of five tech giants might actually be meeting the Commission’s bar. (Microsoft perhaps, on account of its relatively modest social activity vs the others.)
Safe to say, there’s been a lot more hot air (in the form of selective PR) on the charged topic of disinformation than hard accountability from the major social platforms over the past three years.
So it’s perhaps no accident that Facebook chose today to puff up its historical efforts to combat what it refers to as “influence operations” — aka “coordinated efforts to manipulate or corrupt public debate for a strategic goal” — by publishing what it couches as a “threat report” detailing what it’s done in this area between 2017 and 2020.
Influence ops refer to online activity that may be conducted by hostile foreign governments or by malicious agents seeking, in this case, to use Facebook’s tools for mass manipulation — perhaps to try to skew an election result or influence the shape of looming regulations. And Facebook’s ‘threat report’ states that the tech giant took down and publicly reported over 150 such operations over the report period.
Yet as we know from Facebook whistleblower Sophie Zhang, the scale of the problem of mass malicious manipulation activity on Facebook’s platform is vast and its response to it is both under-resourced and PR-led. (A memo written by the former Facebook data scientist, covered by BuzzFeed last year, detailed a lack of institutional support for her work and how takedowns of influence operations could almost immediately respawn — without Facebook doing anything.)
NB: If it’s Facebook’s “broader enforcement against deceptive tactics that do not rise to the level of [Coordinated Inauthentic Behavior]” that you’re looking for, rather than efforts against ‘influence operations’, it has a whole other report for that — the Inauthentic Behavior Report! — because of course Facebook gets to mark its own homework when it comes to tackling fake activity, and shapes its own level of transparency since there are no legally binding reporting rules on disinformation.
Legally binding rules on handling online disinformation aren’t in the EU’s pipeline either — but commissioners said today that they wanted a beefed up and “more binding” code.
They do have some levers to pull here via a wider package of digital reforms that’s coming (aka the Digital Services Act).
The DSA will bring in legally binding rules for how platforms handle illegal content and they intend the tougher disinformation code to plug into that (in the form of what they call a “co-regulatory backstop for the measures that will be included in the revised and strengthened Code”).
It still won’t be legally binding, but compliant platforms may earn wider DSA ‘credit’ for signing up. So disinformation muck-spreaders look set to have their arms twisted in a regulatory pincer move, with the code looped into the legally binding DSA.
Still, Brussels maintains that it does not want to legislate around disinformation.
The risk is that a centralized approach might smell like censorship — and the Commission sounds keen to avoid that charge at all costs.
The digital regulation packages the EU has put forward since the 2019 college took up its mandate aim generally to increase transparency, safety and accountability online, its values and transparency commissioner, Vera Jourova, said today.
Breton also said that now is the “right time” to deepen obligations under the disinformation code — with the DSA incoming — and also to give the platforms time to adapt (and involve themselves in discussions on shaping additional obligations).
In another interesting remark he also talked about regulators needing to “be able to audit platforms” — in order to be able to “check what is happening with the algorithms that push these practices”. Though quite how audit powers can be made to fit with a voluntary, non-legally binding code of practice remains to be seen.
Discussing areas where the current code has fallen short Jourova pointed to inconsistencies of application across different EU Member States and languages.
She also said the Commission is keen for the beefed up code to do more to enable and empower users to act when they see something dodgy online — such as by providing users with tools to flag problem content.
Platforms should also provide users with the ability to appeal disinformation content takedowns (to avoid the risk of opinions being incorrectly removed).
The focus for the code would be on tackling false “facts not opinions”, she emphasized, saying the Commission wants platforms to “embed fact-checking into their systems” and for the code to work towards a “decentralized care of facts”.
She went on to say that the current signatories to the code haven’t provided external researchers with the kind of data access the Commission would like to see — to support greater transparency into (and accountability around) the disinformation problem.
The code does require either monthly (for COVID-19 disinformation), six-monthly or yearly reports from signatories (depending on the size of the entity), but what’s been provided so far doesn’t add up to a comprehensive picture of disinformation activity and platform reaction, she said.
She also warned that online manipulation tactics are fast evolving and highly innovative — while saying the Commission would nonetheless like to see signatories agree on a set of identifiable “problematic techniques” to help speed up responses.
EU lawmakers will be coming with a specific plan for tackling political ads transparency in November, she noted.
They are also, in parallel, working on how to respond to the threat posed to European democracies by foreign interference cyberops — such as the aforementioned influence operations often found hosted on Facebook’s platform.
The commissioners did not give many details of those plans today but Jourova said it’s “high time to impose costs on perpetrators” — suggesting that some interesting possibilities may be being considered, such as trade sanctions for state-backed disinformation ops (although attribution would be one challenge).
Breton said countering foreign influence over the “informational space” is important work to defend the values of European democracy.
He also said the Commission’s anti-disinformation efforts would focus on support for education, to help equip citizens with the critical thinking capabilities needed to navigate the huge quantities of variable-quality information that now surround them.