
How can we regulate platforms to fight fake news online?

Aurélien Brest
13/04/2020

Digital platforms are often heavily criticised for the way they manage the flow of information. Google, Facebook and Twitter in particular have been accused of interfering with democracy in Europe and the United States by allowing, or even promoting, fake news and harmful content. Many initiatives to curb this disinformation are currently under way, led by governments but also by the platforms themselves, which have responded to the criticism by rolling out moderation and transparency tools.

Aurélien Brest
Research manager in cognitive psychology at the Fondation Descartes

Background

For several years now, digital platforms have been denounced for interfering with democracy in the United States and Europe. The 2016 US presidential election set off a barrage of criticism from the Democrats: Facebook was accused of having been used by disinformation networks, particularly from Russia, to facilitate Trump's rise to power with aggressive campaigns. Three years later, an attack on two mosques in Christchurch, New Zealand, was broadcast on several platforms, including Facebook and YouTube (a Google subsidiary), which were then incriminated for giving terrorism a stage. That same year, a former Google employee accused the company of a “liberal” bias running through the entire organisation, maintaining that he was fired for his own conservative political beliefs. Although these critiques are sometimes contradictory, with each side denouncing platforms as a loudspeaker for the other, they converge in condemning the laissez-faire attitude that has prevailed in the digital sphere until now. Responses, however, differ sharply between Europe and the United States, mainly because legislation varies greatly between the two continents.

The Section 230 debate in America

In the United States, the platform debate centres on Section 230 of the Communications Decency Act of 1996, the key provision governing platform liability. It specifies that digital platforms cannot be held liable for:

a) the content that they host, or

b) the actions that they take in good faith to moderate the content they host.

Jeff Kosseff, author of a book on Section 230 (The Twenty-Six Words That Created the Internet, Cornell University Press, 2019), explains that this “legal shield”, which prevents platforms from being held liable for hosting and moderating content, was initially created to protect platforms that screen illegal content. The proposal of this section was triggered by the 1995 case Stratton Oakmont, Inc. v. Prodigy Services Co., in which online service provider Prodigy was held liable for defamatory speech that appeared on its bulletin board. Because Prodigy moderated content, the judge considered that it exercised editorial control; the content it distributed was therefore requalified as published content, which opened it up to liability. The verdict discouraged platforms from moderating the content they hosted. Section 230 responded by stating that hosting providers bear no liability for the content on their sites, even when they moderate it.

Since then, the main exception carved out of this shield in America concerns content that advertises sex trafficking. Facebook, for example, was not held liable for hosting terrorist content. In one lawsuit, its algorithm was accused of facilitating the coordination and visibility of a terrorist group through its recommendation, suggestion and distribution tools, but the judge cleared Facebook of all liability for the organisation of terrorist activity, explaining his verdict as follows:

Merely arranging and displaying others’ content to users of Facebook through such algorithms – even if the content is not actively sought by those users – is not enough to hold Facebook responsible as the ‘developer’ or ‘creator’ of that content. [1]

Section 230 thus provides platforms with a powerful legal tool, and one that has caused a great deal of displeasure.

Criticism of Section 230, however, varies with political allegiance: Republicans object to how platforms moderate content, whereas Democrats focus more on what they host.

In June 2019, Republican Senator Josh Hawley proposed an amendment to Section 230 that would require platforms to demonstrate political neutrality in their content moderation. Under this controversial bill, which has not yet been voted on by the Senate, the Federal Trade Commission (FTC) would have to certify every content-hosting platform with more than 30 million monthly users. A platform denied certification (because the FTC judged its moderation not to be neutral) would lose the protection of Section 230 and be exposed to prosecution.

Two key Democrats, Nancy Pelosi and Joe Biden, have advocated repealing Section 230 outright. In an interview with the New York Times, Biden stated:

[Section 230] should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms. (…) It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy. You guys still have editors. I’m sitting with them. Not a joke. There is no editorial impact at all on Facebook. None. None whatsoever. It’s irresponsible. [2]

Contrary to the premise of Section 230, Joe Biden considers that Facebook is a media organisation and that it knowingly distributes disinformation. Alongside other Democratic candidates, he has been watching closely the measures taken by the European Union to control digital platforms.

Europe's Code of Practice

The e-Commerce Directive, adopted by the European Union in 2000, protects digital platforms from liability for illegal content that they may host. This protection applies only if the platform in question is unaware that it is hosting illegal activity. In practice, however, digital platforms are almost completely protected: they are not legally required to track illegal content and can therefore plead ignorance. Although the topic has been much discussed, there is at present, unlike in the United States, no official attempt in Europe to end this system of non-liability. The new European Commission, it should be noted, remains vague on the subject: on 30 January 2020, Commission Vice-President Věra Jourová advocated pushing platforms to be “more accountable and responsible”, without specifying the mechanism for doing so.

For now, the EU's strategy instead favours self-regulation.

In 2018, the EU drew up a “European Code of Practice on Disinformation”, which Google, Facebook, Twitter, Mozilla and Microsoft have signed.

The Code of Practice sets out eleven measures with which each signatory platform undertakes to comply. Every month, each signatory reports to the European Commission, which will evaluate the system as a whole in 2020.

The suggested measures are based on three principles.

The first is to fight disinformation by making it less financially attractive. Online, the main source of revenue remains advertising: any advertising space on a website (or on hosted video content, for example) earns revenue via an ad manager that connects the host of the ad with the advertiser who pays for it to be distributed. Fake news websites are no exception, as they too carry advertising space, and the more regularly a website is visited, the more its advertising revenue grows. Popular fake news websites (and some certainly are popular) can therefore generate significant revenue. Several studies have shown that disinformation can prove lucrative, to the extent that certain disinformation campaigns are driven mainly by financial motives. A report from the Global Disinformation Index found that disinformation sites earned 235 million dollars in advertising revenue. Google represents the biggest part of the market, as it is the main advertising manager through its AdSense system, which puts hosts in contact with advertisers. Given that this figure covers only the websites sampled by the study, the total is very likely higher. By demonetising disinformation sites, the Commission is trying to dry them up.
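To make the economics concrete, here is a minimal sketch of the underlying arithmetic, in Python; the traffic, ad-slot and CPM figures are invented for illustration only, as real rates vary widely by market and ad network.

```python
# Minimal sketch of ad-revenue arithmetic for a content site.
# All figures (visits, ad slots, CPM) are invented for illustration.

def monthly_ad_revenue(monthly_visits: int, ads_per_page: int, cpm_usd: float) -> float:
    """Estimate monthly revenue as impressions / 1000 * CPM (cost per mille)."""
    impressions = monthly_visits * ads_per_page
    return impressions / 1000 * cpm_usd

# A fake news site drawing 2 million visits a month, with 3 ad slots per
# page and a $2 CPM, would gross on the order of $12,000 a month...
revenue = monthly_ad_revenue(monthly_visits=2_000_000, ads_per_page=3, cpm_usd=2.0)
print(f"Estimated monthly revenue: ${revenue:,.0f}")

# ...and demonetisation (being dropped by the ad manager) cuts that to zero,
# which is precisely the lever the Code of Practice asks ad networks to pull.
```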

The second principle relies on technical procedures to combat disinformation. Disinformation is particularly difficult to fight because it uses computer programs (bots) to spread content automatically and draws on many different sources, which makes it impossible to counter without computational help. A study of 500,000 tweets found that 14% of the accounts spreading fake news were managed by bots. Sifting through such a volume of data manually would clearly be unimaginable, which is why the European Commission is pushing platforms to develop techniques for identifying disinformation practices online.
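As an illustration of what such automated triage can look like, here is a minimal sketch in Python of a heuristic bot filter; the account fields and thresholds are invented for illustration and are not any platform's actual detection criteria.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    account_age_days: int
    followers: int
    following: int

def looks_automated(acc: Account) -> bool:
    """Crude behavioural flags, not any platform's real detection criteria."""
    if acc.tweets_per_day > 100:  # humans rarely sustain such a rate
        return True
    if acc.account_age_days < 7 and acc.tweets_per_day > 20:  # brand-new yet hyperactive
        return True
    if acc.following > 0 and acc.followers / acc.following < 0.01:  # mass-follows, unfollowed
        return True
    return False

accounts = [
    Account("news_fan_42", 8, 900, 310, 280),
    Account("breaking24x7", 450, 12, 30, 5000),
]
print([a.handle for a in accounts if looks_automated(a)])  # ['breaking24x7']
```

Real systems combine many more signals (network structure, content similarity, timing patterns) in statistical models, but the point stands: only automated filters of this kind scale to hundreds of thousands of accounts.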

The third principle is founded on what the European Commission calls best practices for information. It calls on platforms to be more transparent about targeted advertising and to favour reliable, authentic information, while respecting freedom of expression. This effort joins other initiatives in support of quality journalism. The Code promotes the idea of “issue-based ads”, which cover political but also social issues, in order to single out the ads that should be treated as priorities.

The European Commission considered that the priority should be controlling these issue-based ads and demonetising disinformation content.

However, the Code established by the European Commission has not escaped criticism. The Sounding Board convened by the Commission to assess the text, made up of experts, journalists and representatives of civil society, did not respond favourably. It criticised the Code for proposing no common approach, containing no clear and meaningful commitments, and imposing no measurable objectives or KPIs. In its view the general approach did not even amount to self-regulation, and the objective of the Code of Practice cannot be achieved as the text stands. The Sounding Board's opinion is available in its entirety.

The European Commission's annual report (October 2019) recorded a more positive assessment, noting a good performance around the 2019 European elections: no widespread disinformation campaign was detected, and the digital platforms were congratulated for making their targeted advertising more transparent during the elections.

It particularly commended Facebook, which created an “ad library” on its site where anyone can freely consult the list of groups that have published advertising on Facebook and how much they spent. Facebook also took up the concept of issue-based ads in its own way, offering a search tab dedicated specifically to “issue, electoral or political” ads. The Commission is insisting that all other platforms implement an advertising policy with this level of transparency.
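Facebook also exposes this ad library programmatically through its Graph API. Below is a minimal sketch in Python of a query against the public Ad Library API; the endpoint, parameter and field names follow Facebook's published documentation at the time of writing, but the access token is a placeholder and the API version may need updating.

```python
import requests

# Minimal sketch of a query against Facebook's public Ad Library API.
# Requires a Graph API access token from an identity-verified developer
# account; the token below is a placeholder.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

resp = requests.get(
    "https://graph.facebook.com/v6.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",  # the 'issue, electoral or political' category
        "ad_reached_countries": "FR",
        "search_terms": "election",
        "fields": "page_name,ad_creative_body,spend,ad_delivery_start_time",
    },
    timeout=30,
)
resp.raise_for_status()

# Each entry names the advertiser page and its declared spend bracket.
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("spend"))
```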

However, it should be noted that YouTube, which belongs to Google, was recently the subject of an AVAAZ investigation [3] for recommendation practices that promote climate disinformation.

As for Twitter, it has banned all political advertising during electoral campaigns. It has also provided its data for research, which the Commission applauded: scientists generally struggle to study online platforms because access to data is strictly controlled, a policy that makes it hard to evaluate objectively how disinformation works online.

Microsoft has likewise decided to stop publishing political advertising, as part of stepped-up moderation of the disinformation circulating on its networks.

Despite these steps forward, the European Commission is encouraging digital actors to intensify their actions against disinformation. It recommends making user-facing tools more widespread, so that information on platforms can be identified, contextualised and verified.

In France

Article 12 of the Law of 22 December 2018 on combating the manipulation of information states that online platforms must cooperate with the Conseil supérieur de l’audiovisuel (Higher Audiovisual Council, CSA) to fight the spread of fake news. The CSA has accordingly created a team and a committee of experts dedicated to the fight against online disinformation. It has also published an advisory text with seven measures that platforms are invited to adopt. These recommendations explicitly extend the initiatives of the EU and the European Commission. Platforms are encouraged to:

  1. Propose tools that allow internet users to act against disinformation: reporting, alerts, easy access to contextual information for judging whether the information is reliable, etc. (a minimal sketch of such a reporting tool follows this list).
  2. Make platform algorithms transparent.
  3. Make independently verified information, press content and audiovisual content more visible. The report refers to “labelling processes” that increase visibility; we recently discussed these labels in an article on our site.
  4. Ramp up the fight against disinformation by improving technical detection processes and methods for removing harmful information, as well as by facilitating academic research on the subject.
  5. Make the nature of published content transparent, in particular by differentiating sponsored and advertising content from journalistic content.
  6. Promote all initiatives against disinformation, especially media education.
  7. Provide an annual report to the CSA on how the recommendations have been implemented.
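To give the first recommendation some shape, here is a minimal sketch in Python of the payload a user-facing reporting tool might carry; the field names and categories are invented for illustration and correspond to no platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical payload for a user-facing "flag this content" tool.
# Field names and categories are invented for illustration; each
# platform defines its own reporting schema.
REPORT_CATEGORIES = {"false_information", "manipulated_media", "undisclosed_sponsoring"}

@dataclass
class DisinformationReport:
    content_url: str
    category: str
    comment: str = ""
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self) -> None:
        # Reject categories the moderation pipeline does not know about.
        if self.category not in REPORT_CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

report = DisinformationReport(
    content_url="https://example.com/post/123",
    category="false_information",
    comment="Claim contradicted by official election results.",
)
print(report)
```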

Conclusion

The CSA is using a method rather similar to the European Commission's Code of Practice: a list of recommendations and annual monitoring. This shows that practices against disinformation are being harmonised at the EU level. Disinformation is a worldwide yet diffuse phenomenon, which calls for both large-scale and local approaches. The strategy adopted by the EU and its member states to regulate digital platforms is being closely watched by the Americans, and the 2020 US presidential election will certainly strengthen the call for platforms to take solid action against disinformation. The way disinformation was contained during the 2019 European elections could well serve as a model for future campaigns.

For more information:

An interactive map by the site Poynter lists initiatives against disinformation throughout the world.

  1. https://www.reuters.com/article/us-facebook-lawsuit/facebook-defeats-appeal-in-u-s-claiming-it-aided-hamas-attacks-in-israel-idUSKCN1UQ1YR
  2. https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html
  3. AVAAZ is an online platform that aims to bring citizens of the world together to coordinate on political and social projects. The platform is entirely funded by its members, just over 56 million worldwide.