24 January 2018

Fake News: Defining and Defeating



Following U.S. President Donald Trump’s decision to present his media critics with so-called “fake news awards,” it is more important than ever to define what “fake news” actually is, and what it is not. President Trump’s repeated accusations of fake news are dangerous both for journalists around the world, and for the integrity of free and fair democratic discourse in America.

(Source: @realdonaldtrump / Twitter)

Research by the Committee to Protect Journalists shows that “when public figures and political leaders lob insults at the media, they encourage self-censorship and expose journalists to unnecessary risk.”

Internationally, in the words of another pressure group which defends journalists, Reporters Without Borders, “Predators of press freedom have seized on the notion of ‘fake news’ to muzzle the media on the pretext of fighting false information.”

One of @DFRLab’s core functions is to identify, expose, and explain deliberate falsehood online, including what is often termed “fake news”.

We publish our methods, to demonstrate how deliberate falsehood, and deliberate mislabelling by authoritarian regimes, can actually be exposed. With transparent methods, we do not assume the credibility of our reporting; we prove it.
What is “fake news”?

There is no universally-accepted definition of “fake news.” BBC Media Editor Amol Rajan has identified three sorts of fake news. According to Huffington Post blogger Dr. John Johnson, there are five sorts. A list by Dr. Claire Wardle, research director of not-for-profit group First Draft and now of the Harvard Shorenstein Center, features seven sorts.

As a working definition, @DFRLab considers that fake news is “deliberately presenting false information as news.”

We differentiate this from disinformation, which we consider to be “deliberately spreading false information;” fake news is thus a subset of disinformation. We further distinguish it from misinformation, which we take to mean the unintentional spreading of false information.

We prefer the term “disinformation” in our reporting, as it is more clearly defined and less subject to hyperpartisan and authoritarian abuse. Defining these terms allows for a clearer distinction between the various sorts of inaccuracy and genres of misleading content polluting the information environment.

Based on the above definitions, two things are necessary to expose and explain either disinformation or fake news:

1. Proof that the claims made are false.

2. Proof that the falsehood was deliberate.

Exposing disinformation is an analytical function. Like any other such function, it requires evidence; and as in any other analytical discipline, accusations of disinformation or fake news without evidence are worthless.

It is extremely concerning that, over the past 18 months, many authoritarian leaders have taken to accusing their critics of “fake news” without providing verifiable or complete evidence (see, for example, the Russian Foreign Ministry here and here, Syrian President Bashar al-Assad here, and Philippines President Rodrigo Duterte here).

The complete rebuttal, by the Russian Foreign Ministry, of a New York Times report: a big red stamp labeled “fake,” and the unevidenced claim that “This article makes false assertions.” (Source: Russian Foreign Ministry)

As U.S. Republican Senator Jeff Flake said in an address on January 17 which directly referenced President Trump’s “fake news” awards:

We are not in a ‘fake news’ era, as Bashar Assad says. We are, rather, in an era in which the authoritarian impulse is reasserting itself, to challenge free people and free societies, everywhere.

Senator Flake, who has announced that he will not stand for re-election later this year, also underscored the importance of factual reporting to democracy.

Without truth, and a principled fidelity to truth and to shared facts, Mr. President, our democracy will not last.

Indeed, invoking “fake news” creates a more favorable environment for falsehood and deception, by devaluing the work of those who report the facts and genuinely expose disinformation. The connotation attached to the term lends a façade of credibility to the accuser by eroding the credibility of the original source.
Proving inaccuracy

The simpler part of the equation is proving inaccuracy. To qualify as fake news, an article must contain significant inaccuracies; these can be identified and exposed using a range of fact-checking techniques. Such techniques can range from simply searching more than one source on any given topic to more sophisticated verification practices.

Fact-checking isn’t the exclusive domain of journalists in a newsroom or engineers at a social media company; it requires only that anyone consuming information apply a healthy degree of skepticism.

One important principle to bear in mind is that fact-checking deals with facts. Facts, by definition, are things which have already happened. The root of the word is the Latin “factum”, literally “a thing which has been done”.

Origin of “fact”. (Source: Oxford Dictionaries)

By that definition, opinions and predictions are not facts, even though they are often reported as such. They may be hateful, improbable or nakedly partisan; they may cite, or be based on, a number of verifiable facts; but the opinions themselves are not facts, and therefore cannot be considered as “fake news”.

A number of online platforms provide primers on how to expose factual inaccuracies, and the fake accounts which spread them. The Public Data Lab and First Draft News have published a handbook on identifying fake news. Ukrainian platform StopFake offers a collection of online tools from various sources; the Bellingcat team of investigative journalists shares regular posts on more advanced techniques. @DFRLab has offered this post on how to spot fake, automated “bot” accounts on Twitter.

(Source: Public Data Lab / First Draft)
Even without these tools, users can perform a basic check on the reliability of a particular article by cross-checking its accuracy via Google, assessing whether it is balanced with voices from both sides, and Googling any quoted sources to establish their credibility. This is the ABC of news analysis.
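For readers who want to automate part of that ABC, the sketch below illustrates the cross-checking step in code. The URLs and key terms are hypothetical placeholders, and counting keyword matches is only a crude proxy for corroboration; it is an illustrative aid, not a substitute for actually reading the sources.

```python
# Minimal sketch of the cross-checking step described above: given one
# distinctive claim and a handful of independently chosen outlets, count how
# many of them also carry the claim's key terms.
import requests

def corroboration_count(key_terms, candidate_urls):
    """Count how many candidate pages mention every key term of a claim."""
    hits = 0
    for url in candidate_urls:
        try:
            text = requests.get(url, timeout=10).text.lower()
        except requests.RequestException:
            continue  # unreachable source: neither confirms nor denies
        if all(term.lower() in text for term in key_terms):
            hits += 1
    return hits

# Hypothetical usage: a claim corroborated by zero independent outlets
# deserves far more skepticism than one reported consistently by several.
sources = ["https://example.com/outlet-a", "https://example.com/outlet-b"]
print(corroboration_count(["3,600 tanks", "Russia"], sources))
```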
Correcting the incorrect

Proving deliberate falsehood is more difficult. However, proving intent allows us to better explain disinformation, in addition to identifying and exposing it. For that reason, once we identify factual errors, we ask a number of follow-up questions.

1. Was the error corrected in a reasonable time frame?

2. If not, is there any proof that the author knew it was an error?

3. If not, could the mistake have been avoided by conducting basic research?

4. Is this a consistent pattern of behavior by the author?

Publishing a correction within a reasonable time frame indicates, to us, that the error was not deliberate, and therefore does not qualify as “fake news”. The definition of what time frame is “reasonable” is flexible, and depends on the magnitude and publishing format of the error; in general, a correction issued within 24 hours can be considered reasonable.

(Source: @DFRLab)
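To make the logic concrete, here is a minimal sketch of that decision flow in code. The 24-hour window and the argument names are illustrative assumptions drawn from this article; @DFRLab’s actual assessments rest on human analysis of the evidence, not on a script.

```python
# A rough encoding of the follow-up questions above, applied to a report
# already shown to be factually false. Thresholds and labels are illustrative.
from datetime import datetime, timedelta

REASONABLE_WINDOW = timedelta(hours=24)  # "in general ... can be considered reasonable"

def classify_false_report(published_at, corrected_at=None,
                          author_knew_it_was_false=False,
                          avoidable_by_basic_research=False,
                          consistent_pattern=False):
    """Return a rough label for a report already shown to be factually false."""
    # Question 1: was the error corrected in a reasonable time frame?
    if corrected_at is not None and corrected_at - published_at <= REASONABLE_WINDOW:
        return "misinformation (honest error, promptly corrected)"
    # Questions 2-4: is there evidence that the falsehood was deliberate?
    if author_knew_it_was_false or (avoidable_by_basic_research and consistent_pattern):
        return "disinformation / fake news (deliberate falsehood)"
    return "unresolved: false, but intent not yet established"

# Hypothetical usage: a false report corrected six hours after publication.
print(classify_false_report(datetime(2017, 12, 8, 9, 0),
                            corrected_at=datetime(2017, 12, 8, 15, 0)))
```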

For example, on December 8, CNN reported inaccurately that President Trump and his son had advance access to emails hacked from his electoral rival, Hillary Clinton, in 2016. CNN published and broadcast a prominent correction the same day.

CNN’s correction; note that it is in bold and at the top of the story. The accompanying video further explained the error. (Source: CNN)

That can be compared with this article from pro-Kremlin site Donbass News International, which inaccurately claimed that the United States was sending “3,600 tanks against Russia.” The figure was wildly exaggerated (the correct number of tanks was 190); over a year later, and despite the fact that @DFRLab debunked the story, the original version remained uncorrected.

The Donbass News International article, screenshot as of January 18, 2018. Note the lack of any correction or acknowledgement of the error. (Source: DNINews.com)

Trump listed the CNN error among his “fake news” awards; however, CNN made a mistake and corrected it publicly, visibly, and reasonably quickly. A separate CNN error in June 2017 led to three journalists resigning. There are certainly questions to ask about quality control at CNN at the time, but to accuse the network of deliberately spreading false information ignores how the outlet responded.

By contrast, the DNI error was allowed to stand uncorrected for more than a year, despite being exposed.
Mistake or mischief?

This was not enough, in itself, to qualify the DNI report as fake news or disinformation: to do so, @DFRLab had to establish whether there was reasonable proof that the author knew they published false information.

The U.S. deployment to Europe in January 2017 involved two main military moves. The first featured the 3rd Armored Brigade Combat Team (3rd ABCT) deploying across Eastern Europe.

As we wrote at the time, “According to an official account published by U.S. Army Europe, the 3rd ABCT’s equipment consists of 446 tracked vehicles, 907 wheeled vehicles and 650 trailers, for a total of 2,003 vehicles of all sorts. Among the heavy armor are 87 M1A1 Abrams tanks, 18 Paladin self-propelled howitzers and 144 Bradley Infantry Fighting Vehicles.”

The wheeled vehicles included trucks and Humvees.

Separately, the U.S. Army placed some 1,600 vehicles — again including Abrams tanks, Paladins and Bradleys, trucks, trailers and Humvees — in storage in the Netherlands. This provided the total of 3,600 vehicles referenced in the headline.
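The arithmetic behind the headline figure can be laid out explicitly. The sketch below simply reproduces the numbers quoted above; of the roughly 190 actual tanks mentioned earlier, only 87 travelled with the 3rd ABCT itself, with the remainder presumably among the prepositioned stock.

```python
# Reproducing the vehicle arithmetic quoted above. The comparison shows how
# roughly 3,600 vehicles of all kinds became "3,600 tanks" in the headline.
tracked, wheeled, trailers = 446, 907, 650
abct_total = tracked + wheeled + trailers      # 2,003 vehicles with the 3rd ABCT
prepositioned = 1600                           # vehicles placed in storage in the Netherlands
all_vehicles = abct_total + prepositioned      # ~3,600: the figure in the DNI headline

abrams_with_abct = 87                          # the only unambiguous tanks in the brigade
claimed_tanks = 3600                           # the DNI headline figure
actual_tanks = 190                             # the correct figure cited earlier in this article

print(all_vehicles)                            # 3603
print(round(claimed_tanks / actual_tanks, 1))  # the claim overstated the tank count roughly 19-fold
```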

The question was whether the Donbass author could credibly have thought that all the vehicles were tanks. The answer was no. Paladins and Bradleys could, at a pinch, be classed as tanks (in that they are armored and carry a main weapon); however, there was no plausible way in which trucks, trailers or Humvees could be mistaken for tanks.



Spot the difference. Above, a Paladin howitzer of the 3rd ABCT arriving in Bremerhaven, January 7, 2017. (Source: EPA / Daily Mail). Below, Humvees of the 3rd ABCT loaded for transit in Poland, January 31, 2017. (Source: army.mil)

Moreover, later in the same article, the author showed an ability to differentiate between tanks, containers and “other vehicles,” in the context of a German deployment.

Quote from the same Donbass News International article. Note the differentiation of vehicles, and the polemical reference to the “war of extermination”. (Source: DNINews.com)

From this, we concluded the author was capable of distinguishing between different types of vehicle, but chose not to. This was therefore a case of deliberately reporting a wildly exaggerated account of the number of U.S. tanks being sent to Europe.
Due diligence (or any diligence at all)

Not all false reports allow us to identify what the author knew with the same degree of certainty. In such cases, the question becomes whether the mistake could have been avoided by basic research and due diligence.

A good example is the report broadcast by Russian state television program “Vesti” on April 15, 2017, which claimed that Russia had used an “electronic bomb” to disable a U.S. Aegis cruiser in the Black Sea three years earlier. Its only source for the claim was an alleged “social media post” by a U.S. sailor.

Screenshot from the Russian state TV report, showing the alleged social media post. (Source: Vesti)

As @DFRLab revealed, the story was a fake: the “social media post” in question was a poor English translation of a Russian parody piece, which was published on a pro-Kremlin website in 2014.

Posts using “Aegis,” “locator,” “mysticism” and “shame” from a Facebook search. Note how each links to a Russian original. (Source: Facebook)

Was it deliberate? We do not have the same level of internal evidence in the report to prove whether the broadcaster knew that it was a fake. However, even a basic search would have revealed the origin of the story: the search terms “Aegis,” “locator,” “mysticism”, and “shame” give as unique a fingerprint as a researcher could wish for.
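As a simple illustration of that “fingerprint” approach, the sketch below checks whether a candidate text contains all four distinctive terms; in practice, the same terms would simply be entered, quoted, into any search engine.

```python
# Sketch of the "fingerprint" idea: a handful of distinctive terms that
# co-occur in very few texts. Any text containing all of them almost
# certainly derives from the same original. The terms are those cited above.
FINGERPRINT = {"aegis", "locator", "mysticism", "shame"}

def matches_fingerprint(text, terms=FINGERPRINT):
    """True if a candidate text contains every distinctive term."""
    lowered = text.lower()
    return all(term in lowered for term in terms)

# A web search for the quoted terms together, e.g.
#   "Aegis" "locator" "mysticism" "shame"
# would surface the 2014 Russian parody piece as the common origin.
```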

Moreover, the story was debunked long before, by no less an authority than Russian arms manufacturer KRET (Концерн Радиоэлектронные технологии, the Concern of Radio-Electronic Technology), which makes the “Khibiny” weapons system alleged to have knocked out the U.S. ship.

As KRET wrote in October 2016:

By the way, nowadays Khibiny is being installed on Su-30, Su-34 and Su-35, so the famous April attack in the Black sea on USS Donald Cook by Su-24 bomber jet allegedly using Khibiny complex is nothing but a newspaper hoax. The destroyer’s buzzing did take place. This EW system can completely neutralise the enemy radar, but Khibiny are not installed on Su-24.

The failure of due diligence was so catastrophic in this case that it should, if unintentional, have resulted in a public correction at the very least, and the disciplining of all those involved. Instead, as of January 18, 2018, the article was still online on Vesti’s YouTube channel, with no acknowledgement that it was based on utter falsehood.

Screenshot from Vesti’s YouTube channel as of January 18, 2018, when it was archived. Note the lack of anything remotely approaching a correction. (Source: Vesti / YouTube)

Thus, the mistake was one which even a basic exercise in due diligence would have exposed; the report repeated a false story which had already been debunked by an authoritative source; yet it was not corrected.

We therefore conclude, based on all the available evidence, that this was deliberate disinformation, designed to boost Russian viewers’ perceptions of their military capabilities at the expense of American capabilities.
Conclusion

Exposing disinformation in this way is time-consuming and complex — again, like most analytical disciplines. It requires multiple levels of evidence, to prove both that the facts were wrong, and that the mistake was deliberate. Indeed, it is easier to produce a falsehood than prove a truth.

For this reason, it is likely that many cases of fake news have not been exposed, for lack of conclusive evidence.

However, we find that working in this way is the only way for any researcher to remain credible, and to defend the integrity of the information system, a process which in turn makes that system more resilient. Accusing an outlet of deliberately presenting false facts is a serious act; doing so irresponsibly, without due levels of evidence, does a grave disservice both to the target of the accusation and to the broader concepts of accuracy and empirical evidence.

Disinformation and “fake news” are genuine problems. Exposing them requires evidence. Those who make the accusation without presenting the evidence may claim to be fighting fake news; in fact, by discrediting the term and everything associated with it, they are making the problem worse. 

@DFRLab will no longer use “fake news” or “#FakeNews” in the headlines or promotion of our research and reporting. The term is imprecise and comes with connotations that dilute the integrity of our work and that of independent journalists and researchers. 

Ben Nimmo is Senior Fellow for Information Defense at the Atlantic Council’s Digital Forensic Research Lab (@DFRLab).

Graham Brookie is Deputy Director and Managing Editor of @DFRLab.

@DFRLab is a non-partisan team dedicated to exposing disinformation in all its forms. Follow along for more from the #DigitalSherlocks.
