Why AI isn’t going to solve Facebook’s fake news problem

Facebook has a lot of problems right now, but one that definitely isn't going away in the short term is fake news. As the company's user base has grown to include more than a quarter of the world's population, it has (understandably) struggled to control what they all publish and share. Unwanted content on Facebook ranges from mild nudity to serious violence, but what has proven most sensitive and damaging for the company is hoaxes and misinformation, especially when they have a political bent.

What's Facebook going to do about it? Right now, the company doesn't seem to have a clear strategy. Instead, it's throwing a lot at the wall and seeing what sticks. It has hired more human moderators (as of February this year it had around 7,500); it's giving users more on-site information about news sources; and in a recent interview, Mark Zuckerberg suggested that the company could set up some kind of independent body to rule on what content is kosher. (Which, depending on your point of view, could be considered democratic, an abdication of responsibility, or an admission that Facebook is out of its depth.) But one thing experts say Facebook must be extremely careful about is handing the whole job over to AI.

So far, the company only seems to be experimenting with this approach. In an interview with The New York Times about the Cambridge Analytica scandal, Zuckerberg revealed that for last year's special election in Alabama, the company "implemented some new artificial intelligence tools to identify fake accounts and false news." He specified that these were Macedonian accounts (an established hub of the for-profit fake news business), and the company later clarified that it had used machine learning to find "suspicious behavior without evaluating the content itself."

This is smart, because when it comes to fake news, AI isn't up to the job.

The challenges of building an automated, AI-powered fake news filter are numerous. From a technical perspective, AI falls short on a number of levels because it simply can't understand human writing the way humans do. It can extract certain facts and do a crude sentiment analysis (guessing whether a piece of content is "happy" or "angry" based on keywords), but it can't grasp subtleties of tone, consider cultural context, or ring someone up to corroborate information. And even if it could do all that, which would knock out the most obvious misinformation and hoaxes, it would eventually run into edge cases that confuse even humans. If people on the left and the right can't agree on what is and isn't "fake news," there's no way we can teach a machine to make that judgment for us.
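To make the "crude sentiment analysis" point concrete, here's a minimal, purely illustrative sketch of keyword-based scoring in Python. The word lists and example sentence are invented for this sketch, and real systems are far more sophisticated; the point is simply that counting keywords says nothing about sarcasm, context, or truth.

```python
# A deliberately crude keyword-based sentiment scorer (illustrative only).
# Word lists and the example sentence are invented for this sketch.
HAPPY_WORDS = {"great", "love", "wonderful", "tasty"}
ANGRY_WORDS = {"terrible", "hate", "awful", "dangerous"}

def crude_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in HAPPY_WORDS for w in words) - sum(w in ANGRY_WORDS for w in words)
    return "happy" if score > 0 else "angry" if score < 0 else "neutral"

# Keyword counting can't see sarcasm, context, or accuracy:
print(crude_sentiment("I love how this wonderful rumor is totally true"))  # prints "happy"
```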

Past efforts to tackle fake news with AI have quickly run into problems, as with the Fake News Challenge, a competition held last year to crowdsource machine learning solutions to the problem. Dean Pomerleau of Carnegie Mellon University, who helped organize the challenge, tells The Verge that he and his team soon realized AI couldn't tackle this on its own.

"In fact, we started with a more ambitious goal of creating a system that could answer the question" Is this news false, yes or no? "We quickly realized that machine learning I just was not up to the task. " [19659010] Pomerleau emphasizes that understanding was the main problem, and to understand why exactly language can be so nuanced, especially online, we can draw on the example set by Tide pods. As Cornell professor James Grimmelmann explained in a recent essay on false news and platform moderation, the embrace of irony on the internet has made it extremely difficult to judge sincerity and intention. And Facebook and YouTube discovered it when they tried to delete the Tide Pod Challenge videos in January of this year.

A YouTube thumbnail of a video that may be endorsing the Tide Pod Challenge, or warning against it, or a combination of both.
Image: YouTube / Leonard

As Grimmelmann explains, in deciding which videos to remove, the companies faced a dilemma. "It's easy to find videos of people holding Tide Pods, lovingly remarking on how tasty they look, and then giving a finger-wagging speech about not eating them because they're dangerous," he says. "Are these sincere anti-pod public service announcements, or are they riding the wave of interest in pod-eating while superficially claiming to denounce it? Or both at once?"

Grimmelmann calls this effect "memetic kayfabe," borrowing the pro wrestling term for the willing suspension of disbelief by audience and performers alike. He also says this opacity of meaning isn't limited to meme culture; it has been adopted by political partisans, who are often responsible for creating and sharing fake news. Pizzagate is the perfect example, says Grimmelmann, since it is "at once a sincerely believed conspiracy theory, a bad-faith hoax of a conspiracy theory, and a contemptuous meme about conspiracy theories."

So if Facebook had chosen to block any Pizzagate story during the 2016 election, it would likely have faced not only complaints of censorship, but also protests that such stories were "just a joke." Extremists frequently exploit this ambiguity, as was best demonstrated in the leaked style guide of neo-Nazi website The Daily Stormer. Founder Andrew Anglin advised would-be writers that "the unindoctrinated should not be able to tell if we are joking or not," before making it clear that they are not: "This is obviously a ploy and I actually do want to gas kikes. But that's neither here nor there."

Given this complexity, it's not surprising that Pomerleau's Fake News Challenge ended up asking teams to complete a simpler task: building an algorithm that can simply spot articles covering the same topic. This, it turned out, they were pretty good at.

With this sort of tool, a human fact-checker could label a story as fake news (a false claim that a certain celebrity has died, for example), and the algorithm would then flag any coverage repeating the lie. "We talked to real-life fact-checkers and realized they were going to be in the loop for quite some time," says Pomerleau. "So the best thing we could do as a machine learning community was help them do their jobs."
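To give a rough sense of how this kind of tool could work, here is a hedged sketch (not the Fake News Challenge's actual system) using TF-IDF vectors and cosine similarity from scikit-learn: once a fact-checker marks one story as debunked, text similarity can surface other articles repeating the same claim for human review. The headlines and the 0.3 threshold are invented for illustration.

```python
# Illustrative sketch: surface articles similar to a story a human fact-checker
# has already debunked. Headlines and the threshold are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = "Famous celebrity found dead at home, sources say"
incoming = [
    "Celebrity dead at 54, according to unnamed sources",
    "City council approves new budget for road repairs",
    "Shock report claims the celebrity has died",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([debunked] + incoming)

# Similarity of each incoming article to the already-debunked story.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for headline, score in zip(incoming, scores):
    if score > 0.3:  # arbitrary threshold for this sketch
        print(f"flag for human review ({score:.2f}): {headline}")
```

The machine does the repetitive matching; the judgment call about what counts as a lie stays with the human, which is exactly the division of labor Pomerleau describes.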

This seems to be Facebook's preferred approach. For this year's Italian elections, for example, the company engaged independent fact-checkers to flag fake news and hoaxes. Problematic links weren't removed, but when shared by a user they were labeled as "Disputed by third-party fact-checkers." Unfortunately, even this approach has problems, with a recent report from the Columbia Journalism Review highlighting fact-checkers' numerous frustrations with Facebook. The journalists involved said it was often unclear why Facebook's algorithms sent them certain stories to verify, while sites known for spreading lies and conspiracy theories (such as InfoWars) never came up for review.

That said, there is definitely a role for algorithms in all this. While AI can't do the heavy lifting of eliminating fake news, it can filter some of it out the same way spam filters keep junk out of your inbox. Anything riddled with spelling and grammar mistakes can be weeded out, for example, as can sites that rely on imitating legitimate outlets to lure readers. And as Facebook showed with its targeting of Macedonian accounts "that were trying to spread false news" during the Alabama special election, it can be relatively easy to spot fake news when it comes from known trouble spots.
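As a hedged illustration of what that spam-filter-style screening might look like, the sketch below checks two of the simple signals just mentioned: a crude misspelling ratio and domains that imitate legitimate outlets. The vocabulary, domains, and thresholds are all invented for this example; they are not Facebook's actual signals.

```python
# Illustrative heuristics only: crude signals of the kind a spam-style filter
# might use. Vocabulary, domains, and thresholds are invented for this sketch.
import difflib

KNOWN_WORDS = {"the", "a", "president", "said", "today", "news", "report", "officials"}
LEGIT_DOMAINS = ["nytimes.com", "bbc.co.uk", "washingtonpost.com"]

def misspelling_ratio(text: str) -> float:
    """Fraction of words not found in a (tiny, illustrative) dictionary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    unknown = [w for w in words if w and w not in KNOWN_WORDS]
    return len(unknown) / max(len(words), 1)

def looks_like_impostor(domain: str) -> bool:
    """Flag domains suspiciously close to, but not equal to, a real outlet."""
    for legit in LEGIT_DOMAINS:
        if domain != legit and difflib.SequenceMatcher(None, domain, legit).ratio() > 0.8:
            return True
    return False

print(looks_like_impostor("nytimes.com.co"))                 # True: imitation domain
print(misspelling_ratio("Presidant sed tooday the report"))  # high ratio, worth a closer look
```

Signals like these are cheap to compute at scale, which is their appeal; as the next paragraph notes, it's also their weakness.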

Experts say, though, that this is about the limit of AI's current capabilities. "This kind of whack-a-mole could help filter out teens trying to get rich out of Tbilisi, but it's unlikely to affect more coherent, large-scale offenders like InfoWars," Mor Naaman, an associate professor of information science at Cornell Tech, tells The Verge. He adds that even these simpler filters can create problems. "Classification is often based on language patterns and other simple signals, which can 'catch' honest independent and local publishers along with producers of fake news and misinformation," says Naaman.

And even here there's a potential dilemma for Facebook. To avoid accusations of censorship, the social network should be open about the criteria its algorithms use to spot fake news; but if it's too open, people could game the system and work their way around its filters.

For Amanda Levendowski, a law professor at New York University, this is an example of what she calls the "Valley Fallacy." Speaking to The Verge about Facebook's use of AI for moderation, she suggests it's a common mistake: "where companies start saying, 'We have a problem, we must do something, this is something, so we must do this,' without carefully considering whether it might create new or different problems." Levendowski adds that despite these problems, there are plenty of reasons tech firms will keep pursuing AI moderation, ranging from "improving user experiences to mitigating the risks of legal liability."

These are surely temptations for Zuckerberg, but even so, leaning too heavily on AI to solve his moderation problems would be reckless. And it's not something he'd want to have to explain to Congress next week.