AI is an excuse for Facebook to keep messing up

Over some 10 accumulated hours of hearings spread across two days, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence.

Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI.

It's not even clear what Zuckerberg means by "AI" here. He repeatedly mentioned how Facebook's detection systems automatically take down 99 percent of "terrorist content" before any kind of user flagging. In 2017, Facebook announced that it was "experimenting" with AI to detect language that "might be advocating for terrorism," presumably a deep learning technique. It's not clear whether deep learning is actually part of Facebook's automated system. (We emailed Facebook for clarification and have yet to receive a response.) But we do know that AI is still in its infancy when it comes to understanding language. As James Vincent of The Verge concludes from his reporting, AI is not up to the task when it comes to the nuances of human language, and that's without even accounting for the edge cases where humans themselves disagree. In fact, AI may never be able to handle certain categories of content, such as fake news.
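To see why nuance is so hard to automate, consider a deliberately crude sketch (the keywords and sample posts below are invented for illustration): a filter built on surface features cannot distinguish advocacy from news reporting or counter-speech, and a trained classifier faces the same ambiguity whenever humans themselves disagree on the labels.

```python
# A naive moderation filter: flag any post containing a trigger word.
# Hypothetical keywords and texts, purely for illustration.
TRIGGER_WORDS = {"attack", "bomb", "jihad"}

def naive_flag(post: str) -> bool:
    """Flag a post if any trigger word appears, ignoring all context."""
    return bool(set(post.lower().split()) & TRIGGER_WORDS)

advocacy   = "join us and bomb their cities"            # advocacy: should be removed
reporting  = "police say the bomb attack killed three"  # journalism: should stay
condemning = "we mourn and condemn the bomb attack"     # counter-speech: should stay

for post in (advocacy, reporting, condemning):
    print(naive_flag(post), "->", post)
# All three print True: the filter removes the journalism and the
# condemnation right along with the advocacy.
```

Swapping in a deep learning classifier changes the features, not the underlying problem: the model can only be as discriminating as its labels, and on nuanced speech the labels themselves are contested.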

Beyond that, the types of content Zuckerberg focused on were images and videos. From what we know about Facebook's automated system, it is, in essence, a lookup against a shared database of hashes. If you upload a beheading video that has previously been identified as terrorist content and entered into the database, whether by Facebook or one of its partners, it will be automatically recognized and removed. "It's hard to differentiate between that and the early days of the Google search engine, from a technological perspective," says Ryan Calo, a law professor and director of the Tech Policy Lab at the University of Washington. "If that was AI, then this is AI."
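For concreteness, here is a minimal sketch of hash-based matching (the names and data are hypothetical; real deployments use perceptual hashes, PhotoDNA-style fingerprints that survive re-encoding and cropping, rather than the exact-match hash used here):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Fingerprint an upload's raw bytes. SHA-256 is a stand-in here:
    it only catches exact copies, while production systems use
    perceptual hashes that tolerate small modifications."""
    return hashlib.sha256(content).hexdigest()

# A toy shared database of fingerprints for content previously labeled
# as terrorist material by the platform or one of its partners.
known_bad = {fingerprint(b"<bytes of a previously flagged video>")}

def moderate(upload: bytes) -> str:
    """Look the upload's fingerprint up in the shared database."""
    if fingerprint(upload) in known_bad:
        return "removed: matched previously identified content"
    return "published: no match, so novel content sails through"

print(moderate(b"<bytes of a previously flagged video>"))   # removed
print(moderate(b"<a brand-new video nobody has labeled>"))  # published
```

The lookup is only as good as the database: content has to be identified by a human reviewer or a partner at least once before the system can catch reuploads, which is why this looks more like search than like science fiction.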

That's the great thing about the AI excuse: artificial intelligence is a broad term that can cover automation of all kinds, machine learning, or, more specifically, deep learning. It's not necessarily wrong to call Facebook's automated removal system AI. But say "artificial intelligence" in front of a body of lawmakers and they'll start imagining AlphaGo, or, more fantastically, Skynet and C-3PO taking down terrorist beheading videos before anyone sees them. None of them is imagining Google search.

Invoking AI is a maneuver deployed on a lay audience that, for the most part, unfortunately swallowed it whole. The one exception may have been Senator Gary Peters (D-MI), who followed up with a question about AI transparency: "You also know that artificial intelligence is not without its risks, and that you have to be very transparent about how those algorithms are constructed." Zuckerberg's response was to acknowledge that this was a "really important" question and that Facebook has an entire AI ethics team working on the issue.

"I do not believe that in 10 or 20 years, in the future that we all want to build, we want to end systems that people do not understand how they are making decisions," Zuckerberg said.

Again and again in the hearings, Zuckerberg said he was confident that within five to ten years, Facebook would have sophisticated AI systems up to the challenge of handling even linguistic nuance. Give us five to ten years, and we'll have this all figured out.

But the point isn't just that Facebook has failed to scale its content moderation. It failed to detect entire categories of misbehavior it should have accounted for: deliberate disinformation campaigns run by nation-states, the spread of fake news (whether by nation-states or mere profiteers), and data leaks like the Cambridge Analytica scandal. It has not been transparent about its moderation decisions even when those decisions are driven by human intelligence. It has failed to reckon with its growing importance in the media ecosystem, failed to safeguard user privacy, failed to anticipate its role in the genocide in Myanmar, and may well have failed to safeguard American democracy.

Artificial intelligence cannot solve the problem of not knowing what the hell you are doing and not much caring one way or the other. It is not a solution for a lack of foresight and a lack of transparency. It's an excuse that deflects from the question actually at hand: whether and how to regulate Facebook.

In fact, advances in artificial intelligence suggest that the law should change to keep pace with the technology, not that they justify a hands-off approach.

Artificial intelligence is just another new tool, one that can be used for good or ill, and one that brings new dangers and downsides as well. We already know that although machine learning has huge potential, data sets with ingrained biases will produce biased results: garbage in, garbage out. Software used to predict recidivism in criminal defendants produces racially biased results, and more sophisticated AI techniques will only make those decisions more opaque. That kind of opacity is a big problem when machine learning is deployed with the purest of intentions. It's an even bigger problem when machine learning is deployed to better target consumers with ads, a practice that, even without machine learning, allowed Target to figure out a teenager was pregnant before her parents did.
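To make "garbage in, garbage out" concrete, here is a deliberately trivial sketch (the groups, records, and labels are all hypothetical): a model fit to skewed historical labels reproduces the skew, even when the underlying records are identical.

```python
from collections import Counter

# Hypothetical training data: past human decisions labeled group B
# "high risk" more often than group A for the very same record.
training_data = [
    # (group, prior_offenses, label assigned in the past)
    ("A", 1, "low risk"), ("A", 1, "low risk"), ("A", 1, "low risk"),
    ("B", 1, "high risk"), ("B", 1, "high risk"), ("B", 1, "low risk"),
]

def train(rows):
    """'Learn' the most common past label per group -- a stand-in for
    any model that minimizes error on the data it is handed."""
    by_group = {}
    for group, _, label in rows:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(training_data)

# Two defendants with identical records get different predictions,
# because the model faithfully reproduces the bias in its inputs.
print(model["A"])  # low risk
print(model["B"])  # high risk
```

A real model would use many more features, which makes the same failure harder to see, not less likely to occur.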

"At the same time, it's said that AI changes everything: it changes the way we do everything, it's a game -changer- but nothing should change," says Ryan Calo. "One of these things can not be correct, or everything is publicity and we should not overreact, or it represents a legitimate radical change, it's really misleading to argue that the reason why we should get out of AI's way is that it's very transformer ".

If it wasn't already clear that "wait and see what technological wonders come along" is just a stalling tactic, Facebook's approach to privacy makes it obvious that the company is more than willing to stall forever. At one point in Wednesday's hearing before the House Energy and Commerce Committee, Zuckerberg said in response to a question about privacy: "I think we're going to figure out what the social norms and the rules that we want to put in place are. Then, five years from now, we'll come back and we will have learned more things. Either that will just be because social norms have evolved and the company's practices have evolved, or we will have put rules in place."

Five years? We're going to wait five more years to figure out user privacy? It has been 14 years since Facebook was founded; there are people of voting age who can't remember a time before Facebook. Facebook was being criticized for privacy missteps as early as 2006, when it launched News Feed without telling users what it would look like or how their privacy settings would affect what their friends saw. In 2007, it launched Beacon, which injected information about users' purchases into News Feed, a decision that resulted in a class-action lawsuit settled for $9.5 million. The company has been operating under an FTC consent decree over its privacy failings since 2011, a consent decree it may now be in violation of because of the Cambridge Analytica scandal.

In citing the AI excuse, Mark Zuckerberg is simply preparing to stumble from one ethical quagmire into the next. He didn't know what he was doing when he created Facebook, and to be fair, nobody did. When Facebook launched, it plunged headlong into a brave new world. Nobody knew that the cost of connecting the people of the world in order to collect advertising revenue was going to be Cambridge Analytica.

But the clues were there from the beginning: privacy advocates warned again and again against aggressive, indiscriminate data collection, others worried about the creepiness of ad targeting, and experts raised concerns about social media's effects on elections.

Give Facebook five to ten years to fix its problems, and in five to ten years, Mark Zuckerberg will be testifying before Congress yet again, this time about the unintended consequences of its use of artificial intelligence.
