An AI speed test shows clever coders can still beat tech giants like Google and Intel

There is a common narrative in the AI world that bigger is better. To train the fastest algorithms, the thinking goes, you need the most extensive data sets and the most powerful processors. Just look at Facebook's announcement last week that it built one of the most accurate object recognition systems in the world using a data set of 3.5 billion images. (All taken from Instagram, of course.) This narrative benefits the tech giants, helping them attract talent and investment, but a recent AI competition organized by Stanford University shows that the conventional wisdom is not always true. Quite fittingly for the field of artificial intelligence, it turns out that brains can still beat brawn.

The test in question is the DAWNBench challenge, which Stanford researchers announced last November and whose winners were declared last week. Think of DAWNBench as a track meet for AI engineers, with the hurdles and long jump replaced by tasks such as object recognition and reading comprehension. Teams and individuals from universities, government departments, and industry competed to design the best algorithms, with Stanford researchers acting as adjudicators. Each entry had to meet a basic accuracy threshold (for example, recognizing 93 percent of the dogs in a given data set) and was judged on metrics such as how long the algorithm took to train and how much that training cost.

These metrics were chosen to reflect the real-world demands of AI, explain Stanford's Matei Zaharia and Cody Coleman. "By measuring the cost […] you can find out whether a smaller group can compete, or whether you need Google-level infrastructure," Zaharia tells The Verge. And by measuring training speed, you know how long it takes to deploy an AI solution. In other words, these metrics help us judge whether small teams can take on the technology giants.
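For the technically inclined, the metric itself is simple to picture. Below is a minimal Python sketch of how a time-to-accuracy benchmark of this kind can be scored. The 93 percent threshold comes from the challenge; the hourly price and the training and evaluation routines are placeholder assumptions, not DAWNBench's actual harness.

```python
import time

# Assumed placeholders: train_one_epoch and evaluate stand in for any
# training step and validation routine; this is not DAWNBench's real harness.
TARGET_ACCURACY = 0.93       # the accuracy threshold an entry must reach
DOLLARS_PER_HOUR = 3.06      # assumed hourly price of the rented hardware

def time_to_accuracy(train_one_epoch, evaluate):
    """Train until the threshold is met, then report time and cost."""
    start = time.time()
    epochs = 0
    accuracy = 0.0
    while accuracy < TARGET_ACCURACY:    # entries are ranked on when this ends
        train_one_epoch()                # one pass over the training data
        accuracy = evaluate()            # accuracy on held-out validation data
        epochs += 1
    hours = (time.time() - start) / 3600
    return {"epochs": epochs, "hours": hours, "cost": hours * DOLLARS_PER_HOUR}
```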

The results do not give a direct answer, but they suggest that raw computing power is not the be-all and end-all of AI success. Ingenuity in algorithm design counts for at least as much. While large technology companies such as Google and Intel predictably put in strong showings across a range of tasks, smaller teams (and even individuals) ranked highly by using unusual and unfamiliar techniques.

Take, for example, one of the DAWNBench object recognition challenges, which required teams to train an algorithm that could identify objects in an image data set called CIFAR-10. Data sets like this one are common in AI and are used for research and experimentation. CIFAR-10 is a relatively old example, but it reflects the kind of data a real company might expect to deal with. It contains 60,000 small images, just 32 by 32 pixels in size, each falling into one of ten categories such as "dog," "frog," "ship," or "truck."
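To give a sense of how accessible the data set is, the short sketch below loads CIFAR-10 using the standard torchvision helper (assuming PyTorch and torchvision are installed); the printed shape and class list match the description above.

```python
import torchvision
import torchvision.transforms as transforms

# Download CIFAR-10: 50,000 training and 10,000 test images, 32x32 pixels,
# each labeled with one of ten classes.
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

print(len(train_set))       # 50000
image, label = train_set[0]
print(image.shape)          # torch.Size([3, 32, 32]): RGB, 32x32 pixels
print(train_set.classes)    # ['airplane', 'automobile', 'bird', 'cat', 'deer',
                            #  'dog', 'frog', 'horse', 'ship', 'truck']
```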

In the DAWNBench league tables, the first three places for the fastest and cheapest algorithms to train were taken by researchers affiliated with one group: Fast.AI. Fast.AI is not a big research lab, but a self-funded outfit that creates learning resources and is dedicated to making deep learning "accessible to all." Jeremy Howard, the institute's co-founder, an entrepreneur and data scientist, tells The Verge that his students' victories came down to thinking creatively, and that this shows anyone can "get world-class results using basic resources."

Howard explains that to create an algorithm to solve CIFAR-10, the Fast.AI group turned to a relatively little-known technique called "super convergence." It was not developed by a well-funded technology company or published in a big journal, but was created and published single-handedly by an engineer named Leslie Smith, working at the Naval Research Laboratory.

Basically, super convergence works by gradually ramping up the rate at which an algorithm learns (its "learning rate") to unusually high levels before easing it back down. Think of it this way: if you were teaching someone to identify trees, you would not start by showing them a forest. Instead, you would present information slowly, starting with individual leaves and species. This is a simplification, but the upshot is that, using super convergence, the Fast.AI algorithms were considerably faster than the competition's. They were able to train an algorithm that could sort CIFAR-10 with the required accuracy in just under three minutes. The next-fastest team that did not use super convergence took more than half an hour.
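Smith's recipe is generally implemented as a "one-cycle" learning-rate schedule, which ramps the rate up to an unusually large peak and then anneals it back down; PyTorch now ships such a scheduler. The sketch below shows the shape of the idea; the toy model, optimizer, peak rate, and step counts are illustrative assumptions, not Fast.AI's actual settings.

```python
from torch import nn, optim

# Toy stand-ins: the real Fast.AI entry trained a deep network on CIFAR-10.
model = nn.Linear(32 * 32 * 3, 10)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

epochs, steps_per_epoch = 30, 100           # assumed values for illustration
scheduler = optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=1.0,                             # unusually large peak learning rate
    epochs=epochs,
    steps_per_epoch=steps_per_epoch,
)

for step in range(epochs * steps_per_epoch):
    # A real loop would compute a loss and call loss.backward() here.
    optimizer.step()   # no-op without gradients; kept so the call order is right
    scheduler.step()   # ramps the rate up to max_lr, then anneals it back down
```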

It was not all one-way traffic, though. In another challenge, which involved using object recognition to sort through a data set called ImageNet, Google scored a near-sweep, taking the first three positions for training time and the first and second for training cost. (Fast.AI ranked third in cost and fourth in time.) However, Google's algorithms all ran on the company's custom AI hardware: chips designed specially for the task, known as Tensor Processing Units, or TPUs. The Fast.AI entries, by comparison, used ordinary Nvidia GPUs running on a single standard cloud PC: hardware that is available to anyone.

Google's Tensor Processing Units (or TPUs) are specialized chips available only from Google.
Photo: Google

"The fact that Google has a private infrastructure that can train things easily is interesting, but perhaps not completely relevant," says Howard. "While discovering that you can do almost the same thing with a single machine in three hours for $ 25 is extremely relevant."

These ImageNet results are revealing precisely because they are ambiguous. Yes, Google's hardware reigned supreme, but is that a surprise when we are talking about one of the richest technology companies in the world? And yes, the Fast.AI students presented a creative solution, but Google's was ingenious too. One of the company's entries made use of what it calls "AutoML": a set of algorithms that searches for the best algorithm for a given task without human direction. In other words, AI that designs AI.
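Google's AutoML relies on far more sophisticated, learned search strategies, but the core idea of algorithms searching for algorithms can be illustrated with a simple random search over candidate model configurations. Everything in the sketch below, from the search space to the scoring function, is an illustrative stand-in rather than anything Google actually runs.

```python
import random

# Illustrative search space; real AutoML systems explore neural architectures
# with far richer spaces and a learned controller instead of random sampling.
SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "width": [64, 128, 256],
    "learning_rate": [0.1, 0.01, 0.001],
}

def sample_config():
    """Draw one candidate 'algorithm design' at random from the space."""
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(config):
    """Placeholder score: a real system would train the candidate model
    and return its validation accuracy."""
    return random.random()

def automl_search(trials=20):
    """AI designing AI, in miniature: keep the best-scoring candidate."""
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = sample_config()
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

print(automl_search())
```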

The challenge of understanding these results is that they are not simply a matter of working out who is best: they have clear social and political implications. Consider, for example, the question of who controls the future of artificial intelligence. Will big technology companies like Amazon, Facebook, and Google use AI to increase their power and wealth? Or will the benefits be shared more equitably and democratically?

For Howard, these are crucial questions. "I do not want deep learning to remain the exclusive preserve of a small number of privileged people," he says. "It really bothers me, talking to young practitioners and students, this message that big is everything. It's a great message for companies like Google because it helps them recruit, because people believe that unless you go to Google you can't do good work. But it's not true."

Unfortunately, we cannot be AI fortune tellers. Nobody can predict the future of the industry by poring over the results of the DAWNBench challenge. And in fact, if this competition's results show anything, it is that the field is still very much in flux. Will small and agile algorithms decide the future of artificial intelligence, or will raw computing power win out? There is no saying, and expecting a simple answer would be unreasonable anyway.

Zaharia and Coleman, two of DAWNBench's organizers, say they are happy the competition elicited so many responses. "There was a tremendous amount of diversity," says Coleman. "I'm not too worried about [one company] taking over the industry simply based on what has happened with deep learning. We are still at a point where there is an explosion of frameworks [and] many shared ideas."

The pair point out that, although it was not a criterion of the competition, the vast majority of DAWNBench entries were open source. This means their underlying code was published online, where anyone can examine it, deploy it, and learn from it. Thus, they say, whoever wins the individual DAWNBench challenges, everyone benefits.
