What is edge computing?

Companies like Amazon, Microsoft and Google have shown us that we can trust them with our personal data. Now is the time to reward that trust by giving them full control over our computers, toasters and cars.

Let me introduce "edge" computing.

Edge is a buzzword. Like "IoT" and "cloud" before it, edge means everything and nothing. But I've been watching some industry experts on YouTube, listening to some podcasts, and even, on occasion, reading articles on the topic. And I think I've come up with a useful definition and some possible applications for this buzzword technology.

What is edge computing?

In the beginning, there was one big computer. Then, in the Unix era, we learned how to connect to that computer using dumb (non-pejorative) terminals. Next we had personal computers, which was the first time regular people really owned the hardware that did the work.

Right now, in 2018, we're firmly in the era of cloud computing. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365 and Slack. Additionally, devices like the Amazon Echo, Google Chromecast and Apple TV are powered by content and intelligence that live in the cloud, as opposed to the DVD box set of Little House on the Prairie or the CD-ROM copy of Encarta I might have enjoyed in the era of personal computing.

As centralized as all this sounds, the truly amazing thing about cloud computing is that a huge percentage of companies around the world now rely on the infrastructure, hosting, machine learning and computing power of a very few cloud providers: Amazon, Microsoft, Google and IBM.

Amazon, by far the biggest of these "public cloud" providers (as opposed to the "private clouds" that companies like Apple, Facebook and Dropbox host themselves), had 47 percent of the market in 2017.

The rise of edge computing as a buzzword I should perhaps pay attention to comes from these companies' realization that there isn't much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the "cloud" lie at the "edge."

So, what is the edge?

The word edge in this context means literal geographic distribution. Edge computing is computing that's done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn't mean the cloud will disappear. It means the cloud is coming to you.

That said, let's get out of the word-definition game and try to examine what people mean practically when they extol edge computing.

Latency

One great driver for edge computing is the speed of light. If Computer A needs to ask Computer B, half a globe away, before it can do anything, the user of Computer A perceives this delay as latency. The brief moments after you click a link, before your web browser actually starts to show anything, are due in large part to the speed of light. Multiplayer video games implement numerous elaborate techniques to mitigate the real and perceived delay between you shooting at somebody and knowing, for certain, that you missed.

Voice assistants typically need to resolve your requests in the cloud, and the round-trip time can be very noticeable. Your Echo has to process your speech, send a compressed representation of it to the cloud, the cloud has to decompress that representation and process it (which might involve pinging another API somewhere, maybe to find out the weather, adding more speed-of-light-bound delay), then the cloud sends your Echo the answer, and finally you can learn that today you should expect a high of 85 and a low of 42, so you give up on dressing appropriately for the weather.
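To make that round trip concrete, here's a back-of-the-envelope latency budget in Python. Every stage duration and the 2,000 km distance are hypothetical round numbers invented for illustration, not measurements of any real service; only the speed-of-light arithmetic is physics.

```python
# Rough latency budget for a cloud-resolved voice query.
# All stage timings and distances are hypothetical estimates.

SPEED_OF_LIGHT_KM_S = 299_792  # in vacuum; real fiber is ~30% slower

def light_delay_ms(distance_km: float) -> float:
    """One-way propagation delay at the speed of light, in milliseconds."""
    return distance_km / SPEED_OF_LIGHT_KM_S * 1000

# Hypothetical stages of an Echo-style request (milliseconds)
budget = {
    "on-device speech capture + compression": 50,
    "uplink to a data center 2,000 km away": light_delay_ms(2_000),
    "cloud speech recognition + intent parsing": 200,
    "side trip to a weather API (2,000 km round trip)": 2 * light_delay_ms(2_000),
    "downlink back to the device": light_delay_ms(2_000),
    "on-device audio playback start": 30,
}

total = sum(budget.values())
for stage, ms in budget.items():
    print(f"{stage}: {ms:.1f} ms")
print(f"total: {total:.1f} ms")
```

Note that even with these generous assumptions, the propagation delay itself is small; it's the cloud-side processing and the extra hops that dominate, which is exactly the work edge computing tries to move onto the device.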

So a recent rumor that Amazon is working on its own AI chips for Alexa should come as no surprise. The more processing Amazon can do on your local Echo device, the less it has to rely on the cloud. That means you get quicker replies, Amazon's server costs are lower, and conceivably, if enough of the work is done locally, you could end up with more privacy, if Amazon is feeling magnanimous.

Privacy and security

It might be weird to think of it this way, but the security and privacy features of an iPhone are a well-accepted example of edge computing. Simply by doing encryption and storing biometric information on the device, Apple offloads a ton of security concerns from the centralized cloud onto its diasporic users' devices.

But the other reason this feels like edge computing to me, not personal computing, is that while the compute work is distributed, the definition of the compute work is managed centrally. You didn't have to cobble together the hardware, software and security best practices to keep your iPhone secure. You just paid $999 at the phone store and trained it to recognize your face.

The management aspect of edge computing is very important for security. Think about how much pain and suffering consumers have experienced with poorly managed Internet of Things devices.

As @SwiftOnSecurity has said:

That's why Microsoft is working on Azure Sphere, which combines a managed Linux operating system, a certified microcontroller and a cloud service. The idea is that your toaster should be as difficult to hack, and as centrally updated and managed, as your Xbox.

I have no idea whether the industry will embrace Microsoft's specific solution to the IoT security problem, but it seems like an easy guess that most of the hardware I buy in a few years will have its software updated automatically and its security managed centrally. Because otherwise your toaster and dishwasher will join a botnet and ruin your life.

If you doubt me, just look at the success Google, Microsoft and Mozilla have had in moving their browsers to an "evergreen" model.

Think about it: you could probably tell me which version of Windows you're running. But do you know which version of Chrome you have? Edge computing will be more like Chrome, less like Windows.

Bandwidth

Security isn't the only way edge computing will help solve problems that IoT introduced. The other example I see mentioned a lot by edge proponents is the bandwidth savings enabled by edge computing.

For example, if you buy one security camera, you can probably stream all of its footage to the cloud. If you buy a dozen security cameras, you have a bandwidth problem. But if the cameras are smart enough to save only the "important" footage and discard the rest, your internet pipes are saved.
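The camera math is easy to sketch. The bitrate and the "important footage" fraction below are illustrative assumptions, not specs of any real camera:

```python
# Back-of-the-envelope bandwidth math: cloud vs. edge security cameras.
# Bitrate and the "important" fraction are illustrative assumptions.

CAMERAS = 12
STREAM_MBPS = 4.0          # assumed bitrate of one 1080p camera stream
IMPORTANT_FRACTION = 0.02  # assume 2% of footage has motion worth keeping

# Naive approach: every camera streams everything to the cloud, 24/7.
cloud_mbps = CAMERAS * STREAM_MBPS

# Edge approach: cameras run motion detection locally and only upload
# the "important" clips.
edge_mbps = cloud_mbps * IMPORTANT_FRACTION

print(f"always-on upload: {cloud_mbps:.1f} Mbps")
print(f"edge-filtered upload: {edge_mbps:.2f} Mbps (average)")

# Daily upload volume in gigabytes (Mbps * seconds per day / 8 bits / 1000 MB)
daily_gb = cloud_mbps * 86_400 / 8 / 1000
print(f"always-on volume: {daily_gb:.0f} GB/day")
```

Under these assumptions a dozen always-streaming cameras would push roughly half a terabyte a day upstream, while edge filtering cuts the average upload by whatever fraction of footage is actually worth keeping.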

Almost any technology that's applicable to the latency problem is applicable to the bandwidth problem. Running AI on a user's device instead of doing it all in the cloud seems to be a huge focus for Apple and Google right now.

But Google is also working hard at making websites more edge-y. Progressive Web Apps typically have offline-first functionality. That means you can open a "website" on your phone without an internet connection, do some work, save your changes locally, and only sync up with the cloud when it's convenient.
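The offline-first pattern can be sketched in a few lines. This is a toy model: the class and method names are invented for illustration, and an in-memory dict stands in for both device storage and the remote service.

```python
# Toy sketch of the offline-first pattern behind Progressive Web Apps:
# writes land in local storage immediately, queued changes sync to the
# cloud only when a connection is available. All names are illustrative.

class OfflineFirstStore:
    def __init__(self):
        self.local = {}      # device-side copy, always writable
        self.pending = []    # changes not yet pushed to the cloud
        self.cloud = {}      # stand-in for the remote service

    def save(self, key, value):
        """Write locally right away; queue the change for later sync."""
        self.local[key] = value
        self.pending.append((key, value))

    def sync(self, online: bool) -> int:
        """Push queued changes when (and only when) we're online."""
        if not online:
            return 0
        pushed = len(self.pending)
        for key, value in self.pending:
            self.cloud[key] = value
        self.pending.clear()
        return pushed

store = OfflineFirstStore()
store.save("draft", "notes written on the subway")
store.sync(online=False)        # no connection: the change stays queued
synced = store.sync(online=True)  # back online: queued change is pushed
print(f"synced {synced} change(s); cloud now has {store.cloud}")
```

A real PWA does this with service workers and IndexedDB rather than Python objects, but the shape of the logic, local write first, deferred sync, is the same.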

Google is also getting smarter about combining local AI features for the purposes of privacy and bandwidth savings. For instance, Google Clips keeps all your data local by default and does its magical AI inference locally. It doesn't work very well at its stated purpose of capturing cool moments from your life. But, conceptually, it's quintessential edge computing.

All of the above

Self-driving cars are, as far as I can tell, the ultimate example of edge computing. Because of latency, privacy and bandwidth, you can't feed all of a car's numerous sensors up to the cloud and wait for a response. Your trip can't survive that kind of latency, and even if it could, the cellular network is too inconsistent to rely on for this kind of work.

But cars also represent a full shift away from user responsibility for the software they run on their devices. A self-driving car almost has to be managed centrally. It needs to receive updates from the manufacturer automatically, it needs to send processed data back to the cloud to improve the algorithm, and the nightmare scenario of a self-driving car botnet makes the toaster-and-dishwasher botnet we were worried about look like a Disney movie.

What do we give up?

I have a few fears about edge computing that are hard to articulate, and possibly unfounded, so I won't dive into them fully.

But the big picture is that the companies that do edge computing well will control even more of your life experiences than they do right now.

When the devices in your home and garage are managed by GoogleAmazonMicrosoftApple, you don't have to worry about security. You don't have to worry about updates. You don't have to worry about functionality. You don't have to worry about capabilities. You'll just take what you're given and use it the best you can.

In this worst-case world, you'd wake up in the morning and ask AlexaSiriCortanaAssistant what new features your corporate overlords pushed to your toaster, dishwasher, car and phone overnight. In the personal computer era, you would "install" software. In the edge computing era, you'll only ever use it.

It might be up to the big companies to decide how much control they want to take over their users' lives. But it could also be up to users to decide whether there's another way to build the future. Yes, it's a relief to take your hands off the wheel and let Larry Page do the driving. But what if you don't like where you're going?