Edge Computing Is the Future, Satya Says in Distinguished Lecture


Smartphones and sensors collect more data than current networks can carry to distant datacenters for processing. To analyze that data faster, more cheaply, and more securely, computation needs to move closer to where the data is generated. Processing at the edge of the network is known as edge computing, and it’s becoming the new paradigm.

Carnegie Mellon Professor Mahadev (Satya) Satyanarayanan delivered the School of Computer Science Distinguished Lecture on the future potential of edge computing on Friday, Oct. 11. In his talk, "Edge Computing: A New Disruptive Force," he discussed why edge computing is coming to dominate the field, how it can improve people’s lives, and what progress can be expected.

Networks have changed drastically in the past decade. The cloud was seen as the biggest innovation, but now computation happens everywhere, from cloudlets to mobile devices.

Satya broke the network down into three tiers:

  1. Tier 1: cloud
  2. Tier 2: cloudlets (such as aircraft, vehicles, mounted racks)
  3. Tier 3: Mobile and Internet of Things (IoT) devices (such as smartphones, sensors, augmented and virtual reality, drones, wearable computing)

It’s important to note that while tier 3 collects vast amounts of data, it’s not powerful enough to analyze it on its own. However, when that data is offloaded to tier 2, important advances can be made in the world of edge computing.
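The offload decision described above can be sketched in code. The following is a minimal illustration, not anything presented in the talk: the tier names match Satya's breakdown, but the latency and compute figures are invented assumptions chosen only to show why a nearby cloudlet (tier 2) typically wins over both the distant cloud (tier 1) and the underpowered device (tier 3).

```python
# Illustrative sketch of a tier-3 -> tier-2 offload decision.
# All latency and compute figures below are assumptions for
# demonstration, not measurements from the lecture.

TIER_LATENCY_MS = {
    "cloud": 80.0,     # tier 1: distant datacenter, long network path
    "cloudlet": 5.0,   # tier 2: server-class hardware one hop away
    "device": 0.0,     # tier 3: no network hop at all
}

DEVICE_COMPUTE_MS = 400.0   # tier-3 hardware is slow for heavy analytics
SERVER_COMPUTE_MS = 20.0    # tiers 1 and 2 run server-class hardware

def response_time_ms(tier: str) -> float:
    """Round-trip network latency plus compute time at the chosen tier."""
    compute = DEVICE_COMPUTE_MS if tier == "device" else SERVER_COMPUTE_MS
    return 2 * TIER_LATENCY_MS[tier] + compute

def best_tier() -> str:
    """Pick the tier with the lowest end-to-end response time."""
    return min(TIER_LATENCY_MS, key=response_time_ms)

if __name__ == "__main__":
    for tier in TIER_LATENCY_MS:
        print(f"{tier}: {response_time_ms(tier):.0f} ms")
    print("best:", best_tier())
```

Under these assumed numbers, the cloudlet's combination of short network path and server-class compute beats both alternatives, which is the core argument for tier 2.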

The edge could offer many potential advancements, according to Satya:

  1. Bandwidth-efficient analytics for IoT, since video and sensor analytics can run close to the collection point instead of streaming raw data to the cloud
  2. Highly responsive, cloud-like services, thanks to the lower latency the edge provides
  3. An "exposure firewall" for IoT: sensor data can be scrubbed of privacy-sensitive information near its source, before it ever leaves the premises
  4. Availability of services when the cloud is unreachable, as in a natural disaster or a remote military operation
  5. Compliance with data export restrictions, since computation stays in the domain where the data was captured
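The "exposure firewall" idea in the list above can be made concrete with a small sketch. This is a hypothetical illustration, assuming made-up field names (`face_crop`, `audio`, `gps`): sensitive fields are stripped from sensor readings at the edge, so only scrubbed data crosses into the cloud.

```python
# Hypothetical "exposure firewall" sketch: scrub privacy-sensitive
# fields from sensor readings at the edge before forwarding anything
# to the cloud. Field names are illustrative assumptions.

SENSITIVE_FIELDS = {"face_crop", "audio", "gps"}

def scrub(reading: dict) -> dict:
    """Return a copy of a sensor reading with sensitive fields removed."""
    return {k: v for k, v in reading.items() if k not in SENSITIVE_FIELDS}

def forward_to_cloud(readings: list) -> list:
    """Only scrubbed readings ever cross the edge boundary."""
    return [scrub(r) for r in readings]
```

The point of doing this at tier 2 rather than in the cloud is that the raw, privacy-sensitive data never has to leave the site where it was captured.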

“These are by definition future apps that have to be written that let go of the training wheels and are willing to be totally dependent on the properties that only the edge can provide,” Satya said.

Despite the edge’s potential, many apps running at the edge take very little advantage of it.

“The only way to make edge work is to actually do it,” Satya said.

His research group has been exploring the power of low latency at the edge with augmented reality. Offloading to the edge not only reduces response time but also extends battery life, since heavy computation moves off the device, and it improves the user experience.

“It has the look and feel of augmented reality, but with the functionality of artificial intelligence,” he said.

With this in mind, the group has been building wearable cognitive assistance systems that offload computation to a cloudlet. This enables scene analysis, object and person recognition, speech recognition, language translation, and planning, all applications where low latency is critical.
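The control loop of such a system can be sketched as follows. This is a toy model, not the group's actual software: a real cloudlet would run vision models on camera frames, whereas here `Frame` and `cloudlet_analyze` are hypothetical stand-ins that compare step numbers to produce the kind of task-specific guidance described below.

```python
# Toy sketch of a wearable-cognitive-assistance loop. The device
# (tier 3) streams frames to a cloudlet (tier 2), which returns
# guidance. Names and logic are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Frame:
    """Stand-in for a camera frame from a wearable device."""
    step_visible: int  # which task step the camera currently sees

def cloudlet_analyze(frame: Frame, expected_step: int) -> str:
    """Stand-in for heavy scene analysis offloaded to the cloudlet."""
    if frame.step_visible < expected_step:
        return "redo previous step"
    if frame.step_visible > expected_step:
        return "slow down: you skipped a step"
    return f"step {expected_step} looks correct, continue"

def guidance_loop(frames, total_steps: int) -> list:
    """Device loop: send each frame to the cloudlet, collect guidance."""
    expected = 1
    messages = []
    for frame in frames:
        msg = cloudlet_analyze(frame, expected)
        messages.append(msg)
        if "correct" in msg:
            expected += 1
        if expected > total_steps:
            break
    return messages
```

The round trip inside the loop is exactly where low latency matters: feedback arriving a second late is useless to someone mid-task, which is why the analysis runs on a nearby cloudlet rather than the distant cloud.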

“Human cognition is fast, accurate, and robust, but to be superhuman, we need to beat those speeds,” he said.

The group has created an app that gives task-specific assistance for difficult activities such as cooking or assembling furniture. The real-time, context-sensitive feedback the edge enables provides value that users couldn’t get from just reading instructions or watching a video, Satya said.

The implications are enormous and could improve everything from GPS to elder care.

“This is not some distant future; this is reality,” Satya said. “It may not be commercial reality, but there is no doubt about technical feasibility of systems I am talking about.”


Tess Malone, Communications Officer