School of Computer Science Assistant Professor Alexandros Daglis received a Google Faculty Research Award for his work in systems. He is one of three College of Computing faculty members to be honored.
For the past 14 years, Google has recognized more than 100 faculty members across the U.S. each year for cutting-edge research in top computing areas, from machine learning to quantum computing. The program funds one graduate student and enables Google employees to collaborate with top academic institutions.
“I am particularly excited to receive this award,” Daglis said, “not only because it is the first form of direct support I’ve received from the industry, but also because I see it as a practical recognition of this research direction beyond academia, indicating a potential practical impact on datacenter systems in the near future.”
Daglis’s work focuses on using hardware specialization to make datacenters run more efficiently. By upgrading the datacenter network from a passive to an active system component, Daglis can exploit new relationships between incoming network traffic and CPU processing to improve processing efficiency, reduce response latency, and ultimately deliver better online services.
Datacenters are fundamental to the internet because they give millions of users access to online services at scale. Traditionally, a datacenter has been built from thousands of central processing units (CPUs) communicating across a network. As demand increased, datacenters kept up by growing in size and continuously upgrading their CPUs, but this approach is now hitting performance scaling limits because of the breakdown of silicon scaling trends, namely the end of Dennard scaling and the slowdown of Moore’s law.
“CPU performance improvements alone are not sufficient anymore to keep up with booming user bases, exploding datasets, and new demanding services,” Daglis said. “This realization has led to the ongoing phenomenon of hardware specialization.”
With this in mind, Daglis’s research aims to upgrade the datacenter network from a traditional passive role that merely delivers data between two endpoints to an active component that performs smarter data manipulation. His emphasis is on the network’s endpoints – particularly modern programmable network interfaces, which can be leveraged to accelerate common inter-CPU communication patterns within the datacenter.
“The main underlying idea is that by judiciously exposing some application-level semantics to the network interface,” Daglis said, “new synergies between the incoming network traffic and the processing performed on the CPUs can emerge.”