Open source software is ubiquitous in the computer industry; even the largest companies rely on it. Open source hardware, however, is much harder to find. So three and a half years ago, Assistant Professor Hadi Esmaeilzadeh, the inaugural holder of the Allchin Family Early Career Professorship, created a research lab in the School of Computer Science at Georgia Tech to bridge the gap.
“I have taken the red pill; I’d rather see how far I can push the freedom for intelligent system design,” Esmaeilzadeh said, referring to his favorite movie The Matrix. “When it comes to hardware, we have such a monopoly because there is very little open sourced. That’s such a tough barrier to entry. As an academic, I have the luxury of not caring about money, so I can break the cycle and provide open source hardware.”
His Alternative Computing Technologies (ACT) Lab now has one master's and five Ph.D. students, split evenly between hardware and software. They collaborate on research across the whole stack, from hardware to software, and have created six tools. Last May, the team made two of these tools open source: Tabla and DnnWeaver.
Tabla is a framework that generates accelerators for machine learning algorithms. It does this with a compiler that translates a domain-specific language into lower-level abstractions closer to the hardware, effectively auto-generating the hardware.
“We want to expose people who don’t have enough knowledge about hardware to be able to easily generate hardware for machine learning and robotics,” said Divya Mahajan, a fourth-year School of Computer Science (SCS) Ph.D. student, who worked on the hardware components of Tabla. Her software research counterpart is SCS master’s student Joon Kyung Kim, who started working with Esmaeilzadeh as an undergraduate at Tech.
Tabla has been a springboard for further research into machine learning algorithms. Fifth-year SCS Ph.D. student Jongse Park has been working on a distributed version of the tool that can be used to better train a machine to improve the accuracy of its predictions. He recently published the first compute stack for the scale-out acceleration of machine learning.
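The compiler idea behind Tabla can be illustrated with a minimal sketch. This is not Tabla's actual DSL or API; it is a hypothetical toy showing how a high-level learning rule (here, the gradient of a linear-regression loss) can be lowered into a flat sequence of primitive operations that a hardware generator could then map onto circuit elements.

```python
# Hypothetical sketch, NOT the real Tabla DSL: a high-level learning
# rule is expressed as an expression tree, and a toy "compiler" lowers
# it to an ordered list of primitive ops (the abstraction a hardware
# generator would schedule onto multipliers and adders).
from dataclasses import dataclass

@dataclass
class Op:
    kind: str      # "mul", "sub", or "sum"
    args: tuple

def gradient_spec(w, x, y):
    """High-level spec: per-feature gradient of a linear-regression
    loss, i.e. (w . x - y) * x_i for each feature i."""
    dot = Op("sum", tuple(Op("mul", (wi, xi)) for wi, xi in zip(w, x)))
    err = Op("sub", (dot, y))
    return [Op("mul", (err, xi)) for xi in x]

def lower(ops):
    """Flatten the expression trees into a post-order schedule of
    primitive operation kinds; leaves (plain strings) are operands."""
    schedule = []
    def visit(node):
        if isinstance(node, Op):
            for a in node.args:
                visit(a)
            schedule.append(node.kind)
    for op in ops:
        visit(op)
    return schedule

# Two weights/features produce a schedule of muls, a sum, a sub,
# then the final per-feature mul (the error subtree is revisited
# per feature in this naive lowering).
sched = lower(gradient_spec(["w0", "w1"], ["x0", "x1"], "y"))
```

A real flow would add common-subexpression sharing and map each scheduled op to a hardware resource; this sketch only shows the DSL-to-lower-abstraction step the article describes.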
DnnWeaver stands for deep neural network weaver. The tool is a framework for accelerating deep neural networks on field-programmable gate arrays (FPGAs). It can automatically generate customized accelerators, not just the hardware but the entire stack, for any deep neural network a programmer needs.
“We wanted to not involve programmers in hardware design at all,” said Hardik Sharma, the main researcher, a second-year Ph.D. student in the School of Electrical and Computer Engineering (ECE). “We do all the work for you, and it’s just a button to push for the programmers.”
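The "push-button" idea can be sketched in a few lines. This is a hypothetical illustration, not DnnWeaver's real interface or file format: the programmer supplies only a network description, and the tool derives a per-layer work estimate of the kind a generator would use to size the FPGA compute array.

```python
# Hypothetical sketch of a DnnWeaver-style push-button flow. The layer
# dictionaries and the macs() helper are illustrative assumptions, not
# DnnWeaver's actual API.

# The only input the programmer writes: a plain network description.
layers = [
    {"type": "conv", "in": (32, 32, 3),  "filters": 16, "kernel": 3},
    {"type": "conv", "in": (30, 30, 16), "filters": 32, "kernel": 3},
    {"type": "fc",   "in": (28 * 28 * 32,), "out": 10},
]

def macs(layer):
    """Multiply-accumulate count per layer -- the workload figure an
    accelerator generator would use to allocate compute units."""
    if layer["type"] == "conv":
        h, w, c = layer["in"]
        k = layer["kernel"]
        oh, ow = h - k + 1, w - k + 1   # valid convolution, stride 1
        return oh * ow * layer["filters"] * k * k * c
    return layer["in"][0] * layer["out"]  # fully connected layer

# Everything past this point is automatic: no hardware design involved.
plan = [(layer["type"], macs(layer)) for layer in layers]
```

From a plan like this, a real generator would go on to emit the accelerator hardware itself; the point of the sketch is only that the programmer's job ends at the network description.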
The lab has several other projects. Fourth-year SCS Ph.D. student Amir Yazdanbakhsh has worked on both NPiler (an NPU compilation workflow that automatically converts annotated code to neural networks) and Axilog (a set of language annotations and automatic analyses for approximate hardware design). Third-year ECE Ph.D. student Jake Sacks is currently working on RoboX, which applies a Tabla-like approach to robotics: a domain-specific language tailored to the needs of autonomous robots, paired with automatic hardware generation similar to Tabla's.
These aren’t just individual breakthroughs stuck in an academic silo. Open sourcing ensures others can realize the full potential of these frameworks.
“We provided the frontier of machine learning hardware acceleration freely to whoever wants to use it,” Esmaeilzadeh said.
This project has been funded with the support of the Air Force Office of Scientific Research, the National Science Foundation, Microsoft, Google, Qualcomm, Intel/Altera, and Xilinx. This article does not reflect the opinions of these entities.