Deep learning models, a form of machine learning (ML), are used in decision making for many daily tasks, such as fraud detection, diagnosing early signs of heart failure, and identifying road signs for self-driving cars.
However, deep learning models are vulnerable to attack: adversaries can trick a model by modifying the combinatorial structure of the graph-structured data, also known as networks, that it receives.
A team of researchers from Georgia Tech, Ant Financial, and Tsinghua University aims to identify how deep learning over networks could be manipulated by such adversarial attacks. They focus on graph neural network (GNN) models, which are often used to detect fraudulent activity and are therefore prime targets for these attacks.
The premise of any adversarial problem: Learn how a model could be attacked by attacking it first, then fix the flaws found along the way to reinforce the system.
“What we studied in this paper is an adversarial problem: Given an effective deep learning method over graphs, can we modify the network in an unnoticeable way, such that the deep learning method fails in this case?” said School of Computational Science and Engineering (CSE) Ph.D. student Hanjun Dai.
In the case of deep learning on graph structures, the need to reinforce these systems against potential attacks is critical, as their uses are prevalent and widespread.
“What we show is that we can change the transaction network a little bit, which changes the behavior of the machine,” CSE Associate Professor and Associate Director of the Machine Learning Center at Georgia Tech Le Song said. “For example, in the case of financial applications, I could transfer some money somewhere else, which changes the ML model to make the wrong prediction.”
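The idea can be illustrated with a deliberately simplified sketch (not the paper's actual attack, which targets trained GNNs): a toy classifier that labels a node by a majority vote over its neighbors' labels, and an adversarial edit that adds a couple of edges in a hypothetical transaction graph to flip the prediction. All node names and labels here are made up for illustration.

```python
# Toy illustration of an adversarial edge insertion on a graph classifier.
# This is NOT the method from the paper; it is a minimal sketch of the idea
# that small structural changes can flip a graph-based prediction.

def predict(adj, labels, node):
    """Classify `node` by the majority label among its neighbors."""
    votes = [labels[n] for n in adj[node]]
    return max(set(votes), key=votes.count)

# Hypothetical transaction graph: account 0's neighbors are mostly legitimate.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}, 4: set(), 5: set()}
labels = {1: "legit", 2: "legit", 3: "fraud", 4: "fraud", 5: "fraud"}

print(predict(adj, labels, 0))  # votes are {legit, legit, fraud} -> "legit"

# Adversarial edit: add two edges connecting node 0 to fraud-labeled nodes.
for v in (4, 5):
    adj[0].add(v)
    adj[v].add(0)

print(predict(adj, labels, 0))  # votes now majority fraud -> "fraud"
```

Two new edges are enough to change the output here; the paper studies the harder version of this question, where the classifier is a trained deep model and the attacker must search for small, effective structural modifications.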
Deep learning models are particularly vulnerable to this type of adversarial manipulation, an issue that is currently being addressed across fields and methods for applications such as image recognition. But, according to Dai and Song, until now little attention has been paid to these models' interpretability, or the mechanisms by which they make their decisions, which makes them risky to deploy in some financial or security-related applications.
According to Dai, “This study is highly related to the robustness and reliability of the deep learning method, and we are the first to study such a problem over combinatorial structures, such as networks.”
“I think the networks express rich combinatorial knowledge about the world,” Dai said. “For example, social networks express the knowledge about user relationships; knowledge graphs capture the logical concepts over entities. On the other hand, deep learning learns the knowledge in a continuous but opaque way. How to combine the clean, hard rules from networks with the black-box deep learning is the future of this direction of research.”
Dai and Song are set to present the findings of this paper, along with five other research papers from Song’s research teams, at the International Conference on Machine Learning (ICML) 2018:
Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen (UC Berkeley) · Le Song (Georgia Institute of Technology) · Martin Wainwright (UC Berkeley) · Michael Jordan (UC Berkeley)
Adversarial Attack on Graph Structured Data
Hanjun Dai (Georgia Tech) · Hui Li (Ant Financial Services Group) · Tian Tian · Xin Huang (Ant Financial) · Lin Wang · Jun Zhu (Tsinghua University) · Le Song (Georgia Institute of Technology)
Towards Black-box Iterative Machine Teaching
Weiyang Liu (Georgia Tech) · Bo Dai (Georgia Institute of Technology) · Xingguo Li (University of Minnesota) · Zhen Liu (Georgia Tech) · James Rehg (Georgia Tech) · Le Song (Georgia Institute of Technology)
Learning Steady-States of Iterative Algorithms over Graphs
Hanjun Dai (Georgia Tech) · Zornitsa Kozareva (Amazon) · Bo Dai (Georgia Institute of Technology) · Alex Smola (Amazon) · Le Song (Georgia Institute of Technology)
Bo Dai (Georgia Institute of Technology) · Albert Shaw (Georgia Tech) · Lihong Li (Google Inc.) · Lin Xiao (Microsoft Research) · Niao He (UIUC) · Zhen Liu (Georgia Tech) · Jianshu Chen (Microsoft Research) · Le Song (Georgia Institute of Technology)