Improving Interpretation Faithfulness for Transformers
Di Wang, Assistant Professor, Computer Science
Nov 20, 11:30 - 12:30
B9 L2 H2
transformers
nlp
interpretation faithfulness
Attention mechanisms have become a standard fixture in most state-of-the-art NLP, vision, and GNN models, not only because of the outstanding performance they deliver, but also because they offer a plausible innate explanation for the behavior of neural architectures, which is otherwise notoriously difficult to analyze. However, recent studies show that attention is unstable under randomness and perturbations during training or testing, such as changes of random seed and slight perturbations of inputs or embedding vectors, which prevents it from serving as a faithful explanation tool. A natural question is therefore whether we can find a substitute for the current attention mechanism that is more stable while preserving attention's most important characteristics for explanation and prediction.
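The instability the abstract describes can be seen even in a toy setting. The sketch below (an illustrative NumPy example, not code from the talk; all names and the perturbation scale are assumptions) computes scaled dot-product attention weights for one query over a set of key embeddings, then adds a small random perturbation to the keys and compares the resulting weight distributions:

```python
import numpy as np

def attention_weights(q, K):
    """Scaled dot-product attention weights for one query q over keys K."""
    scores = K @ q / np.sqrt(q.shape[0])
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
d = 16          # embedding dimension (arbitrary choice)
n_keys = 8

q = rng.normal(size=d)
K = rng.normal(size=(n_keys, d))

w = attention_weights(q, K)

# Slightly perturb the key embeddings, mimicking the input/embedding
# perturbations the abstract mentions.
K_pert = K + 0.05 * rng.normal(size=K.shape)
w_pert = attention_weights(q, K_pert)

# How much the explanation (the attention distribution) shifted:
print("max weight change:", np.abs(w - w_pert).max())
print("original top key:", int(w.argmax()), "| perturbed top key:", int(w_pert.argmax()))
```

Because softmax sharpens differences between scores, even a small shift in the embeddings can visibly redistribute the attention mass, which is exactly why attention maps alone make a fragile explanation tool.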