The intersection of Interpretability and Fairness
If you have a question about this talk, please contact Richard Diehl Martinez.
A survey of interpretability methods for neural networks: from gender bias mitigation to interpreting BERT embeddings in a psycholinguistic manner.
Bio:
Giuseppe Attanasio is a postdoctoral researcher affiliated with the Milan Natural Language Processing (MilaNLP) Lab at Bocconi University. His research primarily focuses on large-scale neural architectures for Natural Language Processing.
Attanasio has contributed to various research projects and publications in the field of NLP. Notably, he has worked on topics such as automatic misogyny identification, benchmarking post-hoc interpretability approaches for transformer-based models, and entropy-based attention regularization for bias mitigation.
His work often involves the development and deployment of NLP algorithms to address real-world problems.
This talk is part of the NLIP Seminar Series.