Update: I recently left Google DeepMind and started a faculty job in South Korea.
Hi, I'm an Assistant Professor at Yonsei University in the Department of Computer Science and Engineering, where I co-lead the Human-Data Interaction Lab.
My research aims to empower researchers and practitioners to gain insights through interactive data visualization, enabling them to responsibly develop AI systems. To achieve this goal, I build novel visual analytics tools that help them interactively explore and analyze data in AI systems. The tools I have created have been integrated into workflows for error analysis, bias discovery, model evaluation, and interpretation. One such tool, LLM Comparator, has been used extensively within Google and was featured at Google I/O as part of the Gemini models and the Responsible AI Toolkit.
Before joining Yonsei, I was a Senior Research Scientist at Google DeepMind in the People + AI Research (PAIR) team and an Assistant Professor at Oregon State University. I received my Ph.D. from Georgia Tech, advised by Polo Chau, along with a Dissertation Award and fellowships from NSF and Google.
Areas of expertise: Visual Analytics, Data Visualization, Responsible AI, Explainable AI, Human-Computer Interaction
Research Interests
My research group works at the intersection of data visualization and responsible AI. In particular, we focus on identifying the challenges practitioners face when working with data in AI development and on designing web-based tools that help them uncover new insights from data and models. In doing so, we seek to improve their models, address data biases, and build trust in AI systems.
Potential Research Topics:
- Error Analysis and Evaluation of LLM Outputs: How can we identify categories of prompts where models fail and understand the reasons behind these failures?
- Red Teaming and Safety Testing for AI: What methods can we use to generate adversarial prompts, assess bias and safety risks, and support systematic testing?
- Bias Analysis and Dataset Curation: How can we interactively analyze and iteratively curate datasets to ensure high-quality and diverse datasets for LLM training and evaluation?
- Debugging AI Agents: Can visualizing AI agents' internal reasoning help people understand the sources of unexpected results?
- Video Exploration and Analytics: How can we design interactive tools that enhance large-scale video querying and analysis powered by AI techniques?
Opportunities for Students and Collaborators:
- Students: We are actively looking for undergraduate interns and graduate students interested in joining our research group. For more details, please refer to this link (in Korean).
- Industry/Academic Collaborators: We especially welcome industry researchers and practitioners interested in collaborating with us to tackle their challenges through practical solutions.
Selected Publications (Latest & Greatest)
- LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models
  VIS 2024 (and a preliminary version at CHI 2024 LBW track)
  DOI (VIS) PDF (VIS) DOI (CHI LBW) PDF (CHI LBW) Blog Code
  Deployed on Google's LLM Evaluation Platforms
  Featured at Google I/O on Gemini Open Models and Responsible AI Toolkit
- Adversarial Nibbler: An Open Red-Teaming Method for Identifying Diverse Harms in Text-to-Image Generation
  FAccT 2024
  arXiv PDF Blog
- Automatic Histograms: Leveraging Language Models for Text Dataset Exploration
  CHI 2024 (LBW track)
  DOI arXiv PDF Code
- Understanding the Dataset Practitioners Behind Large Language Models
  CHI 2024 (LBW track)
  DOI arXiv PDF
- VLSlice: Interactive Vision-and-Language Slice Discovery
  ICCV 2023
  arXiv PDF Talk Website
- Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models
  VIS 2023 (Short)
  PDF Code
- DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps
  VIS 2022
  DOI arXiv PDF Twitter Post Demo
- FitVid: Responsive and Flexible Video Content Adaptation
  CHI 2022
  DOI PDF
- One Explanation is Not Enough: Structured Attention Graphs for Image Classification
  NeurIPS 2021
  arXiv PDF
- Contrastive Identification of Covariate Shift in Image Data
  VIS 2021 (Short)
  DOI PDF
- "Why did my AI agent lose?": Visual Analytics for Scaling Up After-Action Review
  VIS 2021 (Short)
  DOI PDF
- CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization
  VIS 2020
  DOI PDF Demo Video
- How Does Visualization Help People Learn Deep Learning? Evaluation of GAN Lab with Observational Study and Log Analysis
  VIS 2020 (Short)
  PDF
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning
  VIS 2019
  DOI PDF Blog Code
- GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation
  VIS 2018
  Open sourced with Google AI
  DOI PDF Slides Code Website
- Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers
  IEEE Transactions on Visualization and Computer Graphics, 25(8), 2018
  Cited more than 600 times
  DOI PDF Website Medium
- ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models
  VIS 2017
  Deployed on Facebook's ML Platform
  DOI PDF Video Slides Website
- Interactive Browsing and Navigation in Relational Databases
  VLDB 2016
  DOI PDF Slides
Employment
- Yonsei University, Seoul, South Korea
  2025-present
  Assistant Professor, Department of Computer Science and Engineering, College of Computing
- Google, Atlanta, GA
  2022-2025
  Senior Research Scientist, People+AI Research (PAIR) Team, Google DeepMind
- Oregon State University, Corvallis, OR
  2019-2022
  Assistant Professor of Computer Science, School of Electrical Engineering and Computer Science
- Google, Cambridge, MA
  Summer 2017
  Software Engineering Intern, People+AI Research (PAIR) Team, Google Brain
- Facebook, Menlo Park, CA
  Summer 2016
  Research Intern, Applied ML Research Group
- Facebook, Menlo Park, CA
  Summer 2015
  Research Intern, Applied ML Research Group
Education
- Ph.D. in Computer Science, Georgia Institute of Technology, Atlanta, GA
  2013-2019
  Thesis: Human-Centered AI through Scalable Visual Data Analytics
  Committee: Polo Chau (Advisor), Sham Navathe, Alex Endert, Martin Wattenberg, and Fernanda Viégas
- M.S. in Computer Science and Engineering, Seoul National University, South Korea
  2009-2011
  Thesis: Context-Aware Recommendation using Learning-to-Rank (Advisor: Sang-goo Lee)
- B.S. in Electrical and Computer Engineering, Seoul National University, South Korea
  2005-2009
Awards
- College of Computing Dissertation Award, Georgia Tech 2021
- Finalist, Facebook Research Award 2021
- ACM Trans. Interactive Intelligent Systems (TiiS) 2018 Best Paper, Honorable Mention 2020
- Google PhD Fellowship, Google AI 2018-2019
- Graduate TA of the Year in School of Computer Science, Georgia Tech 2018
- NSF Graduate Research Fellowship, National Science Foundation 2014-2017
- Best Paper Award, PhD Workshop at CIKM 2011
- National Scholarship for Science and Engineering, Korea Student Aid Foundation 2005-2009
Teaching
- Yonsei University: CSI 7110. Topics in Responsible AI, 2025
- Yonsei University: CAS 4150. Introduction to Data Visualization, 2025
- Oregon State University: CS 499/549. Visual Analytics, 2022
- Oregon State University: CS 565. Human-Computer Interaction, 2020-2022
- Oregon State University: CS 539. Data Visualization for Machine Learning, 2020
Professional Service
- Conf. Organization:
  VIS 2024-25 (Publication Chairs)
  IDEA@KDD 2018 (Workshop Organizers)
  WSDM 2016 (Web)
- Conf. PC:
  VIS (2020-present)
  IUI (2019-present)
  AAAI (2021-22)
  SDM (2020)
  WSDM (2022 Demo)
  CIKM (2019 Demo)
- Journal Reviewer:
  IEEE Transactions on Visualization and Computer Graphics (TVCG)
  ACM Transactions on Interactive Intelligent Systems (TiiS)
  ACM Transactions on Computer-Human Interaction (TOCHI)
  ACM Transactions on Intelligent Systems and Technology (TIST)
  Distill
- Conf. Reviewer:
  CHI (2014, 17-18, 21-22, 24-25), UIST (2023), CSCW (2020), VIS (2018-20), EuroVis (2018, 25), KDD (2014-16), SDM (2014, 16-17), IUI (2016), RecSys (2016), SIGMOD (2013), DASFAA (2011)
- Grant Reviewer:
  NSF Review Panelist (CISE III)
  Google Academic Research Awards
Bio
Minsuk Kahng is an Assistant Professor in the Department of Computer Science and Engineering
at Yonsei University in South Korea.
His research aims to empower researchers and practitioners to gain insights through
interactive data visualization, enabling the responsible development of AI systems.
To achieve this, he builds novel visual analytics tools that help them
interpret model behavior and explore large datasets.
Kahng publishes papers at the top venue in the field of data visualization (IEEE VIS),
as well as at premier conferences in the fields of AI, Human-Computer Interaction, and Responsible Computing.
His research has led to deployed technologies (e.g., LLM Comparator for Google, ActiVis for Facebook),
has been recognized with prestigious awards,
including a Google PhD Fellowship and an NSF Graduate Research Fellowship,
and has been supported by NSF, DARPA, Google, and NAVER.
Before joining Yonsei, Minsuk was a Senior Research Scientist at Google DeepMind
in the People + AI Research (PAIR) team and an Assistant Professor at Oregon State University.
He received his Ph.D. from Georgia Tech with a Dissertation Award.
Website: https://minsuk.com
CV: https://minsuk.com/minsuk-kahng-cv.pdf