
Labora

Visual Explanations for Machine Learning Models

Interactive visualization techniques that help domain experts understand, audit, and debug machine learning models without requiring ML expertise.

Lead: John Smith

As ML models are increasingly deployed in high-stakes domains such as medicine, law, and urban planning, the need for interpretable explanations grows. This project builds visual interfaces that expose model internals to domain experts who need to trust, challenge, or correct automated decisions.

Our approach combines dimensionality reduction, attention visualization, and contrastive explanation techniques into unified interactive environments.
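As a minimal sketch of the dimensionality-reduction step, the snippet below projects a model's high-dimensional internal activations down to 2D with plain PCA so they can be plotted and inspected. The function name and the randomly generated activations are illustrative assumptions, not part of the project's actual pipeline.

```python
import numpy as np

def project_activations(activations, n_components=2):
    """Project high-dimensional model activations to a low-dimensional
    space via PCA (a simple stand-in for the dimensionality-reduction
    step used in the interactive views)."""
    # Center the data, then take the top principal directions via SVD.
    centered = activations - activations.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Hypothetical activations: 100 samples x 64 hidden units.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 64))
coords = project_activations(acts)  # shape (100, 2), ready to scatter-plot
```

The 2D coordinates can then be fed to any scatter-plot view, where each point corresponds to one input example.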


Related Publications

Visual Analysis of Urban Mobility Patterns Using GPS Trajectory Data

Jane Doe, Carlos Mendes, Ana Lima

IEEE VIS, 2024


Related Software

GraphLens
javascript machine-learning explainability

A browser-based interface for interactively exploring attention maps, feature attributions, and decision boundaries of trained ML models.
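One model-agnostic way to compute the kind of feature attributions GraphLens displays is occlusion: replace one input feature at a time with a baseline value and record how much the model's output drops. The sketch below illustrates this idea; the `predict` callable and the toy linear model are assumptions for demonstration, not GraphLens's actual API.

```python
import numpy as np

def occlusion_attribution(predict, x, baseline=0.0):
    """Attribute a model's scalar output to each input feature by
    occluding the feature (setting it to `baseline`) and measuring
    the change in the prediction."""
    base_score = predict(x)
    scores = np.empty(x.size)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline  # knock out one feature
        scores[i] = base_score - predict(perturbed)
    return scores

# Toy linear "model": attributions recover weight * input exactly.
w = np.array([2.0, -1.0, 0.5])
predict = lambda x: float(w @ x)
attr = occlusion_attribution(predict, np.array([1.0, 1.0, 1.0]))
# attr == [2.0, -1.0, 0.5]
```

For a linear model the occlusion scores equal the weighted inputs, which makes this a useful sanity check before applying the same view to a deep model.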