Visual Explanations for Machine Learning Models
Interactive visualization techniques that help domain experts understand, audit, and debug machine learning models without requiring ML expertise.
Lead: John Smith
As ML models are increasingly deployed in high-stakes domains such as medicine, law, and urban planning, the need for interpretable explanations grows. This project builds visual interfaces that expose model internals to domain experts who need to trust, challenge, or correct automated decisions.
Our approach combines dimensionality reduction, attention visualization, and contrastive explanation techniques into unified interactive environments.
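As an illustration of two of these building blocks (a sketch, not the project's actual code), the snippet below projects hypothetical hidden activations to 2-D with PCA for a scatter view, and normalizes raw attention scores into a row-stochastic heatmap. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def pca_2d(activations: np.ndarray) -> np.ndarray:
    """Project high-dimensional activations to 2-D for a scatter view."""
    centered = activations - activations.mean(axis=0)
    # SVD yields the principal directions; keep the first two components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

def attention_heatmap(scores: np.ndarray) -> np.ndarray:
    """Normalize raw attention scores row-wise with a softmax."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 64))    # hypothetical hidden activations
coords = pca_2d(acts)                # (100, 2) points, ready to plot
attn = attention_heatmap(rng.normal(size=(8, 8)))  # each row sums to 1
```

In an interactive environment, `coords` would feed a linked scatter plot and `attn` a heatmap widget, letting a domain expert brush between the two views.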
Related Publications
Jane Doe, Carlos Mendes, Ana Lima
IEEE VIS, 2024
Related Software
A browser-based interface for interactively exploring attention maps, feature attributions, and decision boundaries of trained ML models.