Learning Interpretability Tool
Take LIT for a spin!
Get a feel for LIT in a variety of hosted demos.
tabular · binary classification
DATA SOURCES
Palmer Penguins
Analyze a tabular data model with LIT, including partial dependence plots and automatic counterfactual discovery.
images · multi-class classification
DATA SOURCES
Imagenette
Analyze an image classification model with LIT, including multiple image salience techniques.
BERT · binary classification · multi-class classification · regression
DATA SOURCES
Stanford Sentiment Treebank, Multi-Genre NLI Corpus, Semantic Textual Similarity Benchmark
Use LIT with any of three tasks from the General Language Understanding Evaluation (GLUE) benchmark suite. This demo contains binary classification (for sentiment analysis, using SST2), multi-class classification (for textual entailment, using MultiNLI), and regression (for measuring text similarity, using STS-B).
BERT · binary classification · notebooks
DATA SOURCES
Stanford Sentiment Treebank
Use LIT directly inside a Colab notebook. Explore binary classification for sentiment analysis using SST2 from the General Language Understanding Evaluation (GLUE) benchmark suite.
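Getting started in a notebook takes only a few lines of Python. The sketch below assumes LIT's notebook API (`lit_nlp.notebook.LitWidget`) and the example GLUE modules bundled with the `lit-nlp` package; the checkpoint path is a placeholder for a fine-tuned SST-2 model, not a real path.

```python
# Minimal sketch of the notebook workflow, assuming LIT's notebook API
# and the example GLUE dataset/model modules that ship with lit-nlp.
from lit_nlp import notebook
from lit_nlp.examples.datasets import glue
from lit_nlp.examples.models import glue_models

# SST-2 validation split as the dataset to explore.
datasets = {"sst_dev": glue.SST2Data("validation")}

# Placeholder path: in practice, download or fine-tune an SST-2
# checkpoint first, then point the model wrapper at it.
models = {"sst_tiny": glue_models.SST2Model("/path/to/sst2-model")}

# Render the full LIT UI inline in the notebook output cell.
widget = notebook.LitWidget(models, datasets, height=800)
widget.render()
```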
BERT · coreference · fairness · Winogender
DATA SOURCES
Winogender schemas
Use LIT to explore gendered associations in a coreference system, which matches pronouns to their antecedents. This demo highlights how LIT works with structured prediction models (edge classification) and supports disaggregated analysis.
BERT · masked language model
DATA SOURCES
Stanford Sentiment Treebank, Movie Reviews
Explore a BERT-based masked language model. See what tokens the model predicts should fill in the blank when any token from an example sentence is masked out.
T5 · generation
DATA SOURCES
CNN / Daily Mail
Use a T5 model to summarize text. For any example of interest, quickly find similar examples from the training set, using an approximate nearest-neighbors index.
BERT · salience evaluation
DATA SOURCES
Stanford Sentiment Treebank, Toxicity
Explore the faithfulness of input salience methods on a BERT-base model across different datasets and artificial shortcuts.