Language Interpretability Tool
Take the Language Interpretability Tool for a spin!
Get a feel for LIT in a variety of hosted demos.
BERT, binary classification, multi-class classification, regression
DATA SOURCES
Stanford Sentiment Treebank, Multi-Genre NLI Corpus, Semantic Textual Similarity Benchmark
Use LIT with any of three tasks from the General Language Understanding Evaluation (GLUE) benchmark suite. This demo contains binary classification (for sentiment analysis, using SST-2), multi-class classification (for textual entailment, using MultiNLI), and regression (for measuring text similarity, using STS-B).
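If you prefer to run a demo like this locally, the sketch below uses LIT's Python server API. It assumes the lit_nlp package and its bundled GLUE example datasets and models; the constructor arguments and the model checkpoint path are placeholders and may differ across LIT versions.

    from lit_nlp import dev_server
    from lit_nlp import server_flags
    from lit_nlp.examples.datasets import glue
    from lit_nlp.examples.models import glue_models

    # SST-2 sentiment data plus a fine-tuned BERT classifier.
    # The checkpoint path is a placeholder; point it at your own model.
    datasets = {"sst_dev": glue.SST2Data("validation")}
    models = {"sst2": glue_models.SST2Model("/path/to/sst2_model")}

    # Start the LIT server and open the UI in a browser.
    lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
    lit_demo.serve()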
BERT, coreference, fairness, Winogender
DATA SOURCES
Winogender schemas
Use LIT to explore gendered associations in a coreference system, which matches pronouns to their antecedents. This demo highlights how LIT can work with structured prediction models (edge classification), and its capability for disaggregated analysis.
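As a toy illustration of what "disaggregated analysis" means here (this is not LIT code, and the column names and values are hypothetical), you can slice a metric such as coreference accuracy by the gender of the pronoun in each example:

    import pandas as pd

    # Hypothetical per-example results, sliced by pronoun gender.
    results = pd.DataFrame({
        "pronoun_gender": ["female", "male", "female", "male", "neutral"],
        "correct":        [1, 1, 0, 1, 1],
    })
    print(results.groupby("pronoun_gender")["correct"].mean())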
BERT, masked language model
DATA SOURCES
Stanford Sentiment Treebank, Movie Reviews
Explore a BERT-based masked-language model. See what tokens the model predicts should fill in the blank when any token from an example sentence is masked out.
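As a rough illustration of the underlying task (using the Hugging Face transformers library rather than LIT itself), filling in a masked token looks like this:

    from transformers import pipeline

    # Ask a BERT masked LM for its top predictions for the blanked-out token.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    for prediction in fill_mask("The acting was [MASK] but the plot dragged."):
        print(prediction["token_str"], round(prediction["score"], 3))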
T5, generation
DATA SOURCES
CNN / Daily Mail
Use a T5 model to summarize text. For any example of interest, quickly find similar examples from the training set, using an approximate nearest-neighbors index.
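For reference, the underlying summarization step (sketched here with the Hugging Face transformers library, not LIT's own model wrappers) amounts to prefixing the input with "summarize:" and decoding; the article text below is a placeholder:

    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    # T5 treats summarization as text-to-text generation with a task prefix.
    tokenizer = T5TokenizerFast.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    article = "..."  # a CNN / Daily Mail article would go here
    inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
    summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=60)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))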