Language Interpretability Tool
The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

Overview of LIT

The Language Interpretability Tool (LIT) is built for researchers and practitioners who want to understand NLP model behavior through a visual, interactive, and extensible interface.

Use LIT to ask and answer questions like:

  • What kind of examples does my model perform poorly on?
  • Why did my model make this prediction? Can the prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?

LIT contains many built-in capabilities but is also customizable, with the ability to add custom interpretability techniques, metrics calculations, counterfactual generators, visualizations, and more.
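
As a rough illustration of that extensibility, here is a minimal sketch of a custom counterfactual generator that swaps pronoun gender, assuming the Python API described in the public LIT developer docs: the `lit_nlp.api.components` module path, the `generate` signature, and the `"sentence"` field name are assumptions that may differ across LIT versions.

```python
# Hedged sketch: a custom counterfactual generator for LIT.
# Assumes the lit_nlp package (https://github.com/PAIR-code/lit); the
# module path, base class, and generate() signature follow the public
# developer docs and may vary by version. "sentence" is an assumed
# dataset field name.
from lit_nlp.api import components as lit_components


class PronounSwapGenerator(lit_components.Generator):
  """Creates counterfactuals by swapping pronoun gender in a text field."""

  SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}

  def generate(self, example, model, dataset, config=None):
    text = example.get("sentence", "")
    swapped = " ".join(self.SWAPS.get(tok.lower(), tok) for tok in text.split())
    if swapped == text:
      return []  # No pronouns found; nothing to generate.
    new_example = dict(example)
    new_example["sentence"] = swapped
    return [new_example]
```

A generator like this is typically passed to the LIT demo server alongside the models and datasets (e.g. via a `generators=` mapping), after which its outputs appear in the counterfactual UI; check the docs for your LIT version for the exact wiring.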

For a similar tool to explore general-purpose machine learning models, check out the What-If Tool.

Flexible and powerful model probing

Built-in capabilities

  • Salience maps
  • Attention visualization
  • Metrics calculations
  • Counterfactual generation
  • Model and datapoint comparison
  • Embedding visualization
  • And more...

Supported task types

  • Classification
  • Regression
  • Text generation / seq2seq
  • Masked language models
  • Span labeling
  • Multi-headed models
  • And more...

Framework agnostic

  • TensorFlow 1.x
  • TensorFlow 2.x
  • PyTorch
  • Custom inference code (see the sketch below)
  • Remote Procedure Calls
  • And more...
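
LIT can stay framework agnostic because models are wrapped behind a small Python API rather than tied to any one library. Below is a minimal sketch of the "custom inference code" case, wrapping a trivial rule-based classifier so LIT can serve it; the module paths, spec types, and server call follow the public LIT developer docs, and the field names, labels, and keyword heuristic are illustrative assumptions that may differ across LIT versions.

```python
# Hedged sketch: serving custom (framework-free) inference code in LIT.
# Assumes the lit_nlp package (https://github.com/PAIR-code/lit); module
# paths, spec types, and the dev_server.Server call follow the public
# developer docs and may vary by version. Field names ("sentence",
# "label") and the keyword heuristic are illustrative.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]


class ToyDataset(lit_dataset.Dataset):
  """A tiny in-memory dataset so the example is self-contained."""

  def __init__(self):
    self._examples = [
        {"sentence": "I loved this movie", "label": "positive"},
        {"sentence": "This was a terrible film", "label": "negative"},
    ]

  def spec(self):
    return {
        "sentence": lit_types.TextSegment(),
        "label": lit_types.CategoryLabel(vocab=LABELS),
    }


class KeywordClassifier(lit_model.Model):
  """Custom inference code: no TensorFlow or PyTorch required."""

  def input_spec(self):
    return {"sentence": lit_types.TextSegment()}

  def output_spec(self):
    return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

  def predict_minibatch(self, inputs):
    preds = []
    for ex in inputs:
      score = 1.0 if "loved" in ex["sentence"].lower() else 0.0
      preds.append({"probas": [1.0 - score, score]})
    return preds


if __name__ == "__main__":
  models = {"keyword_clf": KeywordClassifier()}
  datasets = {"toy": ToyDataset()}
  dev_server.Server(models, datasets, **server_flags.get_flags()).serve()
```

The same wrapper pattern covers TensorFlow, PyTorch, or a remote model: `predict_minibatch` can call whatever inference path you already have, as long as its inputs and outputs match the declared specs.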

What’s the latest

CODE

Contribute to LIT

LIT is open to anyone who wants to help develop and improve it!

UPDATES

Latest updates

New features, updates, and improvements to LIT.

RESEARCH

Demo Paper at EMNLP ’20

Read about what went into LIT in our demo paper, presented at EMNLP ’20.