The Language Interpretability Tool (LIT) is a visual, interactive, and extensible tool for researchers and practitioners who want to understand NLP model behavior.
Use LIT to ask and answer questions like:
- What kind of examples does my model perform poorly on?
- Why did my model make this prediction? Can the prediction be attributed to adversarial behavior, or to undesirable priors from the training set?
- Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?
LIT ships with many built-in capabilities but is also customizable, with the ability to add custom interpretability techniques, metric calculations, counterfactual generators, visualizations, and more.
For a similar tool to explore general-purpose machine learning models, check out the What-If Tool.
LIT can be run as a standalone server, or inside Python notebook environments such as Colab and Jupyter.
Features include:

- Model and datapoint comparison
- Supported task types:
  - Text generation / seq2seq
  - Masked language models
- Custom inference code
- Remote Procedure Calls
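To give a feel for the "custom inference code" point above: LIT talks to models through a small wrapper interface, roughly a pair of specs describing inputs and outputs plus a batched predict method. The sketch below only mirrors that shape; the class and spec strings are illustrative stand-ins, not LIT's real base class or typed spec objects.

```python
# Hypothetical sketch of wrapping custom inference code behind a
# spec-plus-predict interface, in the spirit of LIT's model API.
# Names here are illustrative, not the actual lit_nlp classes.

class RuleBasedSentimentModel:
    """Toy stand-in model: positive if more positive than negative words."""

    POSITIVE = {"good", "great", "excellent"}
    NEGATIVE = {"bad", "awful", "terrible"}

    def input_spec(self):
        # Declares what fields each input example carries.
        return {"text": "TextSegment"}

    def output_spec(self):
        # Declares what fields each prediction carries.
        return {"label": "CategoryLabel", "score": "Scalar"}

    def predict(self, inputs):
        # Batched inference: yields one prediction dict per input example.
        for ex in inputs:
            tokens = ex["text"].lower().split()
            score = sum(t in self.POSITIVE for t in tokens) - sum(
                t in self.NEGATIVE for t in tokens)
            yield {"label": "positive" if score > 0 else "negative",
                   "score": float(score)}


model = RuleBasedSentimentModel()
preds = list(model.predict([{"text": "a great excellent movie"},
                            {"text": "an awful plot"}]))
```

In LIT itself the spec entries are typed objects rather than strings, and `predict` would call into your framework of choice; the point is simply that any Python inference code can be exposed to the tool through a wrapper of this shape.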