# Learning Interpretability Tool (LIT)

Welcome to 🔥LIT, the Learning Interpretability Tool!

If you want to jump in and start playing with the LIT UI, check out the hosted demos at https://pair-code.github.io/lit/demos/.
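If you'd rather run LIT on your own machine, a minimal local setup looks like the sketch below. This assumes a Python 3 environment; the exact demo module name and flags can vary between releases, so check the documentation for the version you install.

```shell
# Install LIT from PyPI.
pip install lit-nlp

# Launch one of the bundled demo servers locally, then open
# http://localhost:5432 in a browser. (Demo module names and flags
# may differ by release -- see the docs for your installed version.)
python -m lit_nlp.examples.glue_demo --port=5432
```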

Found LIT useful in your research? Please cite our system demonstration paper!

    @inproceedings{tenney-etal-2020-language,
        title = "The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models",
        author = "Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan",
        booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
        year = "2020",
        publisher = "Association for Computational Linguistics",
        pages = "107--118",
        url = "https://www.aclweb.org/anthology/2020.emnlp-demos.15",
    }