# Getting Started with LIT


Using pip:

```sh
pip install lit-nlp
```

For more details or to install from source, see GitHub.

## Hosted demos

If you want to jump in and start playing with the LIT UI, check out the demos page for links to our hosted demos.

For a guide to the many features available, check out the UI guide or this short video.

## LIT with your model

LIT provides a simple Python API for use with custom models and data, as well as components such as metrics and counterfactual generators. Most LIT users will take this route, which involves writing a short binary to link in Model and Dataset implementations and configure the server. In most cases this can be just a few lines:

```python
datasets = {
    'foo_data': FooDataset('/path/to/foo.tsv'),
    'bar_data': BarDataset('/path/to/bar.tfrecord'),
}
models = {'my_model': MyModel('/path/to/model/files')}
lit_demo = lit_nlp.dev_server.Server(models, datasets, port=4321)
lit_demo.serve()
```

Check out the API documentation for more, and the demos directory for a wealth of examples. The components guide also gives an overview of interpretability methods and other features available in LIT, and describes how to enable each for your task.
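To make the Model/Dataset pattern concrete, here is a schematic sketch of the shape those implementations take. `ToyDataset` and `ToyModel` are hypothetical stand-ins written for illustration, not the real `lit_nlp` base classes: actual implementations subclass LIT's `Dataset` and `Model` APIs and return LIT type objects (rather than plain strings) from their spec methods.

```python
class ToyDataset:
    """Holds examples as a list of dicts, one field per key."""

    def __init__(self, examples):
        self._examples = examples

    def spec(self):
        # Describes the fields that each example provides.
        # (Real LIT specs map names to LIT type objects, not strings.)
        return {'text': 'TextSegment', 'label': 'CategoryLabel'}

    @property
    def examples(self):
        return self._examples


class ToyModel:
    """A trivial rule-based 'sentiment' model."""

    def input_spec(self):
        return {'text': 'TextSegment'}

    def output_spec(self):
        return {'label': 'CategoryLabel'}

    def predict(self, inputs):
        # Real LIT models typically batch inference; here we just map
        # over examples one at a time.
        for ex in inputs:
            yield {'label': '1' if 'good' in ex['text'] else '0'}


data = ToyDataset([{'text': 'good movie', 'label': '1'},
                   {'text': 'bad movie', 'label': '0'}])
preds = list(ToyModel().predict(data.examples))
print(preds)  # [{'label': '1'}, {'label': '0'}]
```

The key idea is that a dataset declares what fields its examples carry, a model declares what it consumes and produces, and the LIT server matches them up via these specs.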

## Using LIT in notebooks

LIT can also be used directly from Colab and Jupyter notebooks, with the LIT UI rendered in an output cell. See LIT_sentiment_classifier.ipynb for an example.

Note: if you see a 403 error in the output cell where LIT should render, you may need to enable cookies on the Colab site, or pass a custom port= to the LitWidget constructor.

## Stand-alone components

Many LIT components - such as models, datasets, metrics, and salience methods - are stand-alone Python classes and can be easily used outside of the LIT UI. For additional details, see the API documentation and an example Colab at LIT_components_example.ipynb.
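As a sketch of what "using a component outside the UI" looks like, here is a toy metric used directly from Python. `ToyAccuracy` is a hypothetical stand-in written for this example; the real metric classes ship with `lit_nlp` and are documented in the API docs.

```python
class ToyAccuracy:
    """Hypothetical stand-in for a LIT metric component."""

    def compute(self, labels, preds):
        # Fraction of predictions matching the gold labels.
        if not labels:
            return 0.0
        correct = sum(1 for y, p in zip(labels, preds) if y == p)
        return correct / len(labels)


labels = ['1', '0', '1', '1']
preds = ['1', '0', '0', '1']
print(ToyAccuracy().compute(labels, preds))  # 0.75
```

Because components are plain Python classes, you can drive them from a script, a notebook, or a test harness with no server or frontend involved.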

## Run an existing example

The demos page lists some of the pre-built demos available for a variety of model types. The code for these is under examples; each is a small script that loads one or more models and starts a LIT server.

Most demos can be run with a single command. To run the default one, you can do:

```sh
python -m lit_nlp.examples.glue_demo \
  --quickstart --port=4321 --alsologtostderr
```

Then navigate to http://localhost:4321 to access the UI.

For most models we recommend using a GPU, though the --quickstart flag above loads a set of smaller models that run well on CPU. You can also pass --warm_start=1.0 to have LIT run inference and cache the results before the server starts.
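Conceptually, warm start amounts to running the model over the dataset once up front and memoizing the results, so that UI requests become cache hits. The sketch below illustrates that idea with hypothetical helper functions; it is not LIT's actual caching code, which is internal to the server.

```python
import json

def warm_start(model_fn, examples):
    """Precompute predictions for every example, keyed by its contents."""
    cache = {}
    for ex in examples:
        key = json.dumps(ex, sort_keys=True)  # stable key per example
        cache[key] = model_fn(ex)
    return cache

def predict_cached(cache, model_fn, ex):
    """Serve from the cache, falling back to live inference on a miss."""
    key = json.dumps(ex, sort_keys=True)
    if key not in cache:
        cache[key] = model_fn(ex)
    return cache[key]

# Toy "model" that records each inference call it makes.
calls = []
model_fn = lambda ex: (calls.append(ex), len(ex['text']))[1]

cache = warm_start(model_fn, [{'text': 'abc'}, {'text': 'de'}])
print(predict_cached(cache, model_fn, {'text': 'abc'}))  # 3, served from cache
print(len(calls))  # 2 -> no extra inference after warm start
```

With --warm_start=1.0 (the full dataset), the first user to open the UI pays no inference latency, at the cost of a slower server start.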

For an overview of supported model types and frameworks, see the components guide.