Getting Started with Forge Demos

This document walks you through setting up your environment and running demo models with TT-Forge. The following topics are covered:

  • Setting up a Front End to Run a Demo
  • Running a Demo
  • Running Performance Benchmark Tests

NOTE: If you encounter issues, please request assistance on the TT-Forge Issues page.

NOTE: If you plan to do development work, please see the build instructions for the repo you want to work with.

Setting up a Front End to Run a Demo

This section provides instructions for setting up your frontend so you can run models from the TT-Forge repo.

Before running one of the demos in TT-Forge, you must:

  1. Determine which frontend you want to use (for example, TT-Forge-FE or TT-Torch).

  2. Decide what setup you want to use for the frontend:

    • Wheel
    • Docker
  3. Follow the installation instructions from your chosen frontend's repo for your selected setup method; a rough sketch of both paths follows this list.

  4. Return to this repo and follow the instructions in the Running a Demo section.
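
As a rough orientation only, the two setup paths usually look like the sketch below. The package name, image name, and device flag are placeholders, not the real values; the exact commands come from the installation instructions of the frontend you chose.

    # Wheel path: install the frontend's Python wheel (placeholder package name)
    pip install <frontend-wheel>

    # Docker path: run the frontend's container with access to the Tenstorrent device
    # (placeholder image name; the device flag is an assumption -- follow your frontend's docs)
    docker run -it --rm --device /dev/tenstorrent <frontend-image>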

Running a Demo

To run a demo, do the following:

  1. Clone the TT-Forge repo (alternatively, you can download the script for the model you want to try):

    git clone https://github.com/tenstorrent/tt-forge.git

  2. Choose the demo you want from the folder for your frontend. In this walkthrough, resnet_50_demo.py from the TT-Forge-FE folder is used.

  3. From the main folder of the TT-Forge repository, run the following commands:

    export PYTHONPATH=.
    python3 demos/tt-forge-fe/cnn/resnet_50_demo.py

If all goes well, you should see an image of a cat, and terminal output where the model predicts what the image is and presents a score indicating how confident it is in its prediction.
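
If you want to try a different model, the other demo scripts live under the demos folder, grouped by frontend. A quick way to list them, assuming you are still in the main TT-Forge folder (the layout is inferred from the path used above):

    # List the available demo scripts for every frontend
    find demos -name "*.py" | sort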

Running Performance Benchmark Tests

To run performance benchmarks for all models, you need to install additional libraries that are not included in the Docker container or the wheel package.

Prerequisites

  1. Install Python Requirements

    Install the required Python packages from the requirements.txt file of the project you wish to run:

    pip install -r benchmark/[project]/requirements.txt
    

    Example:

    If you want to test a model from the TT-Torch project, you would run:

    pip install -r benchmark/tt-torch/requirements.txt
    
  2. Install System Dependencies

    Install the required system libraries for OpenGL rendering and core application support:

    sudo apt update
    sudo apt install libgl1-mesa-glx libgl1-mesa-dev mesa-utils
    
  3. Set up Hugging Face Authentication

    To run models on real datasets, you need to register and authenticate with Hugging Face:

    a. Log in or register at Hugging Face

    b. Set up an access token following the User Access Tokens guide

    c. Configure your environment with the token:

    export HUGGINGFACE_TOKEN=[YOUR_TOKEN]
    huggingface-cli login --token $HUGGINGFACE_TOKEN
    

    d. Access the ImageNet dataset on Hugging Face. A quick check of these prerequisites is sketched after this list.
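
Before moving on, you can sanity-check the prerequisites. This is a minimal sketch, assuming mesa-utils from step 2 is installed, a display is available for glxinfo, and the huggingface-cli tool from step 3 is on your PATH:

    # Confirm the Mesa OpenGL stack is usable (glxinfo ships with mesa-utils)
    glxinfo | grep "OpenGL version"

    # Confirm the Hugging Face CLI is authenticated with your token
    huggingface-cli whoami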

Running Benchmarks

Once you have completed the prerequisites, you can run the performance benchmarks:

  1. Navigate to the benchmark directory:

    cd benchmark
    
  2. Run the benchmark script with your desired options:

    python benchmark.py [options]
    

    Available Options:

    | Option        | Short | Type    | Default   | Description |
    |---------------|-------|---------|-----------|-------------|
    | --project     | -p    | string  | required  | The project directory containing the model file |
    | --model       | -m    | string  | required  | Model to benchmark (e.g. bert, mnist_linear); the test file name without the .py extension |
    | --config      | -c    | string  | None      | Model configuration to benchmark (e.g. tiny, base, large) |
    | --training    | -t    | flag    | False     | Benchmark training mode |
    | --batch_size  | -bs   | integer | 1         | Batch size, the number of samples to process at once |
    | --loop_count  | -lp   | integer | 1         | Number of times to run the benchmark |
    | --input_size  | -isz  | integer | None      | Input size of the input sample (if the model supports variable input size) |
    | --hidden_size | -hs   | integer | None      | Hidden layer size (if the model supports variable hidden size) |
    | --output      | -o    | string  | None      | Output JSON file to write results to; results are appended if the file exists |
    | --task        | -ts   | string  | "na"      | Task to benchmark (e.g. classification, segmentation) |
    | --data_format | -df   | string  | "float32" | Data format (e.g. float32, bfloat16) |

    Example (run from the repository root):

    python benchmark/benchmark.py -p tt-forge-fe -m mobilenetv2_basic -ts classification -bs 8 -df bfloat16 -lp 32 -o forge-benchmark-e2e-tt-forge-fe-mobilenetv2_basic.json
    
  3. Alternatively, you can run specific model tests using pytest:

    python -m pytest [project]/[model_name].py
    

    Example:

    python -m pytest -svv tt-forge-fe/yolo_v8.py
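
Because the --output option appends to an existing JSON file, you can collect several benchmark runs into one results file. The loop below is a minimal sketch, assuming you run it from the repository root with the tt-forge-fe benchmark requirements installed; the model, task, and other option values are only examples reusing the ones shown above.

    # Sweep a few batch sizes and append every run to the same results file
    for bs in 1 8 32; do
      python benchmark/benchmark.py \
        -p tt-forge-fe -m mobilenetv2_basic -ts classification \
        -bs "$bs" -lp 32 -df bfloat16 \
        -o forge-benchmark-sweep-mobilenetv2_basic.json
    done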