Getting Started

This document walks you through setting up tt-torch and running models with it. The following topics are covered: configuring hardware, installing dependencies, building tt-torch, and compiling and running a model.

Configuring Hardware

This walkthrough assumes you are using Ubuntu 22.04.

Configure your hardware with tt-installer:

  1. Make sure your system is up-to-date:
sudo apt-get update
sudo apt-get upgrade -y
  2. Set up your hardware and dependencies using tt-installer:
/bin/bash -c "$(curl -fsSL https://github.com/tenstorrent/tt-installer/releases/latest/download/install.sh)"

Installing Dependencies

Install additional dependencies that were not installed by the tt-installer script:

sudo apt-get install -y \
    libhwloc-dev \
    libtbb-dev \
    libcapstone-dev \
    pkg-config \
    linux-tools-generic \
    ninja-build \
    libgtest-dev \
    ccache \
    doxygen \
    graphviz \
    patchelf \
    libyaml-cpp-dev \
    libboost-all-dev \
    lcov

Installing CMake 4.0.2

Install CMake 4.0.2 with pip:

pip install cmake==4.0.2

Installing Clang 17

This section walks you through installing Clang 17.

  1. Install Clang 17:
wget https://apt.llvm.org/llvm.sh
chmod u+x llvm.sh
sudo ./llvm.sh 17
sudo apt install -y libc++-17-dev libc++abi-17-dev
sudo ln -s /usr/bin/clang-17 /usr/bin/clang
sudo ln -s /usr/bin/clang++-17 /usr/bin/clang++
  2. Check that Clang 17 selected GCC 11 as its GCC installation:
clang -v

Look for the line that starts with Selected GCC installation:. If it points to something other than GCC 11, uninstall that version and install GCC 11 using:

sudo apt-get install gcc-11 lib32stdc++-11-dev lib32gcc-11-dev
  3. Delete any non-11 GCC paths, for example:
sudo rm -rf /usr/bin/../lib/gcc/x86_64-linux-gnu/12

Building tt-torch

This section describes how to build tt-torch. You need to build tt-torch whether you plan to do development work or run models.

  1. Clone the tt-torch repo:
git clone https://github.com/tenstorrent/tt-torch.git
cd tt-torch
  2. Create a toolchain directory and make the account you are using its owner:
sudo mkdir -p /opt/ttmlir-toolchain
sudo chown -R $USER /opt/ttmlir-toolchain
  3. Build the toolchain for tt-torch (this build step only needs to be done once):
cd third_party
cmake -B toolchain -DBUILD_TOOLCHAIN=ON

NOTE: This step takes a long time to complete.

  4. Navigate back to the tt-torch home directory:
cd ..

  5. Build tt-torch:

source env/activate
cmake -G Ninja -B build
cmake --build build
cmake --install build

NOTE: It takes a while for everything to build.

Test the tt-torch Build

You can check that everything is working with a basic unit test:

pytest -svv tests/torch/test_basic.py

NOTE: Any time you use tt-torch, you need to be in the activated virtual environment you created. Otherwise, you will get an error when trying to run a test.
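If you are unsure whether the environment is active, this small stdlib check (generic Python, not tt-torch specific) tells you whether the interpreter you are running belongs to a virtual environment:

```python
import sys

def in_virtualenv() -> bool:
    """True when the running interpreter belongs to a virtual environment."""
    # Inside a venv, sys.prefix points at the environment, while
    # sys.base_prefix still points at the base Python installation.
    return sys.prefix != sys.base_prefix

print("virtualenv active:", in_virtualenv())
```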

Running the ResNet Demo

You can also try a demo:

python demos/resnet/resnet50_demo.py

Compiling and Running a Model

Once you have your torch.nn.Module, compile the model:

from tt_torch.dynamo.backend import backend
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        ...

    def forward(self, ...):
        ...

model = MyModel()

model = torch.compile(model, backend=backend)

inputs = ...

outputs = model(inputs)

Example - Add Two Tensors

Here is an example of a small model that adds its inputs, running through tt-torch. Try it out!

from tt_torch.dynamo.backend import backend
import torch

class AddTensors(torch.nn.Module):
    def forward(self, x, y):
        return x + y


model = AddTensors()
tt_model = torch.compile(model, backend=backend)

x = torch.ones(5, 5)
y = torch.ones(5, 5)
print(tt_model(x, y))