Introduction
TT-Forge FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their performance and efficiency.
Built on top of the TT-MLIR backend, TT-Forge FE is an integral component of the TT-Forge project, which provides a comprehensive suite of tools for optimizing and deploying deep learning models on Tenstorrent hardware.
Main project goals are:
- Provide abstraction of many different frontend frameworks (PyTorch, TensorFlow, ONNX, etc.)
- Compile many kinds of model architectures without custom modification and with great performance (e.g. Transformers, CNNs, etc.)
- Abstract all Tenstorrent device architectures (e.g. Wormhole, Blackhole, etc.)
Architecture Overview
TT-Forge is a comprehensive compiler designed to facilitate the development and optimization of machine learning models. It encompasses various components, each serving a specific purpose in compiling and running machine learning pipelines. This document provides an overview of the key components, with a focus on TT-Forge-FE.
Table of contents
TT-Forge Overview
TT-TVM Overview
TVM IR
Coming soon!
TVM Compile
Coming soon!
Relay Compile Passes
Coming soon!
Forge Compile Passes
Coming soon!
Partition Graph
Coming soon!
Construct Inputs, Constants and Ops
Coming soon!
Generate Forge-FE Module
Coming soon!
Standalone Forge-FE Module
Coming soon!
TT-Forge-FE Overview
Initialize Compile
Coming soon!
Generate Initial Graph (TT-TVM)
Coming soon!
Post Initial Graph passes
Coming soon!
Consteval
Coming soon!
Autograd
Coming soon!
Post Autograd
Coming soon!
Pre Lowering
Coming soon!
Graph Split
Coming soon!
Compiler TTIR
Coming soon!
Output Binary
Coming soon!
Building
The following page describes how to build the project on your local machine.
Prerequisites
Main project dependencies are:
- Clang 17
- Ninja
- CMake 3.20 or higher
- Git LFS
- Python 3.10 or higher
On Ubuntu 22.04 systems, you can install these dependencies using the following commands:
# Update package list
sudo apt update -y
sudo apt upgrade -y
# Install Clang
sudo apt install clang-17
# Install Ninja
sudo apt install ninja-build
# Install CMake
sudo apt remove cmake -y
pip3 install cmake --upgrade
cmake --version
# Install Git LFS
sudo apt install git-lfs
# Check Python version
python3 --version
Build environment
This is a one-off step to build the toolchain and create the virtual environment for tt-forge. Generally you need to run this step only once, unless you want to update the toolchain (LLVM).
First, create the toolchain directories. The example below creates them at the default paths; you can change the paths if you want to use different locations (see the build environment variables section below).
# FFE related toolchain (default path)
sudo mkdir -p /opt/ttforge-toolchain
sudo chown -R $USER /opt/ttforge-toolchain
# MLIR related toolchain (default path)
sudo mkdir -p /opt/ttmlir-toolchain
sudo chown -R $USER /opt/ttmlir-toolchain
Build FFE environment:
# Initialize required env vars
source env/activate
# Initialize and update submodules
git submodule update --init --recursive -f
# Build environment
cmake -B env/build env
cmake --build env/build
Build Forge
# Activate virtual environment
source env/activate
# Build Forge
cmake -G Ninja -B build
cmake --build build
You can pass additional options to the cmake command to customize the build. For example, to build everything in debug mode, you can run:
cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build
List of commonly used options:
- -DCMAKE_BUILD_TYPE=Debug|Release - Build type (Debug or Release)
- -DTTMLIR_RUNTIME_DEBUG=ON|OFF - Build runtime debug tools (more logging, debug environment flags)
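For example, a debug build with runtime debug tools enabled (simply combining the two options listed above) could look like this:

```bash
cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Debug -DTTMLIR_RUNTIME_DEBUG=ON
cmake --build build
```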
Incremental build
If you have made changes to the C++ sources (of the tt-forge-fe compiler, tt-mlir, or tt-metal), you might want to do an incremental build to save time. This can be done by running the following command:
# If you are not already inside the virtual environment, activate it
source env/activate
cmake --build build -- install_ttforge
This will build the tt-forge-fe C++ sources and the dependencies (tt-mlir, tt-metal) and install them in the virtual environment.
Build docs
To build the documentation, mdbook is required; see the installation guide here. After installing mdbook, run the following commands to build and serve the documentation:
source env/activate
cmake --build build -- docs
# Serve the documentation
mdbook serve build/docs
Note: mdbook serve will by default start a local server at http://localhost:3000.
Note: For a custom port, specify the -p option, e.g. mdbook serve build/docs -p 5005, and visit http://localhost:5005.
Build Cleanup
To ensure a clean build environment, follow these steps to remove existing build artifacts:
- Clean only Forge FE build artifacts:
  rm -rf build
  Note: This command removes the build directory and all its contents, effectively cleaning up the build artifacts specific to Forge FE.
- Clean all Forge build artifacts:
  ./clean_build.sh
  Note: This script executes a comprehensive cleanup, removing all build artifacts across the entire Forge project, ensuring a clean slate for subsequent builds.
  Note: The clean_build.sh script will not clean toolchain (LLVM) build artifacts and dependencies.
- Clean everything (including the environment):
  ./clean_build.sh
  rm -rf env/build third_party/tt-mlir/env/build
  Note: This should rarely be needed, as it removes the entire build and environment (consequently, the entire toolchain will need to be rebuilt).
Useful build environment variables
- TTMLIR_TOOLCHAIN_DIR - Specifies the directory where TTMLIR dependencies will be installed. Defaults to /opt/ttmlir-toolchain if not defined.
- TTMLIR_VENV_DIR - Specifies the virtual environment directory for TTMLIR. Defaults to /opt/ttmlir-toolchain/venv if not defined.
- TTFORGE_TOOLCHAIN_DIR - Specifies the directory where tt-forge dependencies will be installed. Defaults to /opt/ttforge-toolchain if not defined.
- TTFORGE_VENV_DIR - Specifies the virtual environment directory for tt-forge. Defaults to /opt/ttforge-toolchain/venv if not defined.
- TTFORGE_PYTHON_VERSION - Specifies the Python version to use. Defaults to python3.10 if not defined.
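As a sketch of how these variables can be used (the paths below are placeholders, adjust them to your setup), you could point both toolchains at custom locations before running the environment build described above:

```bash
# Hypothetical custom toolchain locations; the directories must exist and be owned by you
export TTFORGE_TOOLCHAIN_DIR=$HOME/ttforge-toolchain
export TTFORGE_VENV_DIR=$TTFORGE_TOOLCHAIN_DIR/venv
export TTMLIR_TOOLCHAIN_DIR=$HOME/ttmlir-toolchain
export TTMLIR_VENV_DIR=$TTMLIR_TOOLCHAIN_DIR/venv

# Then build the environment as described in the build environment section
source env/activate
cmake -B env/build env
cmake --build env/build
```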
Run tt-forge-fe using Docker image
We provide two Docker images for tt-forge-fe:
- Base image (ghcr.io/tenstorrent/tt-forge-fe/tt-forge-fe-base-ird-ubuntu-22-04): includes all the necessary preinstalled dependencies.
- Prebuilt environment image (ghcr.io/tenstorrent/tt-forge-fe/tt-forge-fe-ird-ubuntu-22-04): also comes with a prebuilt environment, allowing you to skip the environment build step.
Note: To be able to build tt-forge-fe inside the docker containers, make sure to set yourself as the owner of tt-forge-fe and tt-mlir toolchain directories:
sudo chown -R $USER /opt/ttforge-toolchain
sudo chown -R $USER /opt/ttmlir-toolchain
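As a rough sketch (the exact flags for exposing the Tenstorrent device and mounting your workspace depend on your setup and are omitted here), pulling and starting the prebuilt environment image interactively could look like this:

```bash
# Pull the prebuilt environment image
docker pull ghcr.io/tenstorrent/tt-forge-fe/tt-forge-fe-ird-ubuntu-22-04

# Start an interactive shell in the container (add device/volume flags as your setup requires)
docker run --rm -it ghcr.io/tenstorrent/tt-forge-fe/tt-forge-fe-ird-ubuntu-22-04 bash
```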
Testing
This page describes how to run different kinds of tests in the tt-forge-fe project. If you haven't built the project yet, please refer to the Build page.
Unit tests
To build the unit tests, run the following command:
cmake --build build -- build_unit_tests
To run the unit tests (this will also build the tests if they are not built):
cmake --build build -- run_unit_tests
Note: The unit tests are built in the build/forge/csrc/test directory. From there, you can run targeted tests directly.
- For example, to run all the tests defined in forge/csrc/test/passes/, use: ./build/forge/csrc/test/test_passes
- You can further filter the tests by using the --gtest_filter flag: ./build/forge/csrc/test/test_passes --gtest_filter=MMFuseBias/MMFuseBias.mm_fuse_bias/3
End to end tests
For running the end-to-end tests we use the pytest framework. To run these tests, you need to be on a machine with a Tenstorrent Wormhole device. Also, we are still in the process of cleaning up the old tests, so not all tests are working. For a list of green tests, consult pytest.ini.
Note: Make sure that you have activated the python environment before running the tests.
To run all tests defined in forge/test/mlir/test_ops.py, use:
pytest -svv forge/test/mlir/test_ops.py
To run a specific test, use the following:
pytest -svv forge/test/mlir/test_ops.py::test_add
- The -svv flag is optional and used to display more information about the test run.
Single operator E2E tests
Single operator E2E tests consist of preconfigured collections of in-depth tests for each operator, according to the test plan. Tests include small models consisting of a single operator, alone or in combination with a few other operators. More details about the test plan are available on the Test template page.
To start interacting with the test sweeps framework, load the helper commands via:
source forge/test/operators/pytorch/test_commands.sh
Available commands
Command | Description |
---|---|
print_help | Print commands and current query parameters. |
print_query_docs | Print docs for all available query parameters. |
print_params | Print current query parameters values. |
collect_only_on | Enable only collecting tests by including --collect-only. |
collect_only_off | Remove collect only setup. |
test_plan | Run all tests from test plan. |
test_query | Run a subset of the test plan based on query parameters. |
test_unique | Run representative examples of all available tests. |
test_single | Run single test based on TEST_ID parameter. |
Full list of supported query parameters
Parameter | Description | Supported by commands |
---|---|---|
OPERATORS | List of operators | test_plan, test_query, test_unique |
FILTERS | List of lambda filters | test_query |
INPUT_SOURCES | List of input sources | test_query |
INPUT_SHAPES | List of input shapes | test_query |
DEV_DATA_FORMATS | List of dev data formats | test_query |
MATH_FIDELITIES | List of math fidelities | test_query |
KWARGS | List of kwargs dictionaries. | test_query |
FAILING_REASONS | List of failing reasons | test_query |
SKIP_REASONS | List of skip reasons | test_query |
RANGE | Limit number of results | test_query |
TEST_ID | Id of a test containing test parameters | test_single |
To check the supported values and options for each query parameter, run the print_query_docs command.
Usage examples
Run all tests
test_plan
Run all tests for a few operators
export OPERATORS=add,div
test_plan
Run subset of tests based on query criteria
export OPERATORS=div
export FILTERS=HAS_DATA_FORMAT,QUICK
export INPUT_SOURCES=FROM_HOST,FROM_DRAM_QUEUE
export DEV_DATA_FORMATS=Float16_b,Int8
export MATH_FIDELITIES=HiFi4,HiFi3
export KWARGS="[{'rounding_mode': 'trunc'},{'rounding_mode': 'floor'}]"
print_params
test_query
Print representative test ids of all operators with examples for kwargs values
collect_only_on
test_unique
collect_only_off
Print representative test ids of a few operators
export OPERATORS=add,div
collect_only_on
test_unique
collect_only_off
Each test can be uniquely identified via a test id. The format of a test id is {operator}-{input_source}-{kwargs}-{input_shape}[-{number_of_operands}-]{dev_data_format}-{math_fidelity}.
A kwarg is a mandatory or optional attribute of an operator. See the framework (PyTorch, Forge, ...) operator documentation for each operator, or use test_unique to find examples.
Run a single test based on a test id. The test id may come from the test plan, or it may be constructed by specifying custom values for kwargs and input shapes.
export TEST_ID='ge-FROM_HOST-None-(1, 2, 3, 4)-Float16_b-HiFi4'
test_single
Pytest
Pytest is a powerful testing framework for Python that simplifies writing and executing test cases. It supports features like test discovery, fixtures, parameterized testing, and detailed assertions. For more details, visit the official Pytest Documentation.
Testing with multiple input sets
The @pytest.mark.parametrize decorator allows you to run a single test function with multiple sets of inputs.
Example
@pytest.mark.parametrize("arg1, arg2, expected", [
(1, 2, 3),
(2, 3, 5),
(3, 5, 8),
])
def test_addition(arg1, arg2, expected):
assert arg1 + arg2 == expected
Explanation
- This is particularly useful for testing a function with various combinations of arguments.
Marking specific parameters
You can use pytest.param to mark specific parameter combinations with additional metadata, such as expected failures (xfail).
Example
@pytest.mark.parametrize("inputs", [
pytest.param(
((1, 2, 3), (4, 5, 6)), marks=pytest.mark.xfail(reason="reason"))
])
Explanation
- In this example, the first parameter combination is marked as xfail with a reason provided, indicating it is expected to fail.
- This is useful when only some parameter sets are failing or not working correctly.
Skipping tests
Use the @pytest.mark.skip decorator to skip a test.
Example
@pytest.mark.skip(reason="Causes segmentation fault")
def test_future_feature():
    assert some_function() == "expected result"
Explanation
- Skipping tests is particularly useful when a test is causing crashes (e.g., segmentation faults) or breaking the CI pipeline.
Marking tests as expected to fail
The @pytest.mark.xfail decorator marks a test that is expected to fail.
Example
@pytest.mark.xfail(reason="Known bug in version 1.2.3")
def test_known_bug():
    assert buggy_function() == "expected"
Explanation
- If the test passes unexpectedly, pytest will flag it as XPASS, indicating an unexpected pass, and it will be reported as an error.
- This is helpful when we need a reminder that a particular test is now passing, especially in cases where it previously failed and we want to review all related instances or areas that experienced issues.
Avoid adding decorators inside tests
Example
@pytest.mark.parametrize("model_path", ["<path>/model_path1", "<path>/model_path2"])
def test_model(model_path):
if model_path == "<path>/model_path1":
pytest.xfail("reason")
Explanation
- In this example, one of the models fails a test. Using an if statement to apply xfail is problematic because it will always mark the test as failing at that point, even if it would have passed.
- Instead, use pytest.param to explicitly define expected outcomes, as shown in the recommended approach above. This ensures more accurate and reliable test behavior.
Tools
This section covers the setup of various tools that can help you with the development of tt-forge-fe.
Pre-commit
We have defined various pre-commit hooks that check the code for formatting, licensing issues, etc.
To install pre-commit, run the following command:
source env/activate
pip install pre-commit
After installing pre-commit, you can install the hooks by running:
pre-commit install
Now, each time you run git commit, the pre-commit hooks (checks) will be executed.
If you have already committed before installing the pre-commit hooks, you can run them on all files to "catch up":
pre-commit run --all-files
For more information, visit pre-commit.
mdbook
We use mdbook to generate the documentation. To install mdbook on Ubuntu, run the following commands:
sudo apt install cargo
cargo install mdbook
NOTE: If you don't want to install mdbook via cargo (the Rust package manager), or this doesn't work for you, consult the official mdbook installation guide.
Gather Unique Ops Configuration
The model's unique ops configuration can be gathered, and the results can be printed to the console and saved as a CSV/XLSX file.
- FORGE_EXTRACT_UNIQUE_OP_CONFIG_AT - By setting this flag to one of the following options, the model's unique ops configuration can be extracted at a specific compilation stage or across all stages:
  - FORGE_EXTRACT_UNIQUE_OP_CONFIG_AT = ALL - Extracts all the unique ops configurations present in the graph at every compilation stage.
  - FORGE_EXTRACT_UNIQUE_OP_CONFIG_AT = {GENERATE_INITIAL_GRAPH / POST_INITIAL_GRAPH_PASS / OPTIMIZED_GRAPH / AUTOGRAD / POST_AUTOGRAD_PASS / PRE_LOWERING_GRAPH} - Extracts the unique ops configuration only at the specified compilation stage.
- FORGE_PRINT_UNIQUE_OP_CONFIG - By setting this flag to 1, all unique configurations will be printed to the console.
- FORGE_EXPORT_UNIQUE_OP_CONFIG_FILE_TYPE - By setting this flag to csv or xlsx, all unique configurations will be exported as a CSV or XLSX file. The file is saved to the default path (i.e., the current directory), or to a specific path by setting the FORGE_EXPORT_UNIQUE_OP_CONFIG_DIR_PATH environment variable.
- FORGE_EXPORT_UNIQUE_OP_CONFIG_CSV_DELIMITER - Sets the delimiter for the CSV file. Default delimiter: slash (i.e. /).
Note: The delimiter used in the CSV file will be a slash (/) to avoid potential parsing issues. Commas (,) and hyphens (-) may appear in the op shapes and attributes, which could lead to misinterpretation of the data.
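As a sketch of how these flags fit together, the following extracts unique op configurations at every compilation stage and exports them as a CSV while running one of the end-to-end tests shown earlier (the test path is just an example):

```bash
source env/activate

# Extract unique op configs at every compilation stage and export them as CSV
export FORGE_EXTRACT_UNIQUE_OP_CONFIG_AT=ALL
export FORGE_EXPORT_UNIQUE_OP_CONFIG_FILE_TYPE=csv
# Optional: choose the directory where the exported file is written
export FORGE_EXPORT_UNIQUE_OP_CONFIG_DIR_PATH=./unique_op_configs

pytest -svv forge/test/mlir/test_ops.py::test_add
```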
Cross Correlate Models and Ops and Export Model Variants Unique Op Configuration
The models and ops can be cross-correlated, and the model variants' unique op configurations exported as an XLSX file, by running the scripts/export_models_ops_correlation.py Python script.
The script will perform the following tasks:
- Run all models up to the compile depth specified by the user.
- Export unique op requirements to a file (each model variant has its own directory; within it, each compile depth has its own file).
- Parse those unique op requirements and create an xlsx file that can be loaded into a Google sheet.
- The xlsx file will contain a list of models on the X axis (i.e., columns) and a list of ops on the Y axis (i.e., rows/indices).
- Elements in between will contain a checkmark if the op from the Y axis (i.e., rows/indices) exists in the model on the X axis (i.e., columns).
- Models will be sorted alphabetically.
- Ops will be sorted by the number of occurrences in the models.
Usage
To run the script, use the following command:
python scripts/export_models_ops_correlation.py
Required Options:
Option | Description |
---|---|
-c , --compile_depth (GENERATE_INITIAL_GRAPH, PRE_LOWERING_PASS, etc.) | Choose the compilation depth for extracting ops configuration for the models present in pytest_directory_path . |
-i , --pytest_directory_path | Specify the directory path containing models to test. |
Optional Options:
Option | Description |
---|---|
--cross_correlation_output_file_name | Specify the output xlsx file name for saving the cross correlation data between model variants and unique ops. |
--models_unique_op_configs_output_file_name | Specify the output xlsx file name for saving the Models unique op configurations. |
-o , --output_directory_path | Specify the output directory path for saving the xlsx/csv file. |
--export_unique_op_config_file_type (csv, xlsx) | Specify the export unique op configuration file type |
Example:
python scripts/export_models_ops_correlation.py --compile_depth GENERATE_INITIAL_GRAPH --pytest_directory_path forge/test/model_demos/high_prio/nlp/pytorch
How to run standalone MLIR, based on generated Forge-FE MLIR graphs
- Change directory to the tt-mlir repo in the tt-forge-fe third parties:
  $ cd tt-forge-fe/third_party/tt-mlir
- Build TTRT (once, inside the tt-mlir repo):
  $ pip install patchelf
  $ cmake --build build -- ttrt
- Save the system descriptor artifacts file (for more info, refer to the ttrt docs):
  $ ttrt query --save-artifacts
- Convert TTIR MLIR to TTNN MLIR:
  - Save the TTIR MLIR from the logs in <some_name>_ttir.mlir, e.g. softmax_check_ttir.mlir.
  - The first line of the TTIR MLIR should look like this:
    module attributes {} {
    Ex. softmax_check_ttir.mlir:
    module attributes {} {
      func.func @forward(%arg0: tensor<13x89x3xf32> {ttir.name = "x"}, %arg1: tensor<13x89x3xf32> {ttir.name = "y"}, %arg2: tensor<1x89x3xf32> {ttir.name = "input_0_multiply_1"}, %arg3: tensor<1x89x3xf32> {ttir.name = "input_0_reciprocal_0"}) -> (tensor<13x89x3xf32> {ttir.name = "ModelConstEvalPass.output_add_3"}) {
        %0 = tensor.empty() : tensor<1x89x3xf32>
        %1 = "ttir.reciprocal"(%arg3, %0) <{operandSegmentSizes = array<i32: 1, 1>}> : (tensor<1x89x3xf32>, tensor<1x89x3xf32>) -> tensor<1x89x3xf32>
        %2 = tensor.empty() : tensor<1x89x3xf32>
        %3 = "ttir.multiply"(%arg2, %1, %2) <{operandSegmentSizes = array<i32: 2, 1>}> : (tensor<1x89x3xf32>, tensor<1x89x3xf32>, tensor<1x89x3xf32>) -> tensor<1x89x3xf32>
        %4 = tensor.empty() : tensor<13x89x3xf32>
        %5 = "ttir.add"(%arg0, %arg1, %4) <{operandSegmentSizes = array<i32: 2, 1>}> : (tensor<13x89x3xf32>, tensor<13x89x3xf32>, tensor<13x89x3xf32>) -> tensor<13x89x3xf32>
        %6 = tensor.empty() : tensor<13x89x3xf32>
        %7 = "ttir.add"(%3, %5, %6) <{operandSegmentSizes = array<i32: 2, 1>}> : (tensor<1x89x3xf32>, tensor<13x89x3xf32>, tensor<13x89x3xf32>) -> tensor<13x89x3xf32>
        return %7 : tensor<13x89x3xf32>
      }
    }
  - Generate TTNN MLIR from TTIR MLIR (replace the path to system_desc.ttsys with your corresponding path):
    $ ./build/bin/ttmlir-opt --ttir-load-system-desc="path=/proj_sw/user_dev/akannan/forge/tt-forge-fe/third_party/tt-mlir/ttrt-artifacts/system_desc.ttsys" --ttir-to-ttnn-backend-pipeline softmax_check_ttir.mlir -o softmax_check_ttnn.mlir
- Create the flatbuffer serialized binary (generate a flatbuffer binary from the TTNN MLIR):
  $ ./build/bin/ttmlir-translate --ttnn-to-flatbuffer softmax_check_ttnn.mlir -o softmax_check.ttnn
- Run the TTNN binary:
  $ ttrt run softmax_check.ttnn
Verification
General Overview
When comparing our compiled model with the framework model (e.g., a PyTorch model running on host), we aim to verify whether the output from the compiled model is sufficiently similar to the output from the framework model (where the required degree of similarity is configurable).
So generally we want to perform the following steps:
- Create a framework model.
- Run a forward pass through the framework model.
- Compile the framework model using Forge.
- Run a forward pass through the compiled model.
- Compare the outputs.
Most of the above steps are handled for us by the verify() function, which:
- Handles forward passes for both framework and compiled models
- Compares results using a combination of comparison methods
- Supports customization through the VerifyConfig class.
Example of usage
import torch
from torch import nn

import forge
from forge.verify.verify import verify


def test_add():
    class Add(nn.Module):
        def __init__(self):
            super().__init__()

        def forward(self, a, b):
            return a + b

    inputs = [torch.rand(2, 32, 32), torch.rand(2, 32, 32)]

    framework_model = Add()
    compiled_model = forge.compile(framework_model, sample_inputs=inputs)

    verify(inputs, framework_model, compiled_model)
Notes:
- If you only want to compile the model and perform a forward pass without comparing outputs, you can just:
framework_model = Add()
compiled_model = forge.compile(framework_model, sample_inputs=inputs)
fw_out = framework_model(*inputs)
co_out = compiled_model(*inputs)
Verify Config Overview
If a VerifyConfig isn't passed as a parameter, the default one will be used. Currently, through VerifyConfig you can enable/disable:
Feature | Name | Enabled (default) |
---|---|---|
Verification as a method | enabled | True |
Number of output tensors check | verify_size | True |
Output tensor data type check | verify_dtype | True |
Output tensor shape check | verify_shape | True |
For more information about VerifyConfig, you can check forge/forge/verify/config.py.
Example of usage
framework_model = Add()
compiled_model = forge.compile(framework_model, sample_inputs=inputs)
verify(inputs, framework_model, compiled_model, VerifyConfig(verify_dtype=False))
Besides that, the config also includes a value checker. There are three types of checkers:
- AutomaticValueChecker (default)
- AllCloseValueChecker
- FullValueChecker
For more information about the checkers, you can look at forge/forge/verify/value_checkers.py.
AutomaticValueChecker
This checker performs tensor checks based on the shape and type of the tensor (e.g., for scalars it will perform torch.allclose, as pcc shouldn't be applied to scalars).
For this checker you can set:
- pcc
- rtol
- atol
- dissimilarity_threshold
Example of usage:
# default behavior
verify(inputs, framework_model, compiled_model)
# this is the same as the default behavior
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=AutomaticValueChecker()))
# setting pcc and rtol
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=AutomaticValueChecker(pcc=0.95, rtol=1e-03)))
AllCloseValueChecker
This checker checks tensors using the torch.allclose method.
For this checker you can set:
- rtol
- atol
Example of usage:
# setting allclose checker with default values
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=AllCloseValueChecker()))
# setting allclose checker with custom values
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=AllCloseValueChecker(rtol=1e-03)))
FullValueChecker
This checker is a combination of AutomaticValueChecker and AllCloseValueChecker.
For this checker you can set:
- pcc
- rtol
- atol
- dissimilarity_threshold
Examples of usage:
# setting full checker with default values
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=FullValueChecker()))
# setting full checker with custom values
verify(inputs, framework_model, compiled_model, VerifyConfig(value_checker=FullValueChecker(pcc=0.95, rtol=1e-03)))