Note
Certain TT-NN tutorials currently work on Grayskull only. Please check the individual tutorial pages below for more information.
Tutorials
This is a collection of tutorials, written as Jupyter Notebooks, to help you get up to speed with tt-metal. The notebooks can be found under https://github.com/tenstorrent/tt-metal/tree/main/ttnn/tutorials.
These tutorials assume you already have a machine set up with either a Grayskull or Wormhole device available, and that you have successfully followed the instructions for installing and building the software from source.
From within the ttnn/tutorials directory, launch the notebooks with:

jupyter lab --no-browser --port=8888
Hint: Always run the cells from top to bottom, as later cells depend on earlier ones.
- Tensor and Add Operation
  - Creating a tensor
  - Host Storage: Borrowed vs Owned
  - Data Type
  - Layout
  - Device storage
  - Open the device
  - Initialize tensors a and b with random values using torch
  - Add tensor a and b
  - Inspect the output tensor of the add in ttnn
  - Convert to torch and inspect the attributes of the torch tensor
  - Close the device
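The steps in the Tensor and Add Operation tutorial can be sketched as below. This is a minimal outline, assuming a Tenstorrent device is available and ttnn is installed; see the notebook itself for the authoritative version.

```python
import torch
import ttnn

# Open the device
device = ttnn.open_device(device_id=0)

# Initialize tensors a and b with random values using torch
torch_a = torch.rand(32, 32, dtype=torch.bfloat16)
torch_b = torch.rand(32, 32, dtype=torch.bfloat16)

# Move them onto the device in tile layout (required by most ttnn ops)
a = ttnn.from_torch(torch_a, layout=ttnn.TILE_LAYOUT, device=device)
b = ttnn.from_torch(torch_b, layout=ttnn.TILE_LAYOUT, device=device)

# Add tensor a and b, then inspect the output tensor in ttnn
output = ttnn.add(a, b)
print(output.shape, output.dtype, output.layout)

# Convert to torch and inspect the attributes of the torch tensor
torch_output = ttnn.to_torch(output)
print(torch_output.shape, torch_output.dtype)

# Close the device
ttnn.close_device(device)
```

Note that this sketch only runs on a machine with a Grayskull or Wormhole device attached.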
- Matmul Operation
- Multi-Head Attention
  - Enable program cache
  - Write Multi-Head Attention using ttnn
  - Configuration
  - Initialize activations and weights using torch
  - Convert activations and weights to ttnn
  - Run the first iteration of Multi-Head Attention
  - Run a subsequent iteration of Multi-Head Attention
  - Write an optimized version of Multi-Head Attention
  - Pre-process the parameters of the optimized model
  - Run the first iteration of the optimized Multi-Head Attention
  - Run a subsequent iteration of the optimized Multi-Head Attention
  - Check that the output of the optimized version matches the output of the original implementation
  - Close the device
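The Multi-Head Attention tutorial first builds a torch reference and then re-expresses it with ttnn ops. The reference computation can be sketched roughly as follows; the function name, fused-QKV weight layout, and dimensions here are illustrative, not the tutorial's exact code.

```python
import torch

def multi_head_attention(hidden_states, qkv_weight, qkv_bias,
                         out_weight, out_bias, num_heads):
    # hidden_states: (batch, seq_len, hidden_size)
    batch, seq_len, hidden_size = hidden_states.shape
    head_size = hidden_size // num_heads

    # Fused QKV projection, then split into query, key, value
    qkv = hidden_states @ qkv_weight + qkv_bias
    q, k, v = qkv.chunk(3, dim=-1)

    # Reshape to (batch, num_heads, seq_len, head_size)
    def split_heads(t):
        return t.view(batch, seq_len, num_heads, head_size).transpose(1, 2)
    q, k, v = map(split_heads, (q, k, v))

    # Scaled dot-product attention
    scores = (q @ k.transpose(-1, -2)) / head_size ** 0.5
    probs = torch.softmax(scores, dim=-1)

    # Merge heads back and apply the output projection
    context = (probs @ v).transpose(1, 2).reshape(batch, seq_len, hidden_size)
    return context @ out_weight + out_bias

# Example with BERT-base-like dimensions
batch, seq_len, hidden_size, num_heads = 1, 128, 768, 12
hidden_states = torch.rand(batch, seq_len, hidden_size)
qkv_weight = torch.rand(hidden_size, 3 * hidden_size) * 0.02
qkv_bias = torch.zeros(3 * hidden_size)
out_weight = torch.rand(hidden_size, hidden_size) * 0.02
out_bias = torch.zeros(hidden_size)

out = multi_head_attention(hidden_states, qkv_weight, qkv_bias,
                           out_weight, out_bias, num_heads)
print(out.shape)  # torch.Size([1, 128, 768])
```

The ttnn version in the tutorial follows the same structure, replacing the torch operations with their ttnn counterparts and comparing outputs at the end.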
- ttnn Tracer
- ttnn Profiling
- ResNet Basic Block
- Graphing Torch DiT_XL_2 With TTNN