# Troubleshooting

## Code Generation Fails

**Symptom:**

```
ERROR: tt-alchemist generatePython failed
```

**Cause:** The code generation process encountered an error.
**Solutions:**

1. **Check that the export path is writable:**

   ```bash
   mkdir -p <export_path>
   touch <export_path>/test && rm <export_path>/test
   ```
2. **Verify TTIR was generated:**

   ```bash
   ls -lh <export_path>/ttir.mlir
   ```

   If `ttir.mlir` is missing or empty (0 bytes), compilation failed before code generation.

3. **Check for compilation errors:** Review the full output for errors before the "generatePython failed" message.
4. **Try a minimal model:** Test with a simple model to isolate the issue:

   ```python
   class MinimalModel(torch.nn.Module):
       def forward(self, x):
           return x + 1
   ```
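The first two checks above can also be scripted as a pre-flight step. A minimal sketch using only the standard library (it assumes `ttir.mlir` is written at the top level of the export path, as shown above; the `preflight_check` helper itself is hypothetical):

```python
import tempfile
from pathlib import Path

def preflight_check(export_path):
    """Return an error string if the export path or TTIR output looks wrong,
    or None if both checks pass."""
    export_dir = Path(export_path)
    export_dir.mkdir(parents=True, exist_ok=True)

    # Writability check: create and remove a scratch file in the directory.
    try:
        with tempfile.NamedTemporaryFile(dir=export_dir):
            pass
    except OSError:
        return "export path is not writable"

    # TTIR check: the file must exist and be non-empty.
    ttir = export_dir / "ttir.mlir"
    if not ttir.is_file():
        return "ttir.mlir is missing: compilation failed before code generation"
    if ttir.stat().st_size == 0:
        return "ttir.mlir is empty (0 bytes)"
    return None
```

Running this before inspecting compiler output quickly separates filesystem problems from compilation problems.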
## Export Path Not Set

**Symptom:**

```
Compile option 'export_path' must be provided when backend is not 'TTNNFlatbuffer'
```

**Cause:** The `export_path` option is missing.

**Solution:** Add `export_path` to your compiler options:
```python
options = {
    "backend": "codegen_py",
    "export_path": "./output",  # ← Add this
}
```
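The rule behind this error can be checked before compiling. A small sketch that mirrors the error message above (the `validate_options` helper is hypothetical, not part of the tt-alchemist API; the backend names follow the message and example above):

```python
def validate_options(options):
    """Every backend except TTNNFlatbuffer writes generated sources to disk,
    so it needs an export_path; raise early with the compiler's message."""
    if options.get("backend") != "TTNNFlatbuffer" and "export_path" not in options:
        raise ValueError(
            "Compile option 'export_path' must be provided "
            "when backend is not 'TTNNFlatbuffer'"
        )
    return options

validate_options({"backend": "codegen_py", "export_path": "./output"})  # passes
```

Failing fast like this surfaces the misconfiguration in your own code rather than deep inside the compile call.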
## Generated Code Execution Fails

**Symptom:** Errors when running the generated Python code via `./run`

**Possible Causes & Solutions:**
1. **TT-XLA not built:**

   ```bash
   cd /path/to/tt-xla
   cmake --build build
   ```
2. **Hardware not accessible:**

   ```bash
   tt-smi  # Should show your Tenstorrent devices
   ```
3. **Wrong hardware configuration:**
   - Verify the generated code matches your hardware setup.
   - Check device IDs and chip counts.
   - Rebuild TT-XLA if the hardware configuration changed.
4. **Missing dependencies:**

   ```bash
   source venv/bin/activate  # Ensure the virtual environment is active
   ```
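To confirm a virtual environment is actually active (not just created), you can compare Python's prefixes from inside the interpreter; a small standard-library sketch:

```python
import sys

def in_virtualenv():
    """An active venv redirects sys.prefix away from the base installation,
    while sys.base_prefix keeps pointing at the system Python."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

if not in_virtualenv():
    print("warning: no virtual environment active")
```

This is handy inside a generated `run` wrapper, where a missing environment otherwise shows up only as an import error much later.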
## Generated C++ Code Won't Compile

**Symptom:** C++ compilation errors in the generated code

**Solutions:**

1. **Check that the TT-NN headers are available:**

   ```bash
   find /opt/ttmlir-toolchain -name "ttnn*.h"
   ```
2. **Verify the C++ compiler version:** The generated code requires C++17 or later.

3. **Link against the TT-NN library:** Ensure your build system links the TT-NN library correctly.
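The first two checks can be automated from Python. A sketch that looks for the headers and probes whether a compiler accepts `-std=c++17` (the toolchain path matches the `find` command above; the helper names and the choice of probe feature are illustrative, not part of any tt-mlir tooling):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def find_ttnn_headers(toolchain_dir="/opt/ttmlir-toolchain"):
    """Recursively search the toolchain for TT-NN headers (ttnn*.h)."""
    root = Path(toolchain_dir)
    if not root.is_dir():
        return []
    return sorted(root.rglob("ttnn*.h"))

def supports_cpp17(compiler="c++"):
    """Return True/False if `compiler` accepts -std=c++17,
    or None if the compiler is not on PATH."""
    if shutil.which(compiler) is None:
        return None
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "probe.cpp"
        # Structured bindings + class template argument deduction are C++17.
        src.write_text(
            "#include <utility>\n"
            "int main() { auto [a, b] = std::pair{1, 2}; return a + b - 3; }\n"
        )
        result = subprocess.run(
            [compiler, "-std=c++17", "-fsyntax-only", str(src)],
            capture_output=True,
        )
        return result.returncode == 0
```

If `supports_cpp17()` returns `False`, upgrade the compiler before chasing errors in the generated sources.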