tt-torch

tt-torch is a PyTorch 2.0 and torch-mlir based front-end for tt-mlir.

tt-torch uses a venv to keep track of all dependencies. After compiling, you can activate the venv by running the following from the project root directory:

source env/activate

The currently supported models can be found here. There is a brief demo showing how to use the compiler in demos/resnet/resnet50_demo.py.

The general compile flow is:

  1. PyTorch model -> torch.compile, which creates an FX graph
  2. Several compiler passes on the FX graph, including consteval and dead code removal
  3. Conversion to torch-mlir -> torch-backend-mlir -> StableHLO through torch-mlir
  4. Conversion to TTIR -> TTNN -> flatbuffer through tt-mlir
  5. Creating an executor with the flatbuffer and passing it back to the user
  6. Copying inputs to device and executing the flatbuffer through tt-mlir on each user invocation

To speed up model bring-up, users have the option of compiling models op-by-op. This allows in-parallel testing of the model, since compilation does not stop at the first error. If enabled (see Controlling Compilation), compilation stops after step 2 and the FX graph is passed to the executor, which is returned to the user. Upon execution, whenever a new, unique op is seen (based on op type and input shapes), a new FX graph is created containing just that one operation along with its inputs and outputs. This small graph then proceeds through steps 3-4 and is executed in place.
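As a concrete sketch, op-by-op execution can be requested through the compiler configuration. The CompilerConfig, BackendOptions, and backend names below are the same ones used in the examples later in this document; AddOne is just a toy illustrative model:

import torch
from tt_torch.dynamo.backend import backend, BackendOptions
from tt_torch.tools.utils import CompilerConfig, CompileDepth

class AddOne(torch.nn.Module):  # toy model for illustration only
    def forward(self, x):
        return x + 1

# Request op-by-op execution so compilation does not stop at the first failing op.
cc = CompilerConfig()
cc.compile_depth = CompileDepth.EXECUTE_OP_BY_OP

options = BackendOptions()
options.compiler_config = cc

tt_model = torch.compile(AddOne(), backend=backend, options=options)
print(tt_model(torch.ones(5, 5)))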

Results of each unique op execution are stored in a JSON file that can later be parsed into a spreadsheet or uploaded to a database.

Op-by-op execution is currently performed on the PyTorch FX graph; we'll be adding support for op-by-op on the StableHLO graph soon to allow op-by-op bring-up of ONNX models.

The repository uses pre-commit; read more about it here.

Getting Started

System Dependencies

tt-torch requires the Python 3.10 dev package as well as the venv package. If they are not already installed, please run the following:

sudo apt-get install python3.10-dev python3.10-venv

Creating a Virtual Environment (skip if you already have one)

Create a virtual environment if you do not already have one in your project:

python3.10 -m venv myvenv

This will create a virtual environment in the folder myvenv in the current directory.

Activate the environment:

source myvenv/bin/activate

Installing tt-torch

Installation Notes

  • tt-torch requires a PyTorch installation built with the CXX11 ABI (you can verify this with the check shown after this list).
    • The tt-torch wheel lists the following version of torch as an installation requirement: torch@https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.5.0%2Bcpu.cxx11.abi-cp310-cp310-linux_x86_64.whl
    • This will be installed by pip upon installing the tt-torch wheel.
  • The tt-torch wheel contains a fork of torch-mlir. Please ensure that torch-mlir has not been installed in your venv before installing the tt-torch wheel.
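To double-check that the PyTorch installed in your environment was built with the CXX11 ABI, you can run a quick check; torch.compiled_with_cxx11_abi() is standard PyTorch API:

import torch

# Expect a 2.5.0 CXX11-ABI build (e.g. "2.5.0+cpu.cxx11.abi") and True for the ABI check.
print(torch.__version__)
print(torch.compiled_with_cxx11_abi())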

Torchvision Install (required only if you need torchvision)

If you intend to use torchvision in your project, this step must be done before installing the tt-torch wheel.

You will need to build the torchvision wheel yourself with certain build flags. This is because torchvision does not publish a wheel which uses the PyTorch CXX11 ABI.

To install torchvision:

git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.20.0 # tt-torch requires PyTorch 2.5.0. torchvision 0.20 is the latest version of torchvision that is compatible with PyTorch 2.5.0
pip uninstall -y torchvision # Ensure torchvision is not in your virtual environment
pip install wheel
pip install torch@https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.5.0%2Bcpu.cxx11.abi-cp310-cp310-linux_x86_64.whl
TORCHVISION_USE_VIDEO_CODEC=0 TORCHVISION_USE_FFMPEG=0 _GLIBCXX_USE_CXX11_ABI=1 USE_CUDA=OFF python setup.py bdist_wheel
pip install dist/torchvision*.whl --force-reinstall

If the install was successful then there's no need to keep the torchvision source around:

cd ..
rm -rf vision

Installing the tt-torch wheel

Download a tt-torch wheel from here

Install the wheel:

pip install <PATH_TO_TT_TORCH_WHEEL>.whl

Updating PYTHONPATH

In addition to the tt-torch python library that gets installed in <YOUR_ENV_ROOT>/lib/python3.x/site-packages, some binaries will be installed in <YOUR_ENV_ROOT>/lib, and some files from tt-metal will be installed under <YOUR_ENV_ROOT>/tt-metal. Python needs to see these installations and so you should update your PYTHONPATH environment variable to include them:

export PYTHONPATH=$PYTHONPATH:<YOUR_ENV_ROOT>:<YOUR_ENV_ROOT>/lib
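A quick way to confirm that the paths are visible is to import the backend entry point used in the examples below:

# If PYTHONPATH is set up correctly, this import should succeed without errors.
from tt_torch.dynamo.backend import backend
print("tt-torch backend imported successfully")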

Compiling and Running a Model

Once you have your torch.nn.Module, compile the model:

from tt_torch.dynamo.backend import backend
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        ...

    def forward(self, ...):
        ...

model = MyModel()

model = torch.compile(model, backend=backend)

inputs = ...

outputs = model(inputs)

Example - Add Two Tensors

Here is an example of a small model which adds its inputs, running through tt-torch. Try it out!

from tt_torch.dynamo.backend import backend
import torch

class AddTensors(torch.nn.Module):
  def forward(self, x, y):
    return x + y


model = AddTensors()
tt_model = torch.compile(model, backend=backend)

x = torch.ones(5, 5)
y = torch.ones(5, 5)
print(tt_model(x, y))

Prerequisites:

Main project dependencies are:

  • clang 17
  • Ninja
  • CMake >= 3.30
  • python 3.10

On Ubuntu 22.04 systems these can be installed using the following commands:

# Update package list
sudo apt update -y
sudo apt upgrade -y

# Install Clang
sudo apt install clang-17

# Install Ninja
sudo apt install ninja-build

# Install CMake
sudo apt remove cmake -y
pip3 install cmake --upgrade

Ensure cmake can be found in the path pip installed it to, e.g. by adding PATH=$PATH:$HOME/.local/bin to your .bashrc file, and verify the installation:

cmake --version

This project requires the GCC 11 toolchain. To check which GCC toolchain is currently in use, run:

clang -v

Look for the line that starts with: Selected GCC installation:. If it is something other than GCC 11, please uninstall that and install GCC 11 using:

sudo apt-get install gcc-11 lib32stdc++-11-dev lib32gcc-11-dev

The project also requires a toolchain build. By default, the toolchain is built in /opt/ttmlir-toolchain. This path is controlled by the TTMLIR_TOOLCHAIN_DIR environment variable.

The toolchain installation only needs to be done once, by running the following commands:

# Create toolchain dir
sudo mkdir -p /opt/ttmlir-toolchain
sudo chown -R $USER /opt/ttmlir-toolchain


# Build environment
cd third_party
export TTMLIR_TOOLCHAIN_DIR=/opt/ttmlir-toolchain/
cmake -B toolchain -DBUILD_TOOLCHAIN=ON
cd -

For more information see tt-mlir build steps.

Compile Steps:

Run the following commands to compile. Profiling builds require an extra step [1]:

source env/activate
cmake -G Ninja -B build
cmake --build build
cmake --install build

Run a basic test to verify:

pytest tests/torch/test_basic.py
[1] For a profiling build, CMake build files should be generated with an extra directive: cmake -G Ninja -B build -DTT_RUNTIME_ENABLE_PERF_TRACE=ON. Refer to the profiling docs for more information.

tt-torch uses pytest for all unit and model tests.

Tests are organized into unit tests for PyTorch (tests/torch), unit tests for ONNX (tests/onnx), and models (tests/models). They can be run locally by running:

source env/activate
pytest -svv tests/torch

Model tests (tests/models) have the option to run op-by-op, see overview. This enables faster model bring-up, as it lets users find any potential issues in parallel. This is controlled by the --op_by_op_torch or --op_by_op_stablehlo flags. Example:

pytest -svv tests/models/albert --op_by_op_torch

Controlling Compiler Behaviour

You can use the following environment variables to override default behaviour:

Environment Variable | Behaviour | Default
TT_TORCH_COMPILE_DEPTH | Sets the maximum compile depth, see tt_torch/tools/utils.py for options. | EXECUTE
TT_TORCH_VERIFY_OP_BY_OP | Sets whether to verify the output of each compiled op against PyTorch when running with compile depth EXECUTE_OP_BY_OP. | False
TT_TORCH_VERIFY_INTERMEDIATES | Sets whether to verify runtime intermediates during execution. | False
TT_TORCH_CONSTEVAL | Enables evaluation of constant expressions (consteval) in the Torch FX graph prior to compilation. | False
TT_TORCH_CONSTEVAL_PARAMETERS | Extends consteval to include parameters (e.g., model weights) as well as embedded constants. | False
TT_TORCH_INLINE_PARAMETERS | Inlines parameters in the MLIR module (and thus the flatbuffer executable) rather than requiring them as inputs. NOTE: the maximum size of a flatbuffer is 2GB, so this will cause compilation to fail for sufficiently large models. | False
TT_TORCH_IR_LOG_LEVEL | Enables printing MLIR from Torch to TTNN. It supports two modes: INFO and DEBUG. INFO prints MLIR for all conversion steps (Torch, StableHLO, TTIR and TTNN MLIR graphs). DEBUG additionally prints intermediate MLIR for all passes (IR dump before and after each pass). Be warned, DEBUG IR printing forces single-core compile, so it is much slower. | Disabled
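As a minimal sketch, these variables can be set in the shell or from Python before tt-torch compiles the model. The value strings below are assumptions; see tt_torch/tools/utils.py for the exact values each variable accepts:

import os

# Assumed value formats; consult tt_torch/tools/utils.py for the authoritative parsing.
os.environ["TT_TORCH_COMPILE_DEPTH"] = "STABLEHLO"  # stop compilation after the StableHLO stage
os.environ["TT_TORCH_CONSTEVAL"] = "1"              # enable consteval on the Torch FX graph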

Controlling Compiler Behaviour Programmatically

Instead of using the above environment variables, compiler behaviour can also be configured programmatically.

Here is an example of enabling consteval:

from tt_torch.dynamo.backend import backend, BackendOptions
from tt_torch.tools.utils import CompilerConfig
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        ...

    def forward(self, ...):
        ...

model = MyModel()

cc = CompilerConfig()
cc.enable_consteval = True
cc.consteval_parameters = True # This will enable constant folding on the parameters in addition to any constants

options = BackendOptions()
options.compiler_config = cc
model = torch.compile(model, backend=backend, options=options)

inputs = ...

outputs = model(inputs)

Pre-Commit

Pre-Commit applies a Git hook to the local repository, ensuring linting is checked and applied on every git commit action. Install it from the root of the repository using:

source env/activate
pre-commit install

If you have already made commits before installing the pre-commit hooks, you can run the following to “catch up”:

pre-commit run --all-files

For more information visit pre-commit

Profiling

Introduction

tt-torch uses the tt-metal Tracy fork to collect profiling data. Tracy is a single process profiler, and uses a client-server model to trace both host calls and on-device operation performance. tt-torch implements a wrapper called profile.py with custom orchestration logic to handle the spawning of the Tracy capture server and the client workload to be profiled, as well as report generation and data postprocessing functionality.

The output of profile.py is a CSV report displaying a table of operations executed on device and rich timing, memory usage and configuration data associated with them.

Note: Paths in this document are given relative to the repo root.

Prerequisites

In the tt-torch build step (Building), you must configure your CMake build with the additional directive TT_RUNTIME_ENABLE_PERF_TRACE=ON (i.e. run: cmake -G Ninja -B build -DTT_RUNTIME_ENABLE_PERF_TRACE=ON).

Usage

The profile.py tool is the recommended entrypoint for profiling workloads in tt-torch.

profile.py [-h] [-o OUTPUT_PATH] [-p PORT] "test_command"

Note: The test_command must be quoted!

As a minimal example, the following command will run and profile the MNIST test:

python tt_torch/tools/profile.py "pytest -svv tests/models/mnist/test_mnist.py::test_mnist_train[full-eval]"

The report is created at results/perf/device_ops_perf_trace.csv by default, unless an output path is specified.
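The CSV can be opened in any spreadsheet tool or loaded programmatically; here is a minimal sketch using pandas (an extra dependency, not required by tt-torch) against the default output path:

import pandas as pd

# Load the device-op performance report produced by profile.py (default location).
report = pd.read_csv("results/perf/device_ops_perf_trace.csv")
print(report.head())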

Limitations

  • Tracy is a single process profiler and will not work with multiprocessed workflows. This includes tests parameterized by op_by_op_stablehlo and op_by_op_torch, which break down a model into individual ops and run them serially in separate processes.
  • To view traces, you can use install/tt-metal/generated/profiler/.logs/tracy_profile_log_host.tracy.
    • This is a .tracy file that can be consumed by the tt-metal Tracy GUI and produce visual profiling traces of host and device activity.
    • You must use the tt-metal Tracy GUI to view this file. Refer to the GUI section in the tt-metal profiling documentation. Other sections are not applicable to tt-torch profiling.

Troubleshooting

  • 'tt-torch/install/tt-metal/tools/profiler/bin/capture-release -o tracy_profile_log_host.tracy -f -p 8086' timed out after X seconds
    • Tracy uses a client-server model to communicate profiling data between the Tracy capture server and the client being profiled.
    • Communication between client and server is done on a given port (default: 8086) as specified with the -p option.
    • If multiple Tracy client/server processes are active at once, previous processes are left dangling, or other processes on the host occupy port 8086, there may be contention and unexpected behaviour, including capture server timeouts.
    • This may be addressed by manually specifying an unused port with the -p option to profile.py.

How to add model tests?

Requirements

Build your environment

TT-Torch Backend in a nutshell

ModelTester and OnnxModelTester

Our testing framework uses ModelTester and OnnxModelTester, defined under tests/utils.py. ModelTester and OnnxModelTester are designed to facilitate the testing of PyTorch and ONNX models, respectively. These classes provide a structured framework for loading models, preparing inputs, running inference, and verifying the accuracy of the outputs.

ModelTester

The ModelTester class serves as a base class for testing PyTorch models. It handles common testing procedures and provides abstract methods that derived classes implement for specific model loading and input preparation. Derived classes must implement the following abstract methods (a minimal sketch follows the list):

  • _load_model(): This method should load the PyTorch model to be tested and return the model object.
  • _load_inputs(): This method should load or generate the input data for the model and return it. The input should be a Torch object.
  • _extract_outputs() (optional): This method should return a tuple of torch tensors based on the outputs, for cases where the default ModelTester._extract_outputs implementation fails.
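To make this concrete, here is a minimal sketch of a derived tester. TinyLinearTester and the toy model are illustrative only; it assumes the repository's tests/utils.py is importable, and the full test template appears later in this document:

import torch
from tests.utils import ModelTester  # repo-local helper described above

class TinyLinearTester(ModelTester):
    def _load_model(self):
        # Return the PyTorch model under test (a toy example here).
        return torch.nn.Linear(4, 2)

    def _load_inputs(self):
        # Return the model inputs as a torch object.
        return torch.randn(1, 4)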

OnnxModelTester

The OnnxModelTester class inherits from ModelTester and extends it to specifically handle testing of ONNX models.

Derived classes must implement the following abstract methods:

  • _load_model(): This method should load the ONNX model to be tested and return the model object.
  • _load_inputs(): This method should load or generate the input data for the model and return it. The input should be a Torch object.
  • _extract_outputs() (optional): This method should return a tuple of torch tensors based on the outputs, for cases where the default ModelTester._extract_outputs implementation fails.

Backend

Backends are described under tt_torch/dynamo/backend.py and tt_torch/onnx_compile/onnx_compile.py. There are a few factors determining which backend to use:

class CompileDepth(Enum):
    TORCH_FX = 1
    STABLEHLO = 2
    TTNN_IR = 3
    COMPILE_OP_BY_OP = 4
    EXECUTE_OP_BY_OP = 5
    EXECUTE = 6
class OpByOpBackend(Enum):
    TORCH = 1
    STABLEHLO = 2

Backends for Torch Models:

  • Op by Op Flows (COMPILE_OP_BY_OP/ EXECUTE_OP_BY_OP):
    • OpByOpBackend = TORCH --> uses TorchExecutor
    • OpByOpBackend = STABLEHLO --> uses StablehloExecutor
  • Other Compile Depths:
    • Only OpByOpBackend = TORCH is allowed.
    • Uses Executor

Backends for ONNX Models:

  • Op by Op Flows (COMPILE_OP_BY_OP/ EXECUTE_OP_BY_OP): Only OpByOpBackend = STABLEHLO is allowed. Uses StablehloExecutor
  • Other Compile Depths: Only OpByOpBackend = STABLEHLO is allowed. Uses OnnxExecutor
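In practice, the backend selections listed above are made through CompilerConfig, using the same attribute names as the test template later in this document; a minimal sketch:

from tt_torch.tools.utils import CompilerConfig, CompileDepth, OpByOpBackend

cc = CompilerConfig()
cc.compile_depth = CompileDepth.EXECUTE_OP_BY_OP
# TORCH selects TorchExecutor for PyTorch models; STABLEHLO is required for ONNX models.
cc.op_by_op_backend = OpByOpBackend.STABLEHLO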

Executor

TT-Torch provides a set of executor classes that handle different types of models (ONNX, PyTorch) and compilation strategies (full compilation, op-by-op, etc.). The executor classes form a hierarchy, with specialized executors for different scenarios.

Executor (Base)
├── OpByOpExecutor
│   ├── TorchExecutor
│   └── StablehloExecutor
└── OnnxExecutor

Executor (Base Class)

The Executor class is the foundation for all executor implementations. It provides the basic framework for:

  • Managing model representations (PyTorch programs, etc.)
  • Converting input types between different formats
  • Handling constants and model parameters
  • Executing compiled models via TT-MLIR
  • Managing device resources
  • Verifying execution results
Key methods:
  • __call__: Main entry point for executing the model
  • set_binary: Sets the compiled binary for execution
  • typecast_inputs: Converts inputs to hardware-supported types
  • register_intermediate_callback: Sets up callbacks for runtime verification

OpByOpExecutor

OpByOpExecutor extends the base Executor to support operation-by-operation compilation and execution. This allows for:

  • Detailed profiling of individual operations
  • Verification of each operation's outputs
  • Debugging specific operations that might fail
Key methods:
  • compile_op: Compiles a single operation
  • run_op: Executes a single compiled operation

TorchExecutor

TorchExecutor is specialized for handling PyTorch models in an op-by-op fashion. It:

  • Processes PyTorch FX graph modules node by node
  • Converts PyTorch operations to StableHLO
  • Compares outputs with golden (PyTorch) outputs for verification
Key methods:
  • get_stable_hlo_graph: Converts a PyTorch operation to StableHLO IR
  • run_gm_op_by_op: Executes a graph module operation by operation

StablehloExecutor

StablehloExecutor specializes in executing models through the StableHLO IR. It can:

  • Process ONNX models converted to StableHLO
  • Process PyTorch models converted to StableHLO
  • Execute individual StableHLO operations
Key methods:
  • add_program: Adds a PyTorch program to the executor
  • add_onnx_model_proto: Adds an ONNX model to the executor
  • get_stable_hlo_graph: Prepares a StableHLO operation for compilation
  • shlo_op_by_op: Executes StableHLO operations individually

OnnxExecutor

OnnxExecutor is designed for handling ONNX models. It can:

  • Execute ONNX models using ONNX Runtime
  • Execute ONNX models converted to TT-MLIR binaries

CompilerConfig

This class manages settings for running models on Tenstorrent devices. Key aspects include:

  • Compilation Depth: Defines the level of the compilation pipeline to reach.
  • Profiling: Enables the collection of performance data for individual operations.
  • Verification: Controls various checks and validations during compilation.
  • Environment Overrides: Allows configuration through environment variables. This is explained in detail under Controlling Compiler Behaviour

Please see tt_torch/tools/utils.py for detailed information.

How to write a test?

The following is an example test body:

# Insert SPDX licensing. Pre-commit will insert if it is missing
# SPDX-FileCopyrightText: (c) 2025 Tenstorrent AI ULC
#
# SPDX-License-Identifier: Apache-2.0

# some base imports that are required for all tests:
import torch
import pytest
import onnx # for Onnx Tests

from tests.utils import ModelTester # for PyTorch Tests
from tests.utils import OnnxModelTester # for Onnx Tests
from tt_torch.tools.utils import CompilerConfig, CompileDepth, OpByOpBackend

class ThisTester(ModelTester): # or class ThisTester(OnnxModelTester):
    def _load_model(self):
        model = ....
        return model
    def _load_inputs(self):
        inputs = ...
        return inputs

# you can pytest parameterize certain arguments. i.e. Mode, OpByOpBackend, Model Name
@pytest.mark.parametrize(
    "mode",
    ["train", "eval"],
)
@pytest.mark.parametrize(
    "model_name",
    [
        "model_name_0",
        "model_name_1",
    ],
)
@pytest.mark.parametrize(
    "op_by_op",
    [OpByOpBackend.STABLEHLO, OpByOpBackend.TORCH, None],
    ids=["op_by_op_stablehlo", "op_by_op_torch", "full"],
)
# For PyTorch Tests
def <test_name>(record_property, model_name, mode, op_by_op):

    cc = CompilerConfig()
    cc.enable_consteval = True
    cc.consteval_parameters = True
    if op_by_op:
        cc.compile_depth = CompileDepth.EXECUTE_OP_BY_OP
        if op_by_op == OpByOpBackend.STABLEHLO:
            cc.op_by_op_backend = OpByOpBackend.STABLEHLO

    tester = ThisTester(
        model_name,
        mode,
        compiler_config=cc,
        record_property_handle=record_property,
    )
    results = tester.test_model()

    if mode == "eval":
        ...  # code to evaluate that the output is as expected
    tester.finalize()

# For Onnx Tests:
def <test_name>(record_property, model_name, mode, op_by_op):
    cc = CompilerConfig()
    if op_by_op:
        cc.compile_depth = CompileDepth.EXECUTE_OP_BY_OP
        cc.op_by_op_backend = OpByOpBackend.STABLEHLO

    tester = ThisTester(
        model_name,
        mode,
        compiler_config=cc,
        record_property_handle=record_property,
        model_group="red",
    )

    results = tester.test_model()
    if mode == "eval":
        ...  # code to evaluate that the output is as expected
    tester.finalize()

You can find example tests under tests/models. Note: please make sure to distinguish ONNX tests by appending _onnx to test names, e.g. test_EfficientNet_onnx.py.

Test run modes

  • op-by-op flow: This will break down the model into graphs and break down the graphs into ops, compiling and executing unique (first-seen occurrence) ops independently. Results are written to a .json file and optionally converted to an XLS file for reporting as a post-processing step. The op-by-op flow is typically used for bringing up new models and debugging, and you should start there, especially if the model is a new, untested architecture or you have reason to believe it will not work end-to-end out of the box. Engaged with cc.compile_depth = CompileDepth.EXECUTE_OP_BY_OP in the test, typically driven by pytest params [op_by_op_torch-eval].

  • full end-to-end flow: This is the typical compile + execute of the model that typically includes functional correctness checking. Engaged with cc.compile_depth = CompileDepth.EXECUTE in test, typically driven by pytest params [full-eval].

Where to add tests on tt-torch GitHub CI?

If you're a Tenstorrent internal developer and have a new model that is either running fully/correctly or still needs some work (compiler support, runtime support, etc.), it should be added to CI in the same PR in which you add the model. Below is a guide for where to add it.

Case 1: The new model test runs correctly end-to-end

If you've tried it and it runs – great!

  • Add it to run in "nightly full model execute list" in .github/workflows/run-full-model-execution-tests-nightly.yml while ideally balancing existing groups of tests. Example:
tests/models/Qwen/test_qwen2_casual_lm.py::test_qwen2_casual_lm[full-Qwen/Qwen2.5-1.5B-eval]
  • Also add it to "weekly op-by-op-flow list" in .github/workflows/run-op-by-op-flow-tests-weekly.yml where we less frequently run tests that have all ops passing through to EXECUTE depth in op-by-op flow. Example:
tests/models/Qwen/test_qwen2_casual_lm.py::test_qwen2_casual_lm[op_by_op_torch-Qwen/Qwen2.5-1.5B-eval]

Case 2: The new model test runs end-to-end but encounters a PCC/ATOL/Checker error

This is okay, there is still value in running the model.

  • Follow the previous section's instructions for adding it to the "nightly full model execute" and "weekly op-by-op-flow" lists, but first open a GitHub issue (follow the template and use the models_pcc_issue label, like the example below) to track the PCC/ATOL/Checker error, reference it in the test body so it can be tracked/debugged, and disable PCC/ATOL/Token checking as needed. Example:
# TODO Enable checking - https://github.com/tenstorrent/tt-torch/issues/490
assert_pcc=False,
assert_atol=False,

Case 3: The new model test does not run correctly end-to-end

No problem. If your end-to-end model hits a compiler failure (unsupported op, etc.) or a runtime assert of any kind, this is why the op-by-op flow exists. The op-by-op flow is designed to flag per-op compile/runtime failures (which are perfectly fine) but is expected to return an overall passed status.

  • Go ahead and run the op-by-op flow locally (or on CI) for your model, and if the pytest finishes without fatal errors, add it to the "nightly op-by-op flow list" (a new or existing group) in .github/workflows/run-op-by-op-flow-tests-nightly.yml where individual ops will be tracked/debugged and later promoted to "nightly full model execute list" once ready. Example:
tests/models/t5/test_t5.py::test_t5[op_by_op_torch-t5-large-eval]
  • It is helpful if you can run python results/parse_op_by_op_results.py (this will generate results/models_op_per_op.xlsx for all models you've recently run in the op-by-op flow) and include the XLS file in your PR. This XLS file contains op-by-op-flow results and is also generated in the Nightly regression for all work-in-progress models in .github/workflows/run-op-by-op-flow-tests-nightly.yml.

  • If your model is reported in results/models_op_per_op.xlsx as being able to compile all ops successfully (i.e. all ops can compile to status 6: CONVERTED_TO_TTNN, but some hit runtime 7: EXECUTE failures) then it should also be added to the "nightly e2e compile list" in .github/workflows/run-e2e-compile-tests.yml, which stops before executing the model via TT_TORCH_COMPILE_DEPTH=TTNN_IR pytest ...

How to load test files into/from Large File System (LFS)

We have set up an AWS S3 bucket to load and access model-related files for testing. We can load files into the S3 bucket and access them from the tester scripts. You will need access to the S3 bucket portal to add files; if you don't have an AWS account or access to the S3 bucket, please reach out to the tt-torch community leader. Then, depending on whether the test is running on CI or locally, the files can be loaded from the CI/IRD LFS caches, which automatically sync with the contents of the S3 bucket.

Load files into S3 bucket

Access the S3 bucket portal (if you don't have access to the S3 bucket, please reach out to the tt-torch community leader) and upload the file from your local directory. Please add files following this structure:

test_files
├── pytorch
|   ├── huggingface
|   |   ├── meta-llama
│   |   |   ├── Llama-3.1-70B
│   |   |   |   └── <huggingface files>
│   |   |   ├── Llama-2-7b-hf
│   |   |   |   └── <huggingface files>
│   |   |   └── ...
│   |   └── ...
│   ├── yolov10
│   |   └── yolov10.pt
│   └── ...
└── onnx
    ├── ViT
    |   └── ViT.onnx
    └── ...

Load files from S3 bucket

Once a file is loaded into the S3 bucket, we can access it using a helper function:

# Helper defined in tests/utils.py:
#   @staticmethod
#   def get_file(s3_path): ...

from tests.utils import ModelTester, get_file, skip_full_eval_test

...
class ThisTester(ModelTester):
    def _load_model(self):
        file = get_file("test_files/pytorch/yoloyv10/yolov_10n.pt")

...

The s3_path arg should be the full path of the file in the S3 bucket.

Loading files locally

Locally, get_file() will pull files directly from an IRD LFS cache. The IRD LFS cache is set up to sync with the S3 bucket every 5-10 minutes. You will need to set the IRD_LF_CACHE environment variable to the appropriate address; contact the tt-torch community leader for the IRD LFS cache address.

The file(s) will be downloaded into a local cache, so the next time you want to access the same file we won't have to go through the IRD cache. The default location for the local cache is ~/.cache/. If you want to redirect files to a custom cache path, set the LOCAL_LF_CACHE env variable to the desired path.
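For example, before running tests locally you might set the following; both values here are placeholders, and the real IRD LFS cache address comes from the tt-torch community leader:

import os

# Placeholder values for illustration only.
os.environ["IRD_LF_CACHE"] = "<IRD_LF_CACHE_ADDRESS>"
os.environ["LOCAL_LF_CACHE"] = "/path/to/custom/cache"  # optional; defaults to ~/.cache/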

Loading files from CI

Once a file has been loaded into the S3 bucket, the CI's shared DOCKER_CACHE_DIR (which is set up to sync with the contents of the S3 bucket every hour) will contain it. get_file() will fetch the file from the DOCKER_CACHE_DIR.

Supported Models

The following models can currently be run through tt-torch as of Feb 3rd, 2025. Please note, there is a known bug causing incorrect output for some models; the PCC is displayed at the end of each test below. This issue will be addressed soon.

Model Name | Variant | Pytest Command
Albert | Masked LM Base | tests/models/albert/test_albert_masked_lm.py::test_albert_masked_lm[full-albert/albert-base-v2-eval]
 | Masked LM Large | tests/models/albert/test_albert_masked_lm.py::test_albert_masked_lm[full-albert/albert-large-v2-eval]
 | Masked LM XLarge | tests/models/albert/test_albert_masked_lm.py::test_albert_masked_lm[full-albert/albert-xlarge-v2-eval]
 | Masked LM XXLarge | tests/models/albert/test_albert_masked_lm.py::test_albert_masked_lm[full-albert/albert-xxlarge-v2-eval]
 | Sequence Classification Base | tests/models/albert/test_albert_sequence_classification.py::test_albert_sequence_classification[full-textattack/albert-base-v2-imdb-eval]
 | Token Classification Base | tests/models/albert/test_albert_token_classification.py::test_albert_token_classification[full-albert/albert-base-v2-eval]
Autoencoder (linear) |  | tests/models/autoencoder_linear/test_autoencoder_linear.py::test_autoencoder_linear[full-eval]
DistilBert | base uncased | tests/models/distilbert/test_distilbert.py::test_distilbert[full-distilbert-base-uncased-eval]
Llama | 3B | tests/models/llama/test_llama_3b.py::test_llama_3b[full-meta-llama/Llama-3.2-3B-eval]
MLPMixer |  | tests/models/mlpmixer/test_mlpmixer.py::test_mlpmixer[full-eval]
MNist |  | pytest -svv tests/models/mnist/test_mnist.py::test_mnist_train[full-eval]
MobileNet V2 |  | tests/models/MobileNetV2/test_MobileNetV2.py::test_MobileNetV2[full-eval]
 | TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-mobilenet_v2]
MobileNet V3 | Small TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-mobilenet_v3_small]
 | Large TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-mobilenet_v3_large]
OpenPose |  | tests/models/openpose/test_openpose_v2.py::test_openpose_v2[full-eval]
Perceiver_IO |  | tests/models/perceiver_io/test_perceiver_io.py::test_perceiver_io[full-eval]
ResNet | 18 | tests/models/resnet/test_resnet.py::test_resnet[full-eval]
 | 18 TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnet18]
 | 34 TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnet34]
 | 50 | tests/models/resnet50/test_resnet50.py::test_resnet[full-eval]
 | 50 TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnet50]
 | 101 TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnet101]
 | 152 TorchVision | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnet152]
Wide ResNet | 50 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-wide_resnet50_2]
 | 101 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-wide_resnet101_2]
ResNext | 50 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnext50_32x4d]
 | 101_32x8d | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnext101_32x8d]
 | 101_64x4d | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-resnext101_64x4d]
Regnet | y 400 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_400mf]
 | y 800 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_800mf]
 | y 1 6 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_1_6gf]
 | y 3 2 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_3_2gf]
 | y 8 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_8gf]
 | y 16 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_16gf]
 | y 32 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_y_32gf]
 | x 400 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_400mf]
 | x 800 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_800mf]
 | x 1 6 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_1_6gf]
 | x 3 2 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_3_2gf]
 | x 8 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_8gf]
 | x 16 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_16gf]
 | x 32 | tests/models/torchvision/test_torchvision_image_classification.py::test_torchvision_image_classification[full-regnet_x_32gf]
YoloV3 |  | tests/models/yolov3/test_yolov3.py::test_yolov3[full-eval]

Ops Documentation

This section contains documentation for Ops operations.

Stablehlo Documentation

This section contains documentation for Stablehlo operations.

arith.constant

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1]> |  | aten::_safe_softmax | 4

stablehlo.add::ttnn.add

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[256,256]>, Tensor<[256,256]> | ttnn.add | aten::add.Tensor | 6
1 | Tensor<[1,1,32,32]>, Tensor<[1,1,32,32]> | ttnn.add | aten::add.Tensor | 4
2 | Tensor<[1,32,1]>, Tensor<[1,32,1]> | ttnn.add | aten::add.Tensor | 4
3 | Tensor<[1,32,32,128]>, Tensor<[1,32,32,128]> | ttnn.add | aten::add.Tensor | 5
4 | Tensor<[1,32,32,32]>, Tensor<[1,32,32,32]> | ttnn.add | aten::add.Tensor | 4
5 | Tensor<[1,32,4096]>, Tensor<[1,32,4096]> | ttnn.add | aten::add.Tensor | 5
6 | Tensor<[32]>, Tensor<[32]> | ttnn.add | aten::arange | 4
7 | Tensor<[32,1]>, Tensor<[32,1]> | ttnn.add | aten::triu | 4
8 | Tensor<[1,7,768]>, Tensor<[1,7,768]> | ttnn.add | aten::add.Tensor | 5
9 | Tensor<[7]>, Tensor<[7]> | ttnn.add | aten::add.Tensor | 4
10 | Tensor<[1,7,1]>, Tensor<[1,7,1]> | ttnn.add | aten::add.Tensor | 4
11 | Tensor<[7,2304]>, Tensor<[7,2304]> | ttnn.add | aten::add.Tensor | 4
12 | Tensor<[1,12,7,7]>, Tensor<[1,12,7,7]> | ttnn.add | aten::add.Tensor | 4
13 | Tensor<[7,768]>, Tensor<[7,768]> | ttnn.add | aten::add.Tensor | 4
14 | Tensor<[7,3072]>, Tensor<[7,3072]> | ttnn.add | aten::add.Tensor | 4
15 | Tensor<[1,7,3072]>, Tensor<[1,7,3072]> | ttnn.add | aten::add.Tensor | 5
16 | Tensor<[1]>, Tensor<[1]> | ttnn.add | aten::arange | 4
17 | Tensor<[1,32,112,112]>, Tensor<[1,32,112,112]> | ttnn.add | aten::add.Tensor | 4
18 | Tensor<[64]>, Tensor<[64]> | ttnn.add | aten::add.Tensor | 4
19 | Tensor<[1,64,112,112]>, Tensor<[1,64,112,112]> | ttnn.add | aten::add.Tensor | 4
20 | Tensor<[1,64,56,56]>, Tensor<[1,64,56,56]> | ttnn.add | aten::add.Tensor | 4
21 | Tensor<[128]>, Tensor<[128]> | ttnn.add | aten::add.Tensor | 4
22 | Tensor<[1,128,56,56]>, Tensor<[1,128,56,56]> | ttnn.add | aten::add.Tensor | 4
23 | Tensor<[1,128,28,28]>, Tensor<[1,128,28,28]> | ttnn.add | aten::add.Tensor | 4
24 | Tensor<[256]>, Tensor<[256]> | ttnn.add | aten::add.Tensor | 4
25 | Tensor<[1,256,28,28]>, Tensor<[1,256,28,28]> | ttnn.add | aten::add.Tensor | 4
26 | Tensor<[512]>, Tensor<[512]> | ttnn.add | aten::add.Tensor | 4
27 | Tensor<[1,512,28,28]>, Tensor<[1,512,28,28]> | ttnn.add | aten::add.Tensor | 4
28 | Tensor<[1,19,28,28]>, Tensor<[1,19,28,28]> | ttnn.add | aten::convolution | 4
29 | Tensor<[1,38,28,28]>, Tensor<[1,38,28,28]> | ttnn.add | aten::convolution | 4
30 | Tensor<[256,512]>, Tensor<[256,512]> | ttnn.add | aten::add.Tensor | 4
31 | Tensor<[1,256,1]>, Tensor<[1,256,1]> | ttnn.add | aten::add.Tensor | 4
32 | Tensor<[1,256,512]>, Tensor<[1,256,512]> | ttnn.add | aten::add.Tensor | 4
33 | Tensor<[1,1000]>, Tensor<[1,1000]> | ttnn.add | aten::add.Tensor | 4
34 | Tensor<[1,1024,512]>, Tensor<[1,1024,512]> | ttnn.add | aten::convolution | 4
35 | Tensor<[1,256,256]>, Tensor<[1,256,256]> | ttnn.add | aten::gelu | 4
36 | Tensor<[1,64,1,1]>, Tensor<[1,64,1,1]> | ttnn.add | aten::add.Tensor | 4
37 | Tensor<[1,64,360,640]>, Tensor<[1,64,360,640]> | ttnn.add | aten::add.Tensor | 4
38 | Tensor<[1,64,180,320]>, Tensor<[1,64,180,320]> | ttnn.add | aten::add.Tensor | 4
39 | Tensor<[1,256,1,1]>, Tensor<[1,256,1,1]> | ttnn.add | aten::add.Tensor | 4
40 | Tensor<[1,256,180,320]>, Tensor<[1,256,180,320]> | ttnn.add | aten::add.Tensor | 4
41 | Tensor<[1,128,1,1]>, Tensor<[1,128,1,1]> | ttnn.add | aten::add.Tensor | 4
42 | Tensor<[1,128,180,320]>, Tensor<[1,128,180,320]> | ttnn.add | aten::add.Tensor | 4
43 | Tensor<[1,128,90,160]>, Tensor<[1,128,90,160]> | ttnn.add | aten::add.Tensor | 4
44 | Tensor<[1,512,1,1]>, Tensor<[1,512,1,1]> | ttnn.add | aten::add.Tensor | 4
45 | Tensor<[1,512,90,160]>, Tensor<[1,512,90,160]> | ttnn.add | aten::add.Tensor | 4
46 | Tensor<[1,256,90,160]>, Tensor<[1,256,90,160]> | ttnn.add | aten::add.Tensor | 4
47 | Tensor<[1,256,45,80]>, Tensor<[1,256,45,80]> | ttnn.add | aten::add.Tensor | 4
48 | Tensor<[1,1024,1,1]>, Tensor<[1,1024,1,1]> | ttnn.add | aten::add.Tensor | 4
49 | Tensor<[1,1024,45,80]>, Tensor<[1,1024,45,80]> | ttnn.add | aten::add.Tensor | 4
50 | Tensor<[1,512,45,80]>, Tensor<[1,512,45,80]> | ttnn.add | aten::add.Tensor | 4
51 | Tensor<[1,512,23,40]>, Tensor<[1,512,23,40]> | ttnn.add | aten::add.Tensor | 4
52 | Tensor<[1,2048,1,1]>, Tensor<[1,2048,1,1]> | ttnn.add | aten::add.Tensor | 4
53 | Tensor<[1,2048,23,40]>, Tensor<[1,2048,23,40]> | ttnn.add | aten::add.Tensor | 4
54 | Tensor<[23]>, Tensor<[23]> | ttnn.add | aten::add.Tensor | 4
55 | Tensor<[40]>, Tensor<[40]> | ttnn.add | aten::add.Tensor | 4
56 | Tensor<[1,1,40]>, Tensor<[1,1,40]> | ttnn.add | aten::add.Tensor | 4
57 | Tensor<[1,23,1]>, Tensor<[1,23,1]> | ttnn.add | aten::add.Tensor | 4
58 | Tensor<[920,1,256]>, Tensor<[920,1,256]> | ttnn.add | aten::add.Tensor | 5
59 | Tensor<[920,256]>, Tensor<[920,256]> | ttnn.add | aten::add.Tensor | 4
60 | Tensor<[920,1,1]>, Tensor<[920,1,1]> | ttnn.add | aten::add.Tensor | 4
61 | Tensor<[920,2048]>, Tensor<[920,2048]> | ttnn.add | aten::add.Tensor | 4
62 | Tensor<[100,1,256]>, Tensor<[100,1,256]> | ttnn.add | aten::add.Tensor | 5
63 | Tensor<[100,256]>, Tensor<[100,256]> | ttnn.add | aten::add.Tensor | 4
64 | Tensor<[100,1,1]>, Tensor<[100,1,1]> | ttnn.add | aten::add.Tensor | 4
65 | Tensor<[100,2048]>, Tensor<[100,2048]> | ttnn.add | aten::add.Tensor | 4
66 | Tensor<[6,1,100,92]>, Tensor<[6,1,100,92]> | ttnn.add | aten::add.Tensor | 4
67 | Tensor<[6,1,100,256]>, Tensor<[6,1,100,256]> | ttnn.add | aten::add.Tensor | 4
68 | Tensor<[6,1,100,4]>, Tensor<[6,1,100,4]> | ttnn.add | aten::add.Tensor | 4
69 | Tensor<[8,920,920]>, Tensor<[8,920,920]> | ttnn.add | aten::baddbmm | 4
70 | Tensor<[8,100,920]>, Tensor<[8,100,920]> | ttnn.add | aten::baddbmm | 4
71 | Tensor<[1,256,23,40]>, Tensor<[1,256,23,40]> | ttnn.add | aten::convolution | 4
72 | Tensor<[1,10]>, Tensor<[1,10]> | ttnn.add | aten::add.Tensor | 5
73 | Tensor<[1,10,768]>, Tensor<[1,10,768]> | ttnn.add | aten::add.Tensor | 5
74 | Tensor<[1,10,1]>, Tensor<[1,10,1]> | ttnn.add | aten::add.Tensor | 4
75 | Tensor<[10,768]>, Tensor<[10,768]> | ttnn.add | aten::add.Tensor | 4
76 | Tensor<[1,12,10,10]>, Tensor<[1,12,10,10]> | ttnn.add | aten::add.Tensor | 4
77 | Tensor<[10,3072]>, Tensor<[10,3072]> | ttnn.add | aten::add.Tensor | 4
78 | Tensor<[10,250002]>, Tensor<[10,250002]> | ttnn.add | aten::add.Tensor | 4
79 | Tensor<[1,10,3072]>, Tensor<[1,10,3072]> | ttnn.add | aten::gelu | 4
80 | Tensor<[1,1280]>, Tensor<[1,1280]> | ttnn.add | aten::add.Tensor | 4
81 | Tensor<[1,32,1,1]>, Tensor<[1,32,1,1]> | ttnn.add | aten::add.Tensor | 4
82 | Tensor<[1,320,64,64]>, Tensor<[1,320,64,64]> | ttnn.add | aten::add.Tensor | 4
83 | Tensor<[1,320]>, Tensor<[1,320]> | ttnn.add | aten::add.Tensor | 4
84 | Tensor<[1,4096,1]>, Tensor<[1,4096,1]> | ttnn.add | aten::add.Tensor | 4
85 | Tensor<[1,4096,320]>, Tensor<[1,4096,320]> | ttnn.add | aten::add.Tensor | 4
86 | Tensor<[4096,320]>, Tensor<[4096,320]> | ttnn.add | aten::add.Tensor | 4
87 | Tensor<[4096,2560]>, Tensor<[4096,2560]> | ttnn.add | aten::add.Tensor | 4
88 | Tensor<[1,320,32,32]>, Tensor<[1,320,32,32]> | ttnn.add | aten::add.Tensor | 4
89 | Tensor<[1,640]>, Tensor<[1,640]> | ttnn.add | aten::add.Tensor | 4
90 | Tensor<[1,640,32,32]>, Tensor<[1,640,32,32]> | ttnn.add | aten::add.Tensor | 4
91 | Tensor<[1,1024,1]>, Tensor<[1,1024,1]> | ttnn.add | aten::add.Tensor | 4
92 | Tensor<[1,1024,640]>, Tensor<[1,1024,640]> | ttnn.add | aten::add.Tensor | 4
93 | Tensor<[1024,640]>, Tensor<[1024,640]> | ttnn.add | aten::add.Tensor | 4
94 | Tensor<[1024,5120]>, Tensor<[1024,5120]> | ttnn.add | aten::add.Tensor | 4
95 | Tensor<[1,640,16,16]>, Tensor<[1,640,16,16]> | ttnn.add | aten::add.Tensor | 4
96 | Tensor<[1,1280,16,16]>, Tensor<[1,1280,16,16]> | ttnn.add | aten::add.Tensor | 4
97 | Tensor<[1,256,1280]>, Tensor<[1,256,1280]> | ttnn.add | aten::add.Tensor | 4
98 | Tensor<[256,1280]>, Tensor<[256,1280]> | ttnn.add | aten::add.Tensor | 4
99 | Tensor<[256,10240]>, Tensor<[256,10240]> | ttnn.add | aten::add.Tensor | 4
100 | Tensor<[1,1280,8,8]>, Tensor<[1,1280,8,8]> | ttnn.add | aten::add.Tensor | 4
101 | Tensor<[1,64,1]>, Tensor<[1,64,1]> | ttnn.add | aten::add.Tensor | 4
102 | Tensor<[1,64,1280]>, Tensor<[1,64,1280]> | ttnn.add | aten::add.Tensor | 4
103 | Tensor<[64,1280]>, Tensor<[64,1280]> | ttnn.add | aten::add.Tensor | 4
104 | Tensor<[64,10240]>, Tensor<[64,10240]> | ttnn.add | aten::add.Tensor | 4
105 | Tensor<[1,2560,8,8]>, Tensor<[1,2560,8,8]> | ttnn.add | aten::add.Tensor | 4
106 | Tensor<[16]>, Tensor<[16]> | ttnn.add | aten::add.Tensor | 4
107 | Tensor<[1,2560,16,16]>, Tensor<[1,2560,16,16]> | ttnn.add | aten::add.Tensor | 4
108 | Tensor<[1,1920,16,16]>, Tensor<[1,1920,16,16]> | ttnn.add | aten::add.Tensor | 4
109 | Tensor<[1,1920,32,32]>, Tensor<[1,1920,32,32]> | ttnn.add | aten::add.Tensor | 4
110 | Tensor<[1,1280,32,32]>, Tensor<[1,1280,32,32]> | ttnn.add | aten::add.Tensor | 4
111 | Tensor<[1,960,32,32]>, Tensor<[1,960,32,32]> | ttnn.add | aten::add.Tensor | 4
112 | Tensor<[1,960,64,64]>, Tensor<[1,960,64,64]> | ttnn.add | aten::add.Tensor | 4
113 | Tensor<[1,640,64,64]>, Tensor<[1,640,64,64]> | ttnn.add | aten::add.Tensor | 4
114 | Tensor<[160]>, Tensor<[160]> | ttnn.add | aten::arange.start | 4
115 | Tensor<[1,4,64,64]>, Tensor<[1,4,64,64]> | ttnn.add | aten::convolution | 4
116 | Tensor<[1,4096,1280]>, Tensor<[1,4096,1280]> | ttnn.add | aten::gelu | 4
117 | Tensor<[1,1024,2560]>, Tensor<[1,1024,2560]> | ttnn.add | aten::gelu | 4
118 | Tensor<[1,256,5120]>, Tensor<[1,256,5120]> | ttnn.add | aten::gelu | 4
119 | Tensor<[1,64,5120]>, Tensor<[1,64,5120]> | ttnn.add | aten::gelu | 4
120 | Tensor<[1280]>, Tensor<[1280]> | ttnn.add | aten::index.Tensor | 4
121 | Tensor<[640]>, Tensor<[640]> | ttnn.add | aten::index.Tensor | 4
122 | Tensor<[1,25,768]>, Tensor<[1,25,768]> | ttnn.add | aten::add.Tensor | 5
123 | Tensor<[1,25,1]>, Tensor<[1,25,1]> | ttnn.add | aten::add.Tensor | 4
124 | Tensor<[25,768]>, Tensor<[25,768]> | ttnn.add | aten::add.Tensor | 4
125 | Tensor<[1,12,25,25]>, Tensor<[1,12,25,25]> | ttnn.add | aten::add.Tensor | 4
126 | Tensor<[25,3072]>, Tensor<[25,3072]> | ttnn.add | aten::add.Tensor | 4
127 | Tensor<[25,2]>, Tensor<[25,2]> | ttnn.add | aten::add.Tensor | 4
128 | Tensor<[1,1]>, Tensor<[1,1]> | ttnn.add | aten::add.Tensor | 4
129 | Tensor<[1,25,3072]>, Tensor<[1,25,3072]> | ttnn.add | aten::gelu | 4
130 | Tensor<[1,1445,192]>, Tensor<[1,1445,192]> | ttnn.add | aten::add.Tensor | 5
131 | Tensor<[1,1445,1]>, Tensor<[1,1445,1]> | ttnn.add | aten::add.Tensor | 4
132 | Tensor<[1445,192]>, Tensor<[1445,192]> | ttnn.add | aten::add.Tensor | 4
133 | Tensor<[1445,768]>, Tensor<[1445,768]> | ttnn.add | aten::add.Tensor | 4
134 | Tensor<[100,192]>, Tensor<[100,192]> | ttnn.add | aten::add.Tensor | 4
135 | Tensor<[100,92]>, Tensor<[100,92]> | ttnn.add | aten::add.Tensor | 4
136 | Tensor<[100,4]>, Tensor<[100,4]> | ttnn.add | aten::add.Tensor | 4
137 | Tensor<[1,192,32,42]>, Tensor<[1,192,32,42]> | ttnn.add | aten::convolution | 4
138 | Tensor<[1,1445,768]>, Tensor<[1,1445,768]> | ttnn.add | aten::gelu | 4
139 | Tensor<[1,256,14,14]>, Tensor<[1,256,14,14]> | ttnn.add | aten::add.Tensor | 4
140 | Tensor<[1,512,7,7]>, Tensor<[1,512,7,7]> | ttnn.add | aten::add.Tensor | 4
141 | Tensor<[1,8,768]>, Tensor<[1,8,768]> | ttnn.add | aten::add.Tensor | 5
142 | Tensor<[1,8,1]>, Tensor<[1,8,1]> | ttnn.add | aten::add.Tensor | 4
143 | Tensor<[1,12,8,8]>, Tensor<[1,12,8,8]> | ttnn.add | aten::add.Tensor | 4
144 | Tensor<[1,768,8]>, Tensor<[1,768,8]> | ttnn.add | aten::add.Tensor | 5
145 | Tensor<[1,768]>, Tensor<[1,768]> | ttnn.add | aten::add.Tensor | 4
146 | Tensor<[1,3]>, Tensor<[1,3]> | ttnn.add | aten::add.Tensor | 4
147 | Tensor<[1,3072,8]>, Tensor<[1,3072,8]> | ttnn.add | aten::convolution | 4
148 | Tensor<[1,2048,768]>, Tensor<[1,2048,768]> | ttnn.add | aten::add.Tensor | 4
149 | Tensor<[1,2048,1]>, Tensor<[1,2048,1]> | ttnn.add | aten::add.Tensor | 4
150 | Tensor<[2048,256]>, Tensor<[2048,256]> | ttnn.add | aten::add.Tensor | 4
151 | Tensor<[2048,1280]>, Tensor<[2048,1280]> | ttnn.add | aten::add.Tensor | 4
152 | Tensor<[1,8,256,2048]>, Tensor<[1,8,256,2048]> | ttnn.add | aten::add.Tensor | 4
153 | Tensor<[256,768]>, Tensor<[256,768]> | ttnn.add | aten::add.Tensor | 4
154 | Tensor<[2048,768]>, Tensor<[2048,768]> | ttnn.add | aten::add.Tensor | 4
155 | Tensor<[2048,262]>, Tensor<[2048,262]> | ttnn.add | aten::add.Tensor | 4
156 | Tensor<[2048]>, Tensor<[2048]> | ttnn.add | aten::arange.start | 4
157 | Tensor<[1,256,56,56]>, Tensor<[1,256,56,56]> | ttnn.add | aten::add.Tensor | 4
158 | Tensor<[1024]>, Tensor<[1024]> | ttnn.add | aten::add.Tensor | 4
159 | Tensor<[1,1024,14,14]>, Tensor<[1,1024,14,14]> | ttnn.add | aten::add.Tensor | 4
160 | Tensor<[1,512,14,14]>, Tensor<[1,512,14,14]> | ttnn.add | aten::add.Tensor | 4
161 | Tensor<[1,2048,7,7]>, Tensor<[1,2048,7,7]> | ttnn.add | aten::add.Tensor | 4
162 | Tensor<[12]>, Tensor<[12]> | ttnn.add | aten::add.Tensor | 4
163 | Tensor<[1,193,768]>, Tensor<[1,193,768]> | ttnn.add | aten::add.Tensor | 5
164 | Tensor<[1,201,1]>, Tensor<[1,201,1]> | ttnn.add | aten::add.Tensor | 4
165 | Tensor<[1,201,768]>, Tensor<[1,201,768]> | ttnn.add | aten::add.Tensor | 4
166 | Tensor<[201,768]>, Tensor<[201,768]> | ttnn.add | aten::add.Tensor | 4
167 | Tensor<[1,12,201,201]>, Tensor<[1,12,201,201]> | ttnn.add | aten::add.Tensor | 4
168 | Tensor<[201,3072]>, Tensor<[201,3072]> | ttnn.add | aten::add.Tensor | 4
169 | Tensor<[1,1536]>, Tensor<[1,1536]> | ttnn.add | aten::add.Tensor | 4
170 | Tensor<[1,3129]>, Tensor<[1,3129]> | ttnn.add | aten::add.Tensor | 4
171 | Tensor<[1,768,12,16]>, Tensor<[1,768,12,16]> | ttnn.add | aten::convolution | 4
172 | Tensor<[1,201,3072]>, Tensor<[1,201,3072]> | ttnn.add | aten::gelu | 4
173 | Tensor<[1,128]>, Tensor<[1,128]> | ttnn.add | aten::add.Tensor | 4
174 | Tensor<[1,32,26,26]>, Tensor<[1,32,26,26]> | ttnn.add | aten::convolution | 4
175 | Tensor<[1,64,24,24]>, Tensor<[1,64,24,24]> | ttnn.add | aten::convolution | 4
176 | Tensor<[19]>, Tensor<[19]> | ttnn.add | aten::add.Tensor | 4
177 | Tensor<[1,19]>, Tensor<[1,19]> | ttnn.add | aten::add.Tensor | 4
178 | Tensor<[1,19,1024]>, Tensor<[1,19,1024]> | ttnn.add | aten::add.Tensor | 5
179 | Tensor<[1,19,1]>, Tensor<[1,19,1]> | ttnn.add | aten::add.Tensor | 4
180 | Tensor<[19,1024]>, Tensor<[19,1024]> | ttnn.add | aten::add.Tensor | 4
181 | Tensor<[1,16,19,19]>, Tensor<[1,16,19,19]> | ttnn.add | aten::add.Tensor | 4
182 | Tensor<[19,4096]>, Tensor<[19,4096]> | ttnn.add | aten::add.Tensor | 4
183 | Tensor<[1,19,4096]>, Tensor<[1,19,4096]> | ttnn.add | aten::gelu | 4
184 | Tensor<[14]>, Tensor<[14]> | ttnn.add | aten::add.Tensor | 4
185 | Tensor<[1,14,56,56]>, Tensor<[1,14,56,56]> | ttnn.add | aten::add.Tensor | 4
186 | Tensor<[24]>, Tensor<[24]> | ttnn.add | aten::add.Tensor | 4
187 | Tensor<[1,24,56,56]>, Tensor<[1,24,56,56]> | ttnn.add | aten::add.Tensor | 4
188 | Tensor<[1,40,56,56]>, Tensor<[1,40,56,56]> | ttnn.add | aten::add.Tensor | 4
189 | Tensor<[68]>, Tensor<[68]> | ttnn.add | aten::add.Tensor | 4
190 | Tensor<[1,68,56,56]>, Tensor<[1,68,56,56]> | ttnn.add | aten::add.Tensor | 4
191 | Tensor<[1,16,28,28]>, Tensor<[1,16,28,28]> | ttnn.add | aten::add.Tensor | 4
192 | Tensor<[28]>, Tensor<[28]> | ttnn.add | aten::add.Tensor | 4
193 | Tensor<[1,28,28,28]>, Tensor<[1,28,28,28]> | ttnn.add | aten::add.Tensor | 4
194 | Tensor<[46]>, Tensor<[46]> | ttnn.add | aten::add.Tensor | 4
195 | Tensor<[1,46,28,28]>, Tensor<[1,46,28,28]> | ttnn.add | aten::add.Tensor | 4
196 | Tensor<[78]>, Tensor<[78]> | ttnn.add | aten::add.Tensor | 4
197 | Tensor<[1,78,28,28]>, Tensor<[1,78,28,28]> | ttnn.add | aten::add.Tensor | 4
198 | Tensor<[134]>, Tensor<[134]> | ttnn.add | aten::add.Tensor | 4
199 | Tensor<[1,134,28,28]>, Tensor<[1,134,28,28]> | ttnn.add | aten::add.Tensor | 4
200 | Tensor<[20]>, Tensor<[20]> | ttnn.add | aten::add.Tensor | 4
201 | Tensor<[1,20,28,28]>, Tensor<[1,20,28,28]> | ttnn.add | aten::add.Tensor | 4
202 | Tensor<[34]>, Tensor<[34]> | ttnn.add | aten::add.Tensor | 4
203 | Tensor<[1,34,28,28]>, Tensor<[1,34,28,28]> | ttnn.add | aten::add.Tensor | 4
204 | Tensor<[58]>, Tensor<[58]> | ttnn.add | aten::add.Tensor | 4
205 | Tensor<[1,58,28,28]>, Tensor<[1,58,28,28]> | ttnn.add | aten::add.Tensor | 4
206 | Tensor<[98]>, Tensor<[98]> | ttnn.add | aten::add.Tensor | 4
207 | Tensor<[1,98,28,28]>, Tensor<[1,98,28,28]> | ttnn.add | aten::add.Tensor | 4
208 | Tensor<[168]>, Tensor<[168]> | ttnn.add | aten::add.Tensor | 4
209 | Tensor<[1,168,28,28]>, Tensor<[1,168,28,28]> | ttnn.add | aten::add.Tensor | 4
210 | Tensor<[320]>, Tensor<[320]> | ttnn.add | aten::add.Tensor | 4
211 | Tensor<[1,320,28,28]>, Tensor<[1,320,28,28]> | ttnn.add | aten::add.Tensor | 4
212 | Tensor<[1,40,14,14]>, Tensor<[1,40,14,14]> | ttnn.add | aten::add.Tensor | 4
213 | Tensor<[1,68,14,14]>, Tensor<[1,68,14,14]> | ttnn.add | aten::add.Tensor | 4
214 | Tensor<[116]>, Tensor<[116]> | ttnn.add | aten::add.Tensor | 4
215 | Tensor<[1,116,14,14]>, Tensor<[1,116,14,14]> | ttnn.add | aten::add.Tensor | 4
216 | Tensor<[196]>, Tensor<[196]> | ttnn.add | aten::add.Tensor | 4
217 | Tensor<[1,196,14,14]>, Tensor<[1,196,14,14]> | ttnn.add | aten::add.Tensor | 4
218 | Tensor<[334]>, Tensor<[334]> | ttnn.add | aten::add.Tensor | 4
219 | Tensor<[1,334,14,14]>, Tensor<[1,334,14,14]> | ttnn.add | aten::add.Tensor | 4
220 | Tensor<[1,640,14,14]>, Tensor<[1,640,14,14]> | ttnn.add | aten::add.Tensor | 4
221 | Tensor<[1,160,7,7]>, Tensor<[1,160,7,7]> | ttnn.add | aten::add.Tensor | 4
222 | Tensor<[272]>, Tensor<[272]> | ttnn.add | aten::add.Tensor | 4
223 | Tensor<[1,272,7,7]>, Tensor<[1,272,7,7]> | ttnn.add | aten::add.Tensor | 4
224 | Tensor<[462]>, Tensor<[462]> | ttnn.add | aten::add.Tensor | 4
225 | Tensor<[1,462,7,7]>, Tensor<[1,462,7,7]> | ttnn.add | aten::add.Tensor | 4
226 | Tensor<[1,1024,7,7]>, Tensor<[1,1024,7,7]> | ttnn.add | aten::add.Tensor | 4
227 | Tensor<[1,32,512,512]>, Tensor<[1,32,512,512]> | ttnn.add | aten::add.Tensor | 4
228 | Tensor<[1,64,256,256]>, Tensor<[1,64,256,256]> | ttnn.add | aten::add.Tensor | 4
229 | Tensor<[1,32,256,256]>, Tensor<[1,32,256,256]> | ttnn.add | aten::add.Tensor | 4
230 | Tensor<[1,128,128,128]>, Tensor<[1,128,128,128]> | ttnn.add | aten::add.Tensor | 4
231 | Tensor<[1,64,128,128]>, Tensor<[1,64,128,128]> | ttnn.add | aten::add.Tensor | 4
232 | Tensor<[1,256,64,64]>, Tensor<[1,256,64,64]> | ttnn.add | aten::add.Tensor | 4
233 | Tensor<[1,128,64,64]>, Tensor<[1,128,64,64]> | ttnn.add | aten::add.Tensor | 4
234 | Tensor<[1,512,32,32]>, Tensor<[1,512,32,32]> | ttnn.add | aten::add.Tensor | 4
235 | Tensor<[1,256,32,32]>, Tensor<[1,256,32,32]> | ttnn.add | aten::add.Tensor | 4
236 | Tensor<[1,1024,16,16]>, Tensor<[1,1024,16,16]> | ttnn.add | aten::add.Tensor | 4
237 | Tensor<[1,512,16,16]>, Tensor<[1,512,16,16]> | ttnn.add | aten::add.Tensor | 4
238 | Tensor<[1,256,16,16]>, Tensor<[1,256,16,16]> | ttnn.add | aten::add.Tensor | 4
239 | Tensor<[1,128,32,32]>, Tensor<[1,128,32,32]> | ttnn.add | aten::add.Tensor | 4
240 | Tensor<[1,255,16,16]>, Tensor<[1,255,16,16]> | ttnn.add | aten::convolution | 4
241 | Tensor<[1,255,32,32]>, Tensor<[1,255,32,32]> | ttnn.add | aten::convolution | 4
242 | Tensor<[1,255,64,64]>, Tensor<[1,255,64,64]> | ttnn.add | aten::convolution | 4
243 | Tensor<[1,1,256,256]>, Tensor<[1,1,256,256]> | ttnn.add | aten::convolution | 4
244 | Tensor<[1,4,14,14]>, Tensor<[1,4,14,14]> | ttnn.add | aten::convolution | 4
245 | Tensor<[1,16,14,14]>, Tensor<[1,16,14,14]> | ttnn.add | aten::convolution | 4
246 | Tensor<[1,1,28,28]>, Tensor<[1,1,28,28]> | ttnn.add | aten::convolution | 4
247 | Tensor<[1,32,1536]>, Tensor<[1,32,1536]> | ttnn.add | aten::add.Tensor | 4
248 | Tensor<[32,4608]>, Tensor<[32,4608]> | ttnn.add | aten::add.Tensor | 4
249 | Tensor<[1,16,32,32]>, Tensor<[1,16,32,32]> | ttnn.add | aten::add.Tensor | 4
250 | Tensor<[32,1536]>, Tensor<[32,1536]> | ttnn.add | aten::add.Tensor | 4
251 | Tensor<[32,6144]>, Tensor<[32,6144]> | ttnn.add | aten::add.Tensor | 4
252 | Tensor<[1,32,6144]>, Tensor<[1,32,6144]> | ttnn.add | aten::add.Tensor | 4
253 | Tensor<[16,32,32]>, Tensor<[16,32,32]> | ttnn.add | aten::baddbmm | 4
254 | Tensor<[1,16,768]>, Tensor<[1,16,768]> | ttnn.add | aten::add.Tensor | 5
255 | Tensor<[1,16,1]>, Tensor<[1,16,1]> | ttnn.add | aten::add.Tensor | 4
256 | Tensor<[16,768]>, Tensor<[16,768]> | ttnn.add | aten::add.Tensor | 4
257 | Tensor<[1,12,16,16]>, Tensor<[1,12,16,16]> | ttnn.add | aten::add.Tensor | 4
258 | Tensor<[16,3072]>, Tensor<[16,3072]> | ttnn.add | aten::add.Tensor | 4
259 | Tensor<[1,16,3072]>, Tensor<[1,16,3072]> | ttnn.add | aten::gelu | 4
260 | Tensor<[1,64,224,224]>, Tensor<[1,64,224,224]> | ttnn.add | aten::add.Tensor | 4
261 | Tensor<[1,128,112,112]>, Tensor<[1,128,112,112]> | ttnn.add | aten::add.Tensor | 4
262 | Tensor<[1,1,224,224]>, Tensor<[1,1,224,224]> | ttnn.add | aten::convolution | 4
263 | Tensor<[1,19200,1]>, Tensor<[1,19200,1]> | ttnn.add | aten::add.Tensor | 4
264 | Tensor<[1,19200,64]>, Tensor<[1,19200,64]> | ttnn.add | aten::add.Tensor | 4
265 | Tensor<[19200,64]>, Tensor<[19200,64]> | ttnn.add | aten::add.Tensor | 4
266 | Tensor<[1,300,1]>, Tensor<[1,300,1]> | ttnn.add | aten::add.Tensor | 4
267 | Tensor<[1,300,64]>, Tensor<[1,300,64]> | ttnn.add | aten::add.Tensor | 4
268 | Tensor<[300,64]>, Tensor<[300,64]> | ttnn.add | aten::add.Tensor | 4
269 | Tensor<[19200,256]>, Tensor<[19200,256]> | ttnn.add | aten::add.Tensor | 4
270 | Tensor<[1,4800,1]>, Tensor<[1,4800,1]> | ttnn.add | aten::add.Tensor | 4
271 | Tensor<[1,4800,128]>, Tensor<[1,4800,128]> | ttnn.add | aten::add.Tensor | 4
272 | Tensor<[4800,128]>, Tensor<[4800,128]> | ttnn.add | aten::add.Tensor | 4
273 | Tensor<[1,300,128]>, Tensor<[1,300,128]> | ttnn.add | aten::add.Tensor | 4
274 | Tensor<[300,128]>, Tensor<[300,128]> | ttnn.add | aten::add.Tensor | 4
275 | Tensor<[4800,512]>, Tensor<[4800,512]> | ttnn.add | aten::add.Tensor | 4
276 | Tensor<[1,1200,1]>, Tensor<[1,1200,1]> | ttnn.add | aten::add.Tensor | 4
277 | Tensor<[1,1200,320]>, Tensor<[1,1200,320]> | ttnn.add | aten::add.Tensor | 4
278 | Tensor<[1200,320]>, Tensor<[1200,320]> | ttnn.add | aten::add.Tensor | 4
279 | Tensor<[1,300,320]>, Tensor<[1,300,320]> | ttnn.add | aten::add.Tensor | 4
280 | Tensor<[300,320]>, Tensor<[300,320]> | ttnn.add | aten::add.Tensor | 4
281 | Tensor<[1200,1280]>, Tensor<[1200,1280]> | ttnn.add | aten::add.Tensor | 4
282 | Tensor<[1,300,512]>, Tensor<[1,300,512]> | ttnn.add | aten::add.Tensor | 4
283 | Tensor<[300,512]>, Tensor<[300,512]> | ttnn.add | aten::add.Tensor | 4
284 | Tensor<[300,2048]>, Tensor<[300,2048]> | ttnn.add | aten::add.Tensor | 4
285 | Tensor<[30]>, Tensor<[30]> | ttnn.add | aten::add.Tensor | 4
286 | Tensor<[30,1]>, Tensor<[30,1]> | ttnn.add | aten::add.Tensor | 4
287 | Tensor<[1,64,30,40]>, Tensor<[1,64,30,40]> | ttnn.add | aten::add.Tensor | 5
288 | Tensor<[1,32,30,40]>, Tensor<[1,32,30,40]> | ttnn.add | aten::add.Tensor | 4
289 | Tensor<[60]>, Tensor<[60]> | ttnn.add | aten::add.Tensor | 4
290 | Tensor<[60,1]>, Tensor<[60,1]> | ttnn.add | aten::add.Tensor | 4
291 | Tensor<[80]>, Tensor<[80]> | ttnn.add | aten::add.Tensor | 4
292 | Tensor<[1,64,60,80]>, Tensor<[1,64,60,80]> | ttnn.add | aten::add.Tensor | 5
293 | Tensor<[1,32,60,80]>, Tensor<[1,32,60,80]> | ttnn.add | aten::add.Tensor | 4
294 | Tensor<[120]>, Tensor<[120]> | ttnn.add | aten::add.Tensor | 4
295 | Tensor<[120,1]>, Tensor<[120,1]> | ttnn.add | aten::add.Tensor | 4
296 | Tensor<[1,64,120,160]>, Tensor<[1,64,120,160]> | ttnn.add | aten::add.Tensor | 5
297 | Tensor<[1,32,120,160]>, Tensor<[1,32,120,160]> | ttnn.add | aten::add.Tensor | 4
298 | Tensor<[240]>, Tensor<[240]> | ttnn.add | aten::add.Tensor | 4
299 | Tensor<[240,1]>, Tensor<[240,1]> | ttnn.add | aten::add.Tensor | 4
300 | Tensor<[1,64,240,320]>, Tensor<[1,64,240,320]> | ttnn.add | aten::add.Tensor | 5
301 | Tensor<[480]>, Tensor<[480]> | ttnn.add | aten::add.Tensor | 4
302 | Tensor<[480,1]>, Tensor<[480,1]> | ttnn.add | aten::add.Tensor | 4
303 | Tensor<[1,64,480,640]>, Tensor<[1,64,480,640]> | ttnn.add | aten::add.Tensor | 5
304 | Tensor<[1,64,15,20]>, Tensor<[1,64,15,20]> | ttnn.add | aten::convolution | 4
305 | Tensor<[1,256,120,160]>, Tensor<[1,256,120,160]> | ttnn.add | aten::convolution | 4
306 | Tensor<[1,128,60,80]>, Tensor<[1,128,60,80]> | ttnn.add | aten::convolution | 4
307 | Tensor<[1,128,15,20]>, Tensor<[1,128,15,20]> | ttnn.add | aten::convolution | 4
308 | Tensor<[1,512,60,80]>, Tensor<[1,512,60,80]> | ttnn.add | aten::convolution | 4
309 | Tensor<[1,320,30,40]>, Tensor<[1,320,30,40]> | ttnn.add | aten::convolution | 4
310 | Tensor<[1,320,15,20]>, Tensor<[1,320,15,20]> | ttnn.add | aten::convolution | 4
311 | Tensor<[1,1280,30,40]>, Tensor<[1,1280,30,40]> | ttnn.add | aten::convolution | 4
312 | Tensor<[1,512,15,20]>, Tensor<[1,512,15,20]> | ttnn.add | aten::convolution | 4
313 | Tensor<[1,2048,15,20]>, Tensor<[1,2048,15,20]> | ttnn.add | aten::convolution | 4
314 | Tensor<[1,2,30,40]>, Tensor<[1,2,30,40]> | ttnn.add | aten::convolution | 4
315 | Tensor<[1,2,60,80]>, Tensor<[1,2,60,80]> | ttnn.add | aten::convolution | 4
316 | Tensor<[1,2,120,160]>, Tensor<[1,2,120,160]> | ttnn.add | aten::convolution | 4
317 | Tensor<[1,1,480,640]>, Tensor<[1,1,480,640]> | ttnn.add | aten::convolution | 4
318 | Tensor<[1,19200,256]>, Tensor<[1,19200,256]> | ttnn.add | aten::gelu | 4
319 | Tensor<[1,4800,512]>, Tensor<[1,4800,512]> | ttnn.add | aten::gelu | 4
320 | Tensor<[1,1200,1280]>, Tensor<[1,1200,1280]> | ttnn.add | aten::gelu | 4
321 | Tensor<[1,300,2048]>, Tensor<[1,300,2048]> | ttnn.add | aten::gelu | 4
322 | Tensor<[1,197,768]>, Tensor<[1,197,768]> | ttnn.add | aten::add.Tensor | 5
323 | Tensor<[1,197,1]>, Tensor<[1,197,1]> | ttnn.add | aten::add.Tensor | 4
324 | Tensor<[197,768]>, Tensor<[197,768]> | ttnn.add | aten::add.Tensor | 4
325 | Tensor<[197,3072]>, Tensor<[197,3072]> | ttnn.add | aten::add.Tensor | 4
326 | Tensor<[1,768,14,14]>, Tensor<[1,768,14,14]> | ttnn.add | aten::convolution | 4
327 | Tensor<[1,197,3072]>, Tensor<[1,197,3072]> | ttnn.add | aten::gelu | 4
328 | Tensor<[1,16384,1]>, Tensor<[1,16384,1]> | ttnn.add | aten::add.Tensor | 4
329 | Tensor<[1,16384,32]>, Tensor<[1,16384,32]> | ttnn.add | aten::add.Tensor | 4
330 | Tensor<[16384,32]>, Tensor<[16384,32]> | ttnn.add | aten::add.Tensor | 4
331 | Tensor<[1,256,32]>, Tensor<[1,256,32]> | ttnn.add | aten::add.Tensor | 4
332 | Tensor<[256,32]>, Tensor<[256,32]> | ttnn.add | aten::add.Tensor | 4
333 | Tensor<[16384,128]>, Tensor<[16384,128]> | ttnn.add | aten::add.Tensor | 4
334 | Tensor<[1,4096,64]>, Tensor<[1,4096,64]> | ttnn.add | aten::add.Tensor | 4
335 | Tensor<[4096,64]>, Tensor<[4096,64]> | ttnn.add | aten::add.Tensor | 4
336 | Tensor<[1,256,64]>, Tensor<[1,256,64]> | ttnn.add | aten::add.Tensor | 4
337 | Tensor<[256,64]>, Tensor<[256,64]> | ttnn.add | aten::add.Tensor | 4
338 | Tensor<[4096,256]>, Tensor<[4096,256]> | ttnn.add | aten::add.Tensor | 4
339 | Tensor<[1,1024,160]>, Tensor<[1,1024,160]> | ttnn.add | aten::add.Tensor | 4
340 | Tensor<[1024,160]>, Tensor<[1024,160]> | ttnn.add | aten::add.Tensor | 4
341 | Tensor<[1,256,160]>, Tensor<[1,256,160]> | ttnn.add | aten::add.Tensor | 4
342 | Tensor<[256,160]>, Tensor<[256,160]> | ttnn.add | aten::add.Tensor | 4
343 | Tensor<[256,1024]>, Tensor<[256,1024]> | ttnn.add | aten::add.Tensor | 4
344 | Tensor<[1,16384,256]>, Tensor<[1,16384,256]> | ttnn.add | aten::add.Tensor | 4
345 | Tensor<[128,1]>, Tensor<[128,1]> | ttnn.add | aten::add.Tensor | 4
346 | Tensor<[1,256,128,128]>, Tensor<[1,256,128,128]> | ttnn.add | aten::add.Tensor | 5
347 | Tensor<[1,4096,256]>, Tensor<[1,4096,256]> | ttnn.add | aten::add.Tensor | 4
348 | Tensor<[1,1024,256]>, Tensor<[1,1024,256]> | ttnn.add | aten::add.Tensor | 4
349 | Tensor<[1,32,128,128]>, Tensor<[1,32,128,128]> | ttnn.add | aten::convolution | 4
350 | Tensor<[1,32,16,16]>, Tensor<[1,32,16,16]> | ttnn.add | aten::convolution | 4
351 | Tensor<[1,64,64,64]>, Tensor<[1,64,64,64]> | ttnn.add | aten::convolution | 4
352 | Tensor<[1,64,16,16]>, Tensor<[1,64,16,16]> | ttnn.add | aten::convolution | 4
353 | Tensor<[1,160,32,32]>, Tensor<[1,160,32,32]> | ttnn.add | aten::convolution | 4
354 | Tensor<[1,160,16,16]>, Tensor<[1,160,16,16]> | ttnn.add | aten::convolution | 4
355 | Tensor<[1,150,128,128]>, Tensor<[1,150,128,128]> | ttnn.add | aten::convolution | 4
356 | Tensor<[1,16384,128]>, Tensor<[1,16384,128]> | ttnn.add | aten::gelu | 4
357 | Tensor<[1,256,1024]>, Tensor<[1,256,1024]> | ttnn.add | aten::gelu | 4
358 | Tensor<[1,1,7,7]>, Tensor<[1,1,7,7]> | ttnn.add | aten::add.Tensor | 4
359 | Tensor<[1,7,4544]>, Tensor<[1,7,4544]> | ttnn.add | aten::add.Tensor | 4
360 | Tensor<[1,71,7,64]>, Tensor<[1,71,7,64]> | ttnn.add | aten::add.Tensor | 5
361 | Tensor<[1,1,7,64]>, Tensor<[1,1,7,64]> | ttnn.add | aten::add.Tensor | 5
362 | Tensor<[1,71,7,7]>, Tensor<[1,71,7,7]> | ttnn.add | aten::add.Tensor | 4
363 | Tensor<[1,7,18176]>, Tensor<[1,7,18176]> | ttnn.add | aten::gelu | 4
364 | Tensor<[7,1]>, Tensor<[7,1]> | ttnn.add | aten::triu | 4
365 | Tensor<[1,16,112,112]>, Tensor<[1,16,112,112]> | ttnn.add | aten::add.Tensor | 4
366 | Tensor<[96]>, Tensor<[96]> | ttnn.add | aten::add.Tensor | 4
367 | Tensor<[1,96,112,112]>, Tensor<[1,96,112,112]> | ttnn.add | aten::add.Tensor | 4
368 | Tensor<[1,96,56,56]>, Tensor<[1,96,56,56]> | ttnn.add | aten::add.Tensor | 4
369 | Tensor<[144]>, Tensor<[144]> | ttnn.add | aten::add.Tensor | 4
370 | Tensor<[1,144,56,56]>, Tensor<[1,144,56,56]> | ttnn.add | aten::add.Tensor | 4
371 | Tensor<[1,144,28,28]>, Tensor<[1,144,28,28]> | ttnn.add | aten::add.Tensor | 4
372 | Tensor<[1,32,28,28]>, Tensor<[1,32,28,28]> | ttnn.add | aten::add.Tensor | 4
373 | Tensor<[192]>, Tensor<[192]> | ttnn.add | aten::add.Tensor | 4
374 | Tensor<[1,192,28,28]>, Tensor<[1,192,28,28]> | ttnn.add | aten::add.Tensor | 4
375 | Tensor<[1,192,14,14]>, Tensor<[1,192,14,14]> | ttnn.add | aten::add.Tensor | 4
376 | Tensor<[1,64,14,14]>, Tensor<[1,64,14,14]> | ttnn.add | aten::add.Tensor | 4
377 | Tensor<[384]>, Tensor<[384]> | ttnn.add | aten::add.Tensor | 4
378 | Tensor<[1,384,14,14]>, Tensor<[1,384,14,14]> | ttnn.add | aten::add.Tensor | 4
379 | Tensor<[1,96,14,14]>, Tensor<[1,96,14,14]> | ttnn.add | aten::add.Tensor | 4
380 | Tensor<[576]>, Tensor<[576]> | ttnn.add | aten::add.Tensor | 4
381 | Tensor<[1,576,14,14]>, Tensor<[1,576,14,14]> | ttnn.add | aten::add.Tensor | 4
382 | Tensor<[1,576,7,7]>, Tensor<[1,576,7,7]> | ttnn.add | aten::add.Tensor | 4
383 | Tensor<[960]>, Tensor<[960]> | ttnn.add | aten::add.Tensor | 4
384 | Tensor<[1,960,7,7]>
Tensor<[1,960,7,7]>,
ttnn.addaten::add.Tensor4
385Tensor<[1,320,7,7]>,
Tensor<[1,320,7,7]>,
ttnn.addaten::add.Tensor4
386Tensor<[1,1280,7,7]>,
Tensor<[1,1280,7,7]>,
ttnn.addaten::add.Tensor4
387Tensor<[1,12,128]>,
Tensor<[1,12,128]>,
ttnn.addaten::add.Tensor5
388Tensor<[1,12,1]>,
Tensor<[1,12,1]>,
ttnn.addaten::add.Tensor4
389Tensor<[12,768]>,
Tensor<[12,768]>,
ttnn.addaten::add.Tensor4
390Tensor<[1,12,12,12]>,
Tensor<[1,12,12,12]>,
ttnn.addaten::add.Tensor4
391Tensor<[1,12,768]>,
Tensor<[1,12,768]>,
ttnn.addaten::add.Tensor5
392Tensor<[12,3072]>,
Tensor<[12,3072]>,
ttnn.addaten::add.Tensor4
393Tensor<[1,12,3072]>,
Tensor<[1,12,3072]>,
ttnn.addaten::add.Tensor5
394Tensor<[12,2]>,
Tensor<[12,2]>,
ttnn.addaten::add.Tensor4
395Tensor<[1,9,128]>,
Tensor<[1,9,128]>,
ttnn.addaten::add.Tensor5
396Tensor<[1,9,1]>,
Tensor<[1,9,1]>,
ttnn.addaten::add.Tensor4
397Tensor<[9,768]>,
Tensor<[9,768]>,
ttnn.addaten::add.Tensor4
398Tensor<[1,12,9,9]>,
Tensor<[1,12,9,9]>,
ttnn.addaten::add.Tensor4
399Tensor<[1,9,768]>,
Tensor<[1,9,768]>,
ttnn.addaten::add.Tensor5
400Tensor<[9,3072]>,
Tensor<[9,3072]>,
ttnn.addaten::add.Tensor4
401Tensor<[1,9,3072]>,
Tensor<[1,9,3072]>,
ttnn.addaten::add.Tensor5
402Tensor<[9,128]>,
Tensor<[9,128]>,
ttnn.addaten::add.Tensor4
403Tensor<[9,30000]>,
Tensor<[9,30000]>,
ttnn.addaten::add.Tensor4
404Tensor<[9,2048]>,
Tensor<[9,2048]>,
ttnn.addaten::add.Tensor4
405Tensor<[1,16,9,9]>,
Tensor<[1,16,9,9]>,
ttnn.addaten::add.Tensor4
406Tensor<[1,9,2048]>,
Tensor<[1,9,2048]>,
ttnn.addaten::add.Tensor5
407Tensor<[9,8192]>,
Tensor<[9,8192]>,
ttnn.addaten::add.Tensor4
408Tensor<[1,9,8192]>,
Tensor<[1,9,8192]>,
ttnn.addaten::add.Tensor5
409Tensor<[9,1024]>,
Tensor<[9,1024]>,
ttnn.addaten::add.Tensor4
410Tensor<[1,9,1024]>,
Tensor<[1,9,1024]>,
ttnn.addaten::add.Tensor5
411Tensor<[9,4096]>,
Tensor<[9,4096]>,
ttnn.addaten::add.Tensor4
412Tensor<[1,9,4096]>,
Tensor<[1,9,4096]>,
ttnn.addaten::add.Tensor5
413Tensor<[1,64,9,9]>,
Tensor<[1,64,9,9]>,
ttnn.addaten::add.Tensor4
414Tensor<[9,16384]>,
Tensor<[9,16384]>,
ttnn.addaten::add.Tensor4
415Tensor<[1,9,16384]>,
Tensor<[1,9,16384]>,
ttnn.addaten::add.Tensor5
416Tensor<[1,2]>,
Tensor<[1,2]>,
ttnn.addaten::add.Tensor4
417Tensor<[1,14,128]>,
Tensor<[1,14,128]>,
ttnn.addaten::add.Tensor5
418Tensor<[1,14,1]>,
Tensor<[1,14,1]>,
ttnn.addaten::add.Tensor4
419Tensor<[14,768]>,
Tensor<[14,768]>,
ttnn.addaten::add.Tensor4
420Tensor<[1,12,14,14]>,
Tensor<[1,12,14,14]>,
ttnn.addaten::add.Tensor4
421Tensor<[1,14,768]>,
Tensor<[1,14,768]>,
ttnn.addaten::add.Tensor5
422Tensor<[14,3072]>,
Tensor<[14,3072]>,
ttnn.addaten::add.Tensor4
423Tensor<[1,14,3072]>,
Tensor<[1,14,3072]>,
ttnn.addaten::add.Tensor5
424Tensor<[14,2]>,
Tensor<[14,2]>,
ttnn.addaten::add.Tensor4
425Tensor<[1,50,768]>,
Tensor<[1,50,768]>,
ttnn.addaten::add.Tensor5
426Tensor<[1,50,1]>,
Tensor<[1,50,1]>,
ttnn.addaten::add.Tensor4
427Tensor<[50,768]>,
Tensor<[50,768]>,
ttnn.addaten::add.Tensor4
428Tensor<[50,3072]>,
Tensor<[50,3072]>,
ttnn.addaten::add.Tensor4
429Tensor<[2,7,512]>,
Tensor<[2,7,512]>,
ttnn.addaten::add.Tensor4
430Tensor<[2,7,1]>,
Tensor<[2,7,1]>,
ttnn.addaten::add.Tensor4
431Tensor<[2,1,7,7]>,
Tensor<[2,1,7,7]>,
ttnn.addaten::add.Tensor5
432Tensor<[14,512]>,
Tensor<[14,512]>,
ttnn.addaten::add.Tensor4
433Tensor<[2,8,7,7]>,
Tensor<[2,8,7,7]>,
ttnn.addaten::add.Tensor4
434Tensor<[14,2048]>,
Tensor<[14,2048]>,
ttnn.addaten::add.Tensor4
435Tensor<[2]>,
Tensor<[2]>,
ttnn.addaten::arange4
436Tensor<[1,197,1024]>,
Tensor<[1,197,1024]>,
ttnn.addaten::add.Tensor4
437Tensor<[197,1024]>,
Tensor<[197,1024]>,
ttnn.addaten::add.Tensor4
438Tensor<[27]>,
Tensor<[27]>,
ttnn.addaten::add.Tensor4
439Tensor<[27,1]>,
Tensor<[27,1]>,
ttnn.addaten::add.Tensor4
440Tensor<[1,16,27,27]>,
Tensor<[1,16,27,27]>,
ttnn.addaten::add.Tensor5
441Tensor<[196,196]>,
Tensor<[196,196]>,
ttnn.addaten::add.Tensor4
442Tensor<[1,16,197,197]>,
Tensor<[1,16,197,197]>,
ttnn.addaten::add.Tensor5
443Tensor<[197,4096]>,
Tensor<[197,4096]>,
ttnn.addaten::add.Tensor4
444Tensor<[1,1024]>,
Tensor<[1,1024]>,
ttnn.addaten::add.Tensor4
445Tensor<[197]>,
Tensor<[197]>,
ttnn.addaten::arange4
446Tensor<[1,197,4096]>,
Tensor<[1,197,4096]>,
ttnn.addaten::gelu4
447Tensor<[1,12,27,27]>,
Tensor<[1,12,27,27]>,
ttnn.addaten::add.Tensor5
448Tensor<[1,12,197,197]>,
Tensor<[1,12,197,197]>,
ttnn.addaten::add.Tensor5
449Tensor<[1,64]>,
Tensor<[1,64]>,
ttnn.addaten::add.Tensor4
450Tensor<[1,12]>,
Tensor<[1,12]>,
ttnn.addaten::add.Tensor4
451Tensor<[1,784]>,
Tensor<[1,784]>,
ttnn.addaten::add.Tensor4

stablehlo.and::ttnn.and

|   | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|-----------------------------|---------|------------|--------|
| 0 | Tensor<[19]>, Tensor<[19]> | ttnn.and | aten::logical_and | 5 |
| 1 | Tensor<[197]>, Tensor<[197]> | ttnn.and | aten::logical_and | 5 |
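
These `aten::logical_and` variations come from plain element-wise boolean ops in PyTorch. A minimal sketch, assuming nothing beyond stock PyTorch (the shapes simply mirror the `[19]` entry above; variable names are illustrative):

```python
import torch

# Element-wise logical AND on 1-D boolean tensors. When traced, this shows up
# as aten::logical_and and, per the table above, lowers through stablehlo.and
# to ttnn.and.
a = torch.rand(19) > 0.5
b = torch.rand(19) > 0.5
mask = torch.logical_and(a, b)  # shape [19], dtype bool
```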

stablehlo.broadcast_in_dim

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,32,32]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
1Tensor<[1,32,32,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
2Scalar,
dims: []
aten::_safe_softmax4
3Tensor<[1,1,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
4Tensor<[1,1,1,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
5Tensor<[1,32,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
6Tensor<[32]>,
dims: [0]
aten::arange4
7Tensor<[1,1,32]>,
dims: [0, 1, 2]
aten::bmm4
8Tensor<[32,128,32]>,
dims: [0, 1, 2]
aten::bmm4
9Tensor<[32,32,128]>,
dims: [0, 1, 2]
aten::bmm4
10Tensor<[32]>,
dims: [1]
aten::gt.Tensor4
11Tensor<[32,1]>,
dims: [0, 1]
aten::gt.Tensor4
12Tensor<[1,32,32,128]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
13Tensor<[1,32,128,32]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
14Tensor<[1,32,128]>,
dims: [0, 1, 2]
aten::mul.Tensor4
15Tensor<[1,32,4096]>,
dims: [0, 1, 2]
aten::mul.Tensor4
16Tensor<[4096]>,
dims: [2]
aten::mul.Tensor4
17Tensor<[1,1,32,128]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
18Tensor<[1,32]>,
dims: [0, 1]
aten::triu4
19Tensor<[32,32]>,
dims: [0, 1]
aten::triu4
20Tensor<[1,12,7,7]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
21Tensor<[1,12,7,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
22Tensor<[7]>,
dims: [0]
aten::add.Tensor4
23Tensor<[1,7,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
24Tensor<[1,7,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
25Tensor<[768]>,
dims: [2]
aten::add.Tensor4
26Tensor<[7,2304]>,
dims: [0, 1]
aten::add.Tensor4
27Tensor<[2304]>,
dims: [1]
aten::add.Tensor4
28Tensor<[1,1,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
29Tensor<[7,768]>,
dims: [0, 1]
aten::add.Tensor4
30Tensor<[768]>,
dims: [1]
aten::add.Tensor4
31Tensor<[7,3072]>,
dims: [0, 1]
aten::add.Tensor4
32Tensor<[3072]>,
dims: [1]
aten::add.Tensor4
33Tensor<[1,7,3072]>,
dims: [0, 1, 2]
aten::add.Tensor4
34Tensor<[1]>,
dims: [0]
aten::arange4
35Tensor<[12,64,7]>,
dims: [0, 1, 2]
aten::bmm4
36Tensor<[12,7,64]>,
dims: [0, 1, 2]
aten::bmm4
37Tensor<[1,7]>,
dims: [0, 1]
aten::eq.Scalar4
38Tensor<[1,1,1,7]>,
dims: [0, 1, 2, 3]
aten::expand4
39Tensor<[7]>,
dims: [1]
aten::lt.Tensor4
40Tensor<[7,1]>,
dims: [0, 1]
aten::lt.Tensor4
41Tensor<[1,12,7,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
42Tensor<[1,12,64,7]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
43Tensor<[2304]>,
dims: [0]
aten::mul.Tensor4
44Tensor<[768]>,
dims: [0]
aten::mul.Tensor4
45Tensor<[3072]>,
dims: [0]
aten::mul.Tensor4
46Tensor<[7,7]>,
dims: [0, 1]
aten::where.self4
47Tensor<[1,32,112,112]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
48Tensor<[32,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
49Tensor<[64]>,
dims: [0]
aten::add.Tensor4
50Tensor<[1,64,112,112]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
51Tensor<[64,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
52Tensor<[1,64,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
53Tensor<[128]>,
dims: [0]
aten::add.Tensor4
54Tensor<[1,128,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
55Tensor<[128,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
56Tensor<[1,128,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
57Tensor<[256]>,
dims: [0]
aten::add.Tensor4
58Tensor<[1,256,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
59Tensor<[256,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
60Tensor<[512]>,
dims: [0]
aten::add.Tensor4
61Tensor<[1,512,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
62Tensor<[512,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
63Tensor<[1,19,28,28]>,
dims: [0, 1, 2, 3]
aten::convolution4
64Tensor<[19,1,1]>,
dims: [1, 2, 3]
aten::convolution4
65Tensor<[1,38,28,28]>,
dims: [0, 1, 2, 3]
aten::convolution4
66Tensor<[38,1,1]>,
dims: [1, 2, 3]
aten::convolution4
67Tensor<[256,512]>,
dims: [0, 1]
aten::add.Tensor4
68Tensor<[512]>,
dims: [1]
aten::add.Tensor4
69Tensor<[1,256,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
70Tensor<[1,256,512]>,
dims: [0, 1, 2]
aten::add.Tensor4
71Tensor<[512]>,
dims: [2]
aten::add.Tensor4
72Tensor<[256,256]>,
dims: [0, 1]
aten::add.Tensor4
73Tensor<[256]>,
dims: [1]
aten::add.Tensor4
74Tensor<[1,1000]>,
dims: [0, 1]
aten::add.Tensor4
75Tensor<[1000]>,
dims: [1]
aten::add.Tensor4
76Tensor<[1,1024,512]>,
dims: [0, 1, 2]
aten::convolution4
77Tensor<[1024,1]>,
dims: [1, 2]
aten::convolution4
78Tensor<[256,1]>,
dims: [1, 2]
aten::convolution4
79Tensor<[1,512]>,
dims: [0, 1]
aten::mean.dim4
80Tensor<[1000]>,
dims: [0]
aten::mul.Tensor4
81Tensor<[8,920,920]>,
dims: [0, 1, 2]
aten::_softmax4
82Tensor<[8,920,1]>,
dims: [0, 1, 2]
aten::_softmax4
83Tensor<[8,100,100]>,
dims: [0, 1, 2]
aten::_softmax4
84Tensor<[8,100,1]>,
dims: [0, 1, 2]
aten::_softmax4
85Tensor<[8,100,920]>,
dims: [0, 1, 2]
aten::_softmax4
86Tensor<[1,64,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
87Tensor<[1,64,360,640]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
88Tensor<[1,64,180,320]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
89Tensor<[1,256,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
90Tensor<[1,256,180,320]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
91Tensor<[1,128,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
92Tensor<[1,128,180,320]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
93Tensor<[1,128,90,160]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
94Tensor<[1,512,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
95Tensor<[1,512,90,160]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
96Tensor<[1,256,90,160]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
97Tensor<[1,256,45,80]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
98Tensor<[1,1024,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
99Tensor<[1,1024,45,80]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
100Tensor<[1,512,45,80]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
101Tensor<[1,512,23,40]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
102Tensor<[1,2048,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
103Tensor<[1,2048,23,40]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
104Tensor<[23]>,
dims: [0]
aten::add.Tensor4
105Tensor<[40]>,
dims: [0]
aten::add.Tensor4
106Tensor<[1,1,40]>,
dims: [0, 1, 2]
aten::add.Tensor4
107Tensor<[1,23,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
108Tensor<[920,1,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
109Tensor<[256]>,
dims: [2]
aten::add.Tensor4
110Tensor<[920,256]>,
dims: [0, 1]
aten::add.Tensor4
111Tensor<[920,1,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
112Tensor<[920,2048]>,
dims: [0, 1]
aten::add.Tensor4
113Tensor<[2048]>,
dims: [1]
aten::add.Tensor4
114Tensor<[100,256]>,
dims: [0, 1]
aten::add.Tensor4
115Tensor<[100,1,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
116Tensor<[100,1,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
117Tensor<[100,2048]>,
dims: [0, 1]
aten::add.Tensor4
118Tensor<[6,1,100,92]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
119Tensor<[92]>,
dims: [3]
aten::add.Tensor4
120Tensor<[6,1,100,256]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
121Tensor<[256]>,
dims: [3]
aten::add.Tensor4
122Tensor<[6,1,100,4]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
123Tensor<[4]>,
dims: [3]
aten::add.Tensor4
124Tensor<[8,32,920]>,
dims: [0, 1, 2]
aten::baddbmm4
125Tensor<[8,1,920]>,
dims: [0, 1, 2]
aten::baddbmm4
126Tensor<[920,256,256]>,
dims: [0, 1, 2]
aten::bmm4
127Tensor<[8,920,32]>,
dims: [0, 1, 2]
aten::bmm4
128Tensor<[8,32,100]>,
dims: [0, 1, 2]
aten::bmm4
129Tensor<[8,100,32]>,
dims: [0, 1, 2]
aten::bmm4
130Tensor<[6,256,92]>,
dims: [0, 1, 2]
aten::bmm4
131Tensor<[6,256,256]>,
dims: [0, 1, 2]
aten::bmm4
132Tensor<[1,256,23,40]>,
dims: [0, 1, 2, 3]
aten::convolution4
133Tensor<[1,23,40]>,
dims: [0, 1, 2]
aten::div.Tensor4
134Tensor<[1,23,40,1]>,
dims: [0, 1, 2, 3]
aten::div.Tensor4
135Tensor<[128]>,
dims: [3]
aten::div.Tensor4
136Tensor<[256,256]>,
dims: [1, 2]
aten::expand5
137Tensor<[1,1,1,920]>,
dims: [0, 1, 2, 3]
aten::expand5
138Tensor<[256,92]>,
dims: [2, 3]
aten::expand5
139Tensor<[256,256]>,
dims: [2, 3]
aten::expand5
140Tensor<[1,1,1,1]>,
dims: [0, 1, 2, 3]
aten::index.Tensor4
141Tensor<[1,1,1]>,
dims: [1, 2, 3]
aten::index.Tensor4
142Tensor<[23,1]>,
dims: [2, 3]
aten::index.Tensor4
143Tensor<[40]>,
dims: [3]
aten::index.Tensor4
144Tensor<[2048]>,
dims: [0]
aten::mul.Tensor4
145Tensor<[1,920]>,
dims: [0, 1]
aten::where.self4
146Tensor<[1,12,10,10]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
147Tensor<[1,12,10,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
148Tensor<[1,10]>,
dims: [0, 1]
aten::add.Tensor5
149Tensor<[1,10,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
150Tensor<[1,10,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
151Tensor<[10,768]>,
dims: [0, 1]
aten::add.Tensor4
152Tensor<[1,1,10,10]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
153Tensor<[10,3072]>,
dims: [0, 1]
aten::add.Tensor4
154Tensor<[10,250002]>,
dims: [0, 1]
aten::add.Tensor4
155Tensor<[250002]>,
dims: [1]
aten::add.Tensor4
156Tensor<[12,64,10]>,
dims: [0, 1, 2]
aten::bmm4
157Tensor<[12,10,64]>,
dims: [0, 1, 2]
aten::bmm4
158Tensor<[1,1,1,10]>,
dims: [0, 1, 2, 3]
aten::expand4
159Tensor<[1,12,10,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
160Tensor<[1,12,64,10]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
161Tensor<[250002]>,
dims: [0]
aten::mul.Tensor4
162Tensor<[1,8,4096,4096]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
163Tensor<[1,8,4096,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
164Tensor<[1,8,4096,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
165Tensor<[1,8,1024,1024]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
166Tensor<[1,8,1024,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
167Tensor<[1,8,1024,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
168Tensor<[1,8,256,256]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
169Tensor<[1,8,256,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
170Tensor<[1,8,256,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
171Tensor<[1,8,64,64]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
172Tensor<[1,8,64,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
173Tensor<[1,8,64,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
174Tensor<[1,1280]>,
dims: [0, 1]
aten::add.Tensor4
175Tensor<[1280]>,
dims: [1]
aten::add.Tensor4
176Tensor<[1,32,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
177Tensor<[1,320,64,64]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
178Tensor<[1,320,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
179Tensor<[1,320]>,
dims: [0, 1]
aten::add.Tensor4
180Tensor<[320]>,
dims: [1]
aten::add.Tensor4
181Tensor<[1,4096,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
182Tensor<[1,4096,320]>,
dims: [0, 1, 2]
aten::add.Tensor4
183Tensor<[320]>,
dims: [2]
aten::add.Tensor4
184Tensor<[4096,320]>,
dims: [0, 1]
aten::add.Tensor4
185Tensor<[4096,2560]>,
dims: [0, 1]
aten::add.Tensor4
186Tensor<[2560]>,
dims: [1]
aten::add.Tensor4
187Tensor<[1,320,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
188Tensor<[1,640]>,
dims: [0, 1]
aten::add.Tensor4
189Tensor<[640]>,
dims: [1]
aten::add.Tensor4
190Tensor<[1,640,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
191Tensor<[1,640,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
192Tensor<[1,1024,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
193Tensor<[1,1024,640]>,
dims: [0, 1, 2]
aten::add.Tensor4
194Tensor<[640]>,
dims: [2]
aten::add.Tensor4
195Tensor<[1024,640]>,
dims: [0, 1]
aten::add.Tensor4
196Tensor<[1024,5120]>,
dims: [0, 1]
aten::add.Tensor4
197Tensor<[5120]>,
dims: [1]
aten::add.Tensor4
198Tensor<[1,640,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
199Tensor<[1,1280,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
200Tensor<[1,1280,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
201Tensor<[1,256,1280]>,
dims: [0, 1, 2]
aten::add.Tensor4
202Tensor<[1280]>,
dims: [2]
aten::add.Tensor4
203Tensor<[256,1280]>,
dims: [0, 1]
aten::add.Tensor4
204Tensor<[256,10240]>,
dims: [0, 1]
aten::add.Tensor4
205Tensor<[10240]>,
dims: [1]
aten::add.Tensor4
206Tensor<[1,1280,8,8]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
207Tensor<[1,64,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
208Tensor<[1,64,1280]>,
dims: [0, 1, 2]
aten::add.Tensor4
209Tensor<[64,1280]>,
dims: [0, 1]
aten::add.Tensor4
210Tensor<[64,10240]>,
dims: [0, 1]
aten::add.Tensor4
211Tensor<[1,2560,8,8]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
212Tensor<[1,2560,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
213Tensor<[16]>,
dims: [0]
aten::add.Tensor4
214Tensor<[1,2560,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
215Tensor<[1,1920,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
216Tensor<[1,1920,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
217Tensor<[1,1920,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
218Tensor<[1,1280,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
219Tensor<[1,960,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
220Tensor<[1,960,1,1]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
221Tensor<[1,960,64,64]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
222Tensor<[1,640,64,64]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
223Tensor<[160]>,
dims: [0]
aten::arange.start4
224Tensor<[8,40,4096]>,
dims: [0, 1, 2]
aten::bmm4
225Tensor<[8,4096,40]>,
dims: [0, 1, 2]
aten::bmm4
226Tensor<[8,40,9]>,
dims: [0, 1, 2]
aten::bmm4
227Tensor<[8,9,40]>,
dims: [0, 1, 2]
aten::bmm4
228Tensor<[8,80,1024]>,
dims: [0, 1, 2]
aten::bmm4
229Tensor<[8,1024,80]>,
dims: [0, 1, 2]
aten::bmm4
230Tensor<[8,80,9]>,
dims: [0, 1, 2]
aten::bmm4
231Tensor<[8,9,80]>,
dims: [0, 1, 2]
aten::bmm4
232Tensor<[8,160,256]>,
dims: [0, 1, 2]
aten::bmm4
233Tensor<[8,256,160]>,
dims: [0, 1, 2]
aten::bmm4
234Tensor<[8,160,9]>,
dims: [0, 1, 2]
aten::bmm4
235Tensor<[8,9,160]>,
dims: [0, 1, 2]
aten::bmm4
236Tensor<[8,160,64]>,
dims: [0, 1, 2]
aten::bmm4
237Tensor<[8,64,160]>,
dims: [0, 1, 2]
aten::bmm4
238Tensor<[320,1,1]>,
dims: [1, 2, 3]
aten::convolution4
239Tensor<[640,1,1]>,
dims: [1, 2, 3]
aten::convolution4
240Tensor<[1280,1,1]>,
dims: [1, 2, 3]
aten::convolution4
241Tensor<[1,4,64,64]>,
dims: [0, 1, 2, 3]
aten::convolution4
242Tensor<[4,1,1]>,
dims: [1, 2, 3]
aten::convolution4
243Tensor<[1280]>,
dims: [0]
aten::index.Tensor4
244Tensor<[16,1]>,
dims: [2, 3]
aten::index.Tensor4
245Tensor<[16]>,
dims: [3]
aten::index.Tensor4
246Tensor<[32,1]>,
dims: [2, 3]
aten::index.Tensor4
247Tensor<[32]>,
dims: [3]
aten::index.Tensor4
248Tensor<[640]>,
dims: [0]
aten::index.Tensor4
249Tensor<[64,1]>,
dims: [2, 3]
aten::index.Tensor4
250Tensor<[64]>,
dims: [3]
aten::index.Tensor4
251Tensor<[1,8,4096,40]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
252Tensor<[1,8,40,4096]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
253Tensor<[1,8,40,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
254Tensor<[1,8,1024,80]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
255Tensor<[1,8,80,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
256Tensor<[1,8,80,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
257Tensor<[1,8,256,160]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
258Tensor<[1,8,160,256]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
259Tensor<[1,8,160,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
260Tensor<[1,8,64,160]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
261Tensor<[1,8,160,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
262Tensor<[1,1]>,
dims: [0, 1]
aten::mul.Tensor4
263Tensor<[1,160]>,
dims: [0, 1]
aten::mul.Tensor4
264Tensor<[1,32,10,4096]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
265Tensor<[320]>,
dims: [0]
aten::mul.Tensor4
266Tensor<[2560]>,
dims: [0]
aten::mul.Tensor4
267Tensor<[1,32,10,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
268Tensor<[1,32,20,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
269Tensor<[5120]>,
dims: [0]
aten::mul.Tensor4
270Tensor<[1,32,20,256]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
271Tensor<[1,32,40,256]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
272Tensor<[10240]>,
dims: [0]
aten::mul.Tensor4
273Tensor<[1,32,40,64]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
274Tensor<[1,32,80,64]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
275Tensor<[1,32,80,256]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
276Tensor<[1,32,60,256]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
277Tensor<[1,32,60,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
278Tensor<[1,32,40,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
279Tensor<[1,32,30,1024]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
280Tensor<[1,32,30,4096]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
281Tensor<[1,32,20,4096]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
282Tensor<[1,12,25,25]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
283Tensor<[1,12,25,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
284Tensor<[1,25,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
285Tensor<[1,25,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
286Tensor<[25,768]>,
dims: [0, 1]
aten::add.Tensor4
287Tensor<[1,1,25,25]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
288Tensor<[25,3072]>,
dims: [0, 1]
aten::add.Tensor4
289Tensor<[25,2]>,
dims: [0, 1]
aten::add.Tensor4
290Tensor<[2]>,
dims: [1]
aten::add.Tensor4
291Tensor<[1]>,
dims: [1]
aten::add.Tensor4
292Tensor<[12,64,25]>,
dims: [0, 1, 2]
aten::bmm4
293Tensor<[12,25,64]>,
dims: [0, 1, 2]
aten::bmm4
294Tensor<[1,1,1,25]>,
dims: [0, 1, 2, 3]
aten::expand4
295Tensor<[1,12,25,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
296Tensor<[1,12,64,25]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
297Tensor<[2]>,
dims: [0]
aten::mul.Tensor4
298Tensor<[1,3,1445,1445]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
299Tensor<[1,3,1445,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
300Tensor<[1,1445,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
301Tensor<[1,1445,192]>,
dims: [0, 1, 2]
aten::add.Tensor4
302Tensor<[192]>,
dims: [2]
aten::add.Tensor4
303Tensor<[1445,192]>,
dims: [0, 1]
aten::add.Tensor4
304Tensor<[192]>,
dims: [1]
aten::add.Tensor4
305Tensor<[1445,768]>,
dims: [0, 1]
aten::add.Tensor4
306Tensor<[100,192]>,
dims: [0, 1]
aten::add.Tensor4
307Tensor<[100,92]>,
dims: [0, 1]
aten::add.Tensor4
308Tensor<[92]>,
dims: [1]
aten::add.Tensor4
309Tensor<[100,4]>,
dims: [0, 1]
aten::add.Tensor4
310Tensor<[4]>,
dims: [1]
aten::add.Tensor4
311Tensor<[3,64,1445]>,
dims: [0, 1, 2]
aten::bmm4
312Tensor<[3,1445,64]>,
dims: [0, 1, 2]
aten::bmm4
313Tensor<[1,192,32,42]>,
dims: [0, 1, 2, 3]
aten::convolution4
314Tensor<[192,1,1]>,
dims: [1, 2, 3]
aten::convolution4
315Tensor<[1,3,1445,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
316Tensor<[1,3,64,1445]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
317Tensor<[192]>,
dims: [0]
aten::mul.Tensor4
318Tensor<[92]>,
dims: [0]
aten::mul.Tensor4
319Tensor<[4]>,
dims: [0]
aten::mul.Tensor4
320Tensor<[1,256,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
321Tensor<[1,512,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
322Tensor<[1,12,8,8]>,
dims: [0, 1, 2, 3]
aten::_softmax4
323Tensor<[1,12,8,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
324Tensor<[1,8,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
325Tensor<[1,8,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
326Tensor<[1,1,1,8]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
327Tensor<[1,768]>,
dims: [0, 1]
aten::add.Tensor4
328Tensor<[1,3]>,
dims: [0, 1]
aten::add.Tensor4
329Tensor<[3]>,
dims: [1]
aten::add.Tensor4
330Tensor<[12,64,8]>,
dims: [0, 1, 2]
aten::bmm4
331Tensor<[12,8,64]>,
dims: [0, 1, 2]
aten::bmm4
332Tensor<[1,768,8]>,
dims: [0, 1, 2]
aten::convolution4
333Tensor<[768,1]>,
dims: [1, 2]
aten::convolution4
334Tensor<[1,3072,8]>,
dims: [0, 1, 2]
aten::convolution4
335Tensor<[3072,1]>,
dims: [1, 2]
aten::convolution4
336Tensor<[3]>,
dims: [0]
aten::mul.Tensor4
337Tensor<[1,8,256,2048]>,
dims: [0, 1, 2, 3]
aten::_softmax4
338Tensor<[1,8,2048,256]>,
dims: [0, 1, 2, 3]
aten::_softmax4
339Tensor<[1,8,2048,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
340Tensor<[1,2048,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
341Tensor<[2048,768]>,
dims: [1, 2]
aten::add.Tensor4
342Tensor<[1,2048,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
343Tensor<[2048,256]>,
dims: [0, 1]
aten::add.Tensor4
344Tensor<[2048,1280]>,
dims: [0, 1]
aten::add.Tensor4
345Tensor<[1,1,1,2048]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
346Tensor<[256,768]>,
dims: [0, 1]
aten::add.Tensor4
347Tensor<[2048,768]>,
dims: [0, 1]
aten::add.Tensor4
348Tensor<[2048,262]>,
dims: [0, 1]
aten::add.Tensor4
349Tensor<[262]>,
dims: [1]
aten::add.Tensor4
350Tensor<[8,32,2048]>,
dims: [0, 1, 2]
aten::bmm4
351Tensor<[8,2048,160]>,
dims: [0, 1, 2]
aten::bmm4
352Tensor<[8,32,256]>,
dims: [0, 1, 2]
aten::bmm4
353Tensor<[8,256,96]>,
dims: [0, 1, 2]
aten::bmm4
354Tensor<[256,1280]>,
dims: [1, 2]
aten::expand5
355Tensor<[1,256,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
356Tensor<[1024]>,
dims: [0]
aten::add.Tensor4
357Tensor<[1,1024,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
358Tensor<[1024,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
359Tensor<[1,512,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
360Tensor<[1,2048,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
361Tensor<[2048,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
362Tensor<[1,12,201,201]>,
dims: [0, 1, 2, 3]
aten::_softmax4
363Tensor<[1,12,201,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
364Tensor<[12]>,
dims: [0]
aten::add.Tensor4
365Tensor<[1,201,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
366Tensor<[1,201,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
367Tensor<[201,768]>,
dims: [0, 1]
aten::add.Tensor4
368Tensor<[1,1,1,201]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
369Tensor<[201,3072]>,
dims: [0, 1]
aten::add.Tensor4
370Tensor<[1,1536]>,
dims: [0, 1]
aten::add.Tensor4
371Tensor<[1536]>,
dims: [1]
aten::add.Tensor4
372Tensor<[1,3129]>,
dims: [0, 1]
aten::add.Tensor4
373Tensor<[3129]>,
dims: [1]
aten::add.Tensor4
374Tensor<[12,64,201]>,
dims: [0, 1, 2]
aten::bmm4
375Tensor<[12,201,64]>,
dims: [0, 1, 2]
aten::bmm4
376Tensor<[1,768,12,16]>,
dims: [0, 1, 2, 3]
aten::convolution4
377Tensor<[768,1,1]>,
dims: [1, 2, 3]
aten::convolution4
378Tensor<[12,1]>,
dims: [0, 1]
aten::expand4
379Tensor<[1,16]>,
dims: [0, 1]
aten::expand4
380Tensor<[12,1]>,
dims: [2, 3]
aten::index.Tensor4
381Tensor<[1536]>,
dims: [0]
aten::mul.Tensor4
382Tensor<[3129]>,
dims: [0]
aten::mul.Tensor4
383Tensor<[1,192]>,
dims: [0, 1]
aten::rsub.Scalar4
384Tensor<[1,128]>,
dims: [0, 1]
aten::add.Tensor4
385Tensor<[128]>,
dims: [1]
aten::add.Tensor4
386Tensor<[10]>,
dims: [1]
aten::add.Tensor4
387Tensor<[1,32,26,26]>,
dims: [0, 1, 2, 3]
aten::convolution4
388Tensor<[1,64,24,24]>,
dims: [0, 1, 2, 3]
aten::convolution4
389Tensor<[10]>,
dims: [0]
aten::mul.Tensor4
390Tensor<[16,19,19]>,
dims: [0, 1, 2]
aten::_softmax4
391Tensor<[16,19,1]>,
dims: [0, 1, 2]
aten::_softmax4
392Tensor<[19]>,
dims: [0]
aten::add.Tensor4
393Tensor<[1,19]>,
dims: [0, 1]
aten::add.Tensor4
394Tensor<[1,19,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
395Tensor<[1,19,1024]>,
dims: [0, 1, 2]
aten::add.Tensor4
396Tensor<[1024]>,
dims: [2]
aten::add.Tensor4
397Tensor<[19,1024]>,
dims: [0, 1]
aten::add.Tensor4
398Tensor<[1024]>,
dims: [1]
aten::add.Tensor4
399Tensor<[1,16,19,19]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
400Tensor<[1,1,19,19]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
401Tensor<[19,4096]>,
dims: [0, 1]
aten::add.Tensor4
402Tensor<[4096]>,
dims: [1]
aten::add.Tensor4
403Tensor<[16,64,19]>,
dims: [0, 1, 2]
aten::bmm4
404Tensor<[16,19,64]>,
dims: [0, 1, 2]
aten::bmm4
405Tensor<[1,1,1,19]>,
dims: [0, 1, 2, 3]
aten::expand4
406Tensor<[19]>,
dims: [1]
aten::lt.Tensor4
407Tensor<[19,1]>,
dims: [0, 1]
aten::lt.Tensor4
408Tensor<[4096]>,
dims: [0]
aten::mul.Tensor4
409Tensor<[19,256008]>,
dims: [0, 1]
aten::sub.Tensor4
410Tensor<[19,19]>,
dims: [0, 1]
aten::where.self4
411Tensor<[14]>,
dims: [0]
aten::add.Tensor4
412Tensor<[1,14,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
413Tensor<[14,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
414Tensor<[24]>,
dims: [0]
aten::add.Tensor4
415Tensor<[1,24,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
416Tensor<[24,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
417Tensor<[1,40,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
418Tensor<[40,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
419Tensor<[68]>,
dims: [0]
aten::add.Tensor4
420Tensor<[1,68,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
421Tensor<[68,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
422Tensor<[1,16,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
423Tensor<[16,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
424Tensor<[28]>,
dims: [0]
aten::add.Tensor4
425Tensor<[1,28,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
426Tensor<[28,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
427Tensor<[46]>,
dims: [0]
aten::add.Tensor4
428Tensor<[1,46,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
429Tensor<[46,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
430Tensor<[78]>,
dims: [0]
aten::add.Tensor4
431Tensor<[1,78,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
432Tensor<[78,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
433Tensor<[134]>,
dims: [0]
aten::add.Tensor4
434Tensor<[1,134,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
435Tensor<[134,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
436Tensor<[20]>,
dims: [0]
aten::add.Tensor4
437Tensor<[1,20,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
438Tensor<[20,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
439Tensor<[34]>,
dims: [0]
aten::add.Tensor4
440Tensor<[1,34,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
441Tensor<[34,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
442Tensor<[58]>,
dims: [0]
aten::add.Tensor4
443Tensor<[1,58,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
444Tensor<[58,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
445Tensor<[98]>,
dims: [0]
aten::add.Tensor4
446Tensor<[1,98,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
447Tensor<[98,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
448Tensor<[168]>,
dims: [0]
aten::add.Tensor4
449Tensor<[1,168,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
450Tensor<[168,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
451Tensor<[1,320,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
452Tensor<[1,40,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
453Tensor<[1,68,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
454Tensor<[116]>,
dims: [0]
aten::add.Tensor4
455Tensor<[1,116,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
456Tensor<[116,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
457Tensor<[196]>,
dims: [0]
aten::add.Tensor4
458Tensor<[1,196,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
459Tensor<[196,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
460Tensor<[334]>,
dims: [0]
aten::add.Tensor4
461Tensor<[1,334,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
462Tensor<[334,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
463Tensor<[1,640,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
464Tensor<[1,160,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
465Tensor<[160,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
466Tensor<[272]>,
dims: [0]
aten::add.Tensor4
467Tensor<[1,272,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
468Tensor<[272,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
469Tensor<[462]>,
dims: [0]
aten::add.Tensor4
470Tensor<[1,462,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
471Tensor<[462,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
472Tensor<[1,1024,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
473Tensor<[1,32,512,512]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
474Tensor<[1,64,256,256]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
475Tensor<[1,32,256,256]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
476Tensor<[1,128,128,128]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
477Tensor<[1,64,128,128]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
478Tensor<[1,256,64,64]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
479Tensor<[1,128,64,64]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
480Tensor<[1,512,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
481Tensor<[1,256,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
482Tensor<[1,1024,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
483Tensor<[1,512,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
484Tensor<[1,256,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
485Tensor<[1,128,32,32]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
486Tensor<[1,255,16,16]>,
dims: [0, 1, 2, 3]
aten::convolution4
487Tensor<[255,1,1]>,
dims: [1, 2, 3]
aten::convolution4
488Tensor<[1,255,32,32]>,
dims: [0, 1, 2, 3]
aten::convolution4
489Tensor<[1,255,64,64]>,
dims: [0, 1, 2, 3]
aten::convolution4
490Tensor<[1,1,256,256]>,
dims: [0, 1, 2, 3]
aten::convolution4
491Tensor<[1,4,14,14]>,
dims: [0, 1, 2, 3]
aten::convolution4
492Tensor<[1,16,14,14]>,
dims: [0, 1, 2, 3]
aten::convolution4
493Tensor<[1,1,28,28]>,
dims: [0, 1, 2, 3]
aten::convolution4
494Tensor<[1,16,32,32]>,
dims: [0, 1, 2, 3]
aten::_softmax4
495Tensor<[1,16,32,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
496Tensor<[1,32,1536]>,
dims: [0, 1, 2]
aten::add.Tensor4
497Tensor<[1536]>,
dims: [2]
aten::add.Tensor4
498Tensor<[32,4608]>,
dims: [0, 1]
aten::add.Tensor4
499Tensor<[4608]>,
dims: [1]
aten::add.Tensor4
500Tensor<[32,1536]>,
dims: [0, 1]
aten::add.Tensor4
501Tensor<[32,6144]>,
dims: [0, 1]
aten::add.Tensor4
502Tensor<[6144]>,
dims: [1]
aten::add.Tensor4
503Tensor<[1,32,6144]>,
dims: [0, 1, 2]
aten::add.Tensor4
504Tensor<[16,96,32]>,
dims: [0, 1, 2]
aten::baddbmm4
505Tensor<[16,32,32]>,
dims: [0, 1, 2]
aten::baddbmm4
506Tensor<[16,1,32]>,
dims: [0, 1, 2]
aten::baddbmm4
507Tensor<[16,32,96]>,
dims: [0, 1, 2]
aten::bmm4
508Tensor<[16,1]>,
dims: [1, 2]
aten::mul.Tensor4
509Tensor<[4608]>,
dims: [0]
aten::mul.Tensor4
510Tensor<[6144]>,
dims: [0]
aten::mul.Tensor4
511Tensor<[1,12,16,16]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
512Tensor<[1,12,16,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
513Tensor<[1,16,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
514Tensor<[1,16,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
515Tensor<[16,768]>,
dims: [0, 1]
aten::add.Tensor4
516Tensor<[1,1,16,16]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
517Tensor<[16,3072]>,
dims: [0, 1]
aten::add.Tensor4
518Tensor<[12,64,16]>,
dims: [0, 1, 2]
aten::bmm4
519Tensor<[12,16,64]>,
dims: [0, 1, 2]
aten::bmm4
520Tensor<[1,1,1,16]>,
dims: [0, 1, 2, 3]
aten::expand4
521Tensor<[1,12,16,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
522Tensor<[1,12,64,16]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
523Tensor<[1,64,224,224]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
524Tensor<[1,128,112,112]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
525Tensor<[1,1,224,224]>,
dims: [0, 1, 2, 3]
aten::convolution4
526Tensor<[1,1,19200,300]>,
dims: [0, 1, 2, 3]
aten::_softmax4
527Tensor<[1,1,19200,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
528Tensor<[1,2,4800,300]>,
dims: [0, 1, 2, 3]
aten::_softmax4
529Tensor<[1,2,4800,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
530Tensor<[1,5,1200,300]>,
dims: [0, 1, 2, 3]
aten::_softmax4
531Tensor<[1,5,1200,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
532Tensor<[1,8,300,300]>,
dims: [0, 1, 2, 3]
aten::_softmax4
533Tensor<[1,8,300,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
534Tensor<[1,19200,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
535Tensor<[1,19200,64]>,
dims: [0, 1, 2]
aten::add.Tensor4
536Tensor<[64]>,
dims: [2]
aten::add.Tensor4
537Tensor<[19200,64]>,
dims: [0, 1]
aten::add.Tensor4
538Tensor<[64]>,
dims: [1]
aten::add.Tensor4
539Tensor<[1,300,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
540Tensor<[1,300,64]>,
dims: [0, 1, 2]
aten::add.Tensor4
541Tensor<[300,64]>,
dims: [0, 1]
aten::add.Tensor4
542Tensor<[19200,256]>,
dims: [0, 1]
aten::add.Tensor4
543Tensor<[1,4800,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
544Tensor<[1,4800,128]>,
dims: [0, 1, 2]
aten::add.Tensor4
545Tensor<[128]>,
dims: [2]
aten::add.Tensor4
546Tensor<[4800,128]>,
dims: [0, 1]
aten::add.Tensor4
547Tensor<[1,300,128]>,
dims: [0, 1, 2]
aten::add.Tensor4
548Tensor<[300,128]>,
dims: [0, 1]
aten::add.Tensor4
549Tensor<[4800,512]>,
dims: [0, 1]
aten::add.Tensor4
550Tensor<[1,1200,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
551Tensor<[1,1200,320]>,
dims: [0, 1, 2]
aten::add.Tensor4
552Tensor<[1200,320]>,
dims: [0, 1]
aten::add.Tensor4
553Tensor<[1,300,320]>,
dims: [0, 1, 2]
aten::add.Tensor4
554Tensor<[300,320]>,
dims: [0, 1]
aten::add.Tensor4
555Tensor<[1200,1280]>,
dims: [0, 1]
aten::add.Tensor4
556Tensor<[1,300,512]>,
dims: [0, 1, 2]
aten::add.Tensor4
557Tensor<[300,512]>,
dims: [0, 1]
aten::add.Tensor4
558Tensor<[300,2048]>,
dims: [0, 1]
aten::add.Tensor4
559Tensor<[30]>,
dims: [0]
aten::add.Tensor4
560Tensor<[30,1]>,
dims: [0, 1]
aten::add.Tensor4
561Tensor<[1,64,30,40]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
562Tensor<[1,32,30,40]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
563Tensor<[60]>,
dims: [0]
aten::add.Tensor4
564Tensor<[60,1]>,
dims: [0, 1]
aten::add.Tensor4
565Tensor<[80]>,
dims: [0]
aten::add.Tensor4
566Tensor<[1,64,60,80]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
567Tensor<[1,32,60,80]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
568Tensor<[120]>,
dims: [0]
aten::add.Tensor4
569Tensor<[120,1]>,
dims: [0, 1]
aten::add.Tensor4
570Tensor<[1,64,120,160]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
571Tensor<[1,32,120,160]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
572Tensor<[240]>,
dims: [0]
aten::add.Tensor4
573Tensor<[240,1]>,
dims: [0, 1]
aten::add.Tensor4
574Tensor<[480]>,
dims: [0]
aten::add.Tensor4
575Tensor<[480,1]>,
dims: [0, 1]
aten::add.Tensor4
576Tensor<[1,64,300]>,
dims: [0, 1, 2]
aten::bmm4
577Tensor<[1,256,64]>,
dims: [0, 1, 2]
aten::bmm4
578Tensor<[2,64,300]>,
dims: [0, 1, 2]
aten::bmm4
579Tensor<[2,300,64]>,
dims: [0, 1, 2]
aten::bmm4
580Tensor<[1,512,128]>,
dims: [0, 1, 2]
aten::bmm4
581Tensor<[5,64,300]>,
dims: [0, 1, 2]
aten::bmm4
582Tensor<[5,300,64]>,
dims: [0, 1, 2]
aten::bmm4
583Tensor<[1,1280,320]>,
dims: [0, 1, 2]
aten::bmm4
584Tensor<[8,64,300]>,
dims: [0, 1, 2]
aten::bmm4
585Tensor<[8,300,64]>,
dims: [0, 1, 2]
aten::bmm4
586Tensor<[1,2048,512]>,
dims: [0, 1, 2]
aten::bmm4
587Tensor<[1,64,15,20]>,
dims: [0, 1, 2, 3]
aten::convolution4
588Tensor<[1,256,120,160]>,
dims: [0, 1, 2, 3]
aten::convolution4
589Tensor<[1,128,60,80]>,
dims: [0, 1, 2, 3]
aten::convolution4
590Tensor<[1,128,15,20]>,
dims: [0, 1, 2, 3]
aten::convolution4
591Tensor<[1,512,60,80]>,
dims: [0, 1, 2, 3]
aten::convolution4
592Tensor<[1,320,30,40]>,
dims: [0, 1, 2, 3]
aten::convolution4
593Tensor<[1,320,15,20]>,
dims: [0, 1, 2, 3]
aten::convolution4
594Tensor<[1,1280,30,40]>,
dims: [0, 1, 2, 3]
aten::convolution4
595Tensor<[1,512,15,20]>,
dims: [0, 1, 2, 3]
aten::convolution4
596Tensor<[1,2048,15,20]>,
dims: [0, 1, 2, 3]
aten::convolution4
597Tensor<[1,2,30,40]>,
dims: [0, 1, 2, 3]
aten::convolution4
598Tensor<[2,1,1]>,
dims: [1, 2, 3]
aten::convolution4
599Tensor<[1,2,60,80]>,
dims: [0, 1, 2, 3]
aten::convolution4
600Tensor<[1,2,120,160]>,
dims: [0, 1, 2, 3]
aten::convolution4
601Tensor<[1,64,480,640]>,
dims: [0, 1, 2, 3]
aten::convolution4
602Tensor<[1,1,480,640]>,
dims: [0, 1, 2, 3]
aten::convolution4
603Tensor<[256,64]>,
dims: [1, 2]
aten::expand5
604Tensor<[512,128]>,
dims: [1, 2]
aten::expand5
605Tensor<[1280,320]>,
dims: [1, 2]
aten::expand5
606Tensor<[2048,512]>,
dims: [1, 2]
aten::expand5
607Tensor<[30,1]>,
dims: [2, 3]
aten::index.Tensor4
608Tensor<[60,1]>,
dims: [2, 3]
aten::index.Tensor4
609Tensor<[80]>,
dims: [3]
aten::index.Tensor4
610Tensor<[120,1]>,
dims: [2, 3]
aten::index.Tensor4
611Tensor<[160]>,
dims: [3]
aten::index.Tensor4
612Tensor<[240,1]>,
dims: [2, 3]
aten::index.Tensor4
613Tensor<[320]>,
dims: [3]
aten::index.Tensor4
614Tensor<[480,1]>,
dims: [2, 3]
aten::index.Tensor4
615Tensor<[640]>,
dims: [3]
aten::index.Tensor4
616Tensor<[1,1,30,40]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
617Tensor<[1,1,60,80]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
618Tensor<[1,1,120,160]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
619Tensor<[1,64,240,320]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
620Tensor<[1,12,197,197]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
621Tensor<[1,12,197,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
622Tensor<[1,197,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
623Tensor<[1,197,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
624Tensor<[197,768]>,
dims: [0, 1]
aten::add.Tensor4
625Tensor<[197,3072]>,
dims: [0, 1]
aten::add.Tensor4
626Tensor<[12,64,197]>,
dims: [0, 1, 2]
aten::bmm4
627Tensor<[12,197,64]>,
dims: [0, 1, 2]
aten::bmm4
628Tensor<[1,768,14,14]>,
dims: [0, 1, 2, 3]
aten::convolution4
629Tensor<[1,12,197,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
630Tensor<[1,12,64,197]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
631Tensor<[1,1,16384,256]>,
dims: [0, 1, 2, 3]
aten::_softmax4
632Tensor<[1,1,16384,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
633Tensor<[1,2,4096,256]>,
dims: [0, 1, 2, 3]
aten::_softmax4
634Tensor<[1,2,4096,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
635Tensor<[1,5,1024,256]>,
dims: [0, 1, 2, 3]
aten::_softmax4
636Tensor<[1,5,1024,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
637Tensor<[1,16384,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
638Tensor<[1,16384,32]>,
dims: [0, 1, 2]
aten::add.Tensor4
639Tensor<[32]>,
dims: [2]
aten::add.Tensor4
640Tensor<[16384,32]>,
dims: [0, 1]
aten::add.Tensor4
641Tensor<[1,256,32]>,
dims: [0, 1, 2]
aten::add.Tensor4
642Tensor<[256,32]>,
dims: [0, 1]
aten::add.Tensor4
643Tensor<[16384,128]>,
dims: [0, 1]
aten::add.Tensor4
644Tensor<[1,4096,64]>,
dims: [0, 1, 2]
aten::add.Tensor4
645Tensor<[4096,64]>,
dims: [0, 1]
aten::add.Tensor4
646Tensor<[256,64]>,
dims: [0, 1]
aten::add.Tensor4
647Tensor<[4096,256]>,
dims: [0, 1]
aten::add.Tensor4
648Tensor<[1,1024,160]>,
dims: [0, 1, 2]
aten::add.Tensor4
649Tensor<[160]>,
dims: [2]
aten::add.Tensor4
650Tensor<[1024,160]>,
dims: [0, 1]
aten::add.Tensor4
651Tensor<[160]>,
dims: [1]
aten::add.Tensor4
652Tensor<[1,256,160]>,
dims: [0, 1, 2]
aten::add.Tensor4
653Tensor<[256,160]>,
dims: [0, 1]
aten::add.Tensor4
654Tensor<[1,256,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
655Tensor<[256,1024]>,
dims: [0, 1]
aten::add.Tensor4
656Tensor<[1,16384,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
657Tensor<[128,1]>,
dims: [0, 1]
aten::add.Tensor4
658Tensor<[1,4096,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
659Tensor<[1,1024,256]>,
dims: [0, 1, 2]
aten::add.Tensor4
660Tensor<[1,256,128,128]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
661Tensor<[1,32,256]>,
dims: [0, 1, 2]
aten::bmm4
662Tensor<[1,128,32]>,
dims: [0, 1, 2]
aten::bmm4
663Tensor<[2,32,256]>,
dims: [0, 1, 2]
aten::bmm4
664Tensor<[2,256,32]>,
dims: [0, 1, 2]
aten::bmm4
665Tensor<[5,32,256]>,
dims: [0, 1, 2]
aten::bmm4
666Tensor<[5,256,32]>,
dims: [0, 1, 2]
aten::bmm4
667Tensor<[1,640,160]>,
dims: [0, 1, 2]
aten::bmm4
668Tensor<[8,256,32]>,
dims: [0, 1, 2]
aten::bmm4
669Tensor<[1,64,256]>,
dims: [0, 1, 2]
aten::bmm4
670Tensor<[1,160,256]>,
dims: [0, 1, 2]
aten::bmm4
671Tensor<[1,32,128,128]>,
dims: [0, 1, 2, 3]
aten::convolution4
672Tensor<[1,32,16,16]>,
dims: [0, 1, 2, 3]
aten::convolution4
673Tensor<[1,64,64,64]>,
dims: [0, 1, 2, 3]
aten::convolution4
674Tensor<[1,64,16,16]>,
dims: [0, 1, 2, 3]
aten::convolution4
675Tensor<[1,160,32,32]>,
dims: [0, 1, 2, 3]
aten::convolution4
676Tensor<[1,160,16,16]>,
dims: [0, 1, 2, 3]
aten::convolution4
677Tensor<[1,150,128,128]>,
dims: [0, 1, 2, 3]
aten::convolution4
678Tensor<[150,1,1]>,
dims: [1, 2, 3]
aten::convolution4
679Tensor<[128,32]>,
dims: [1, 2]
aten::expand5
680Tensor<[640,160]>,
dims: [1, 2]
aten::expand5
681Tensor<[1024,256]>,
dims: [1, 2]
aten::expand5
682Tensor<[32,256]>,
dims: [1, 2]
aten::expand5
683Tensor<[64,256]>,
dims: [1, 2]
aten::expand5
684Tensor<[160,256]>,
dims: [1, 2]
aten::expand5
685Tensor<[128,1]>,
dims: [2, 3]
aten::index.Tensor4
686Tensor<[1,71,7,7]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
687Tensor<[1,71,7,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
688Tensor<[1,7,4544]>,
dims: [0, 1, 2]
aten::add.Tensor4
689Tensor<[4544]>,
dims: [2]
aten::add.Tensor4
690Tensor<[1,1,7]>,
dims: [0, 1, 2]
aten::bmm4
691Tensor<[71,64,7]>,
dims: [0, 1, 2]
aten::bmm4
692Tensor<[71,7,64]>,
dims: [0, 1, 2]
aten::bmm4
693Tensor<[1,1,64,7]>,
dims: [0, 1, 2, 3]
aten::expand5
694Tensor<[1,1,7,64]>,
dims: [0, 1, 2, 3]
aten::expand5
695Tensor<[7,1,1]>,
dims: [1, 2, 3]
aten::index.Tensor4
696Tensor<[1,1]>,
dims: [2, 3]
aten::index.Tensor4
697Tensor<[1,71,7,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
698Tensor<[1,7,64]>,
dims: [0, 1, 2]
aten::mul.Tensor4
699Tensor<[1,16,112,112]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
700Tensor<[96]>,
dims: [0]
aten::add.Tensor4
701Tensor<[1,96,112,112]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
702Tensor<[96,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
703Tensor<[1,96,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
704Tensor<[144]>,
dims: [0]
aten::add.Tensor4
705Tensor<[1,144,56,56]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
706Tensor<[144,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
707Tensor<[1,144,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
708Tensor<[1,32,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
709Tensor<[1,192,28,28]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
710Tensor<[1,192,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
711Tensor<[1,64,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
712Tensor<[384]>,
dims: [0]
aten::add.Tensor4
713Tensor<[1,384,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
714Tensor<[384,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
715Tensor<[1,96,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
716Tensor<[576]>,
dims: [0]
aten::add.Tensor4
717Tensor<[1,576,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
718Tensor<[576,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
719Tensor<[1,576,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
720Tensor<[960]>,
dims: [0]
aten::add.Tensor4
721Tensor<[1,960,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
722Tensor<[960,1,1]>,
dims: [1, 2, 3]
aten::add.Tensor4
723Tensor<[1,320,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
724Tensor<[1,1280,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
725Tensor<[1,12,12,12]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
726Tensor<[1,12,12,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
727Tensor<[1,12,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
728Tensor<[1,12,128]>,
dims: [0, 1, 2]
aten::add.Tensor4
729Tensor<[12,768]>,
dims: [0, 1]
aten::add.Tensor4
730Tensor<[1,1,12,12]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
731Tensor<[1,12,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
732Tensor<[12,3072]>,
dims: [0, 1]
aten::add.Tensor4
733Tensor<[1,12,3072]>,
dims: [0, 1, 2]
aten::add.Tensor4
734Tensor<[12,2]>,
dims: [0, 1]
aten::add.Tensor4
735Tensor<[12,64,12]>,
dims: [0, 1, 2]
aten::bmm4
736Tensor<[12,12,64]>,
dims: [0, 1, 2]
aten::bmm4
737Tensor<[1,1,1,12]>,
dims: [0, 1, 2, 3]
aten::expand4
738Tensor<[1,12,12,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
739Tensor<[1,12,64,12]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
740Tensor<[1,12,9,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
741Tensor<[1,12,9,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
742Tensor<[1,9,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
743Tensor<[1,9,128]>,
dims: [0, 1, 2]
aten::add.Tensor4
744Tensor<[9,768]>,
dims: [0, 1]
aten::add.Tensor4
745Tensor<[1,1,9,9]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
746Tensor<[1,9,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
747Tensor<[9,3072]>,
dims: [0, 1]
aten::add.Tensor4
748Tensor<[1,9,3072]>,
dims: [0, 1, 2]
aten::add.Tensor4
749Tensor<[9,128]>,
dims: [0, 1]
aten::add.Tensor4
750Tensor<[9,30000]>,
dims: [0, 1]
aten::add.Tensor4
751Tensor<[30000]>,
dims: [1]
aten::add.Tensor4
752Tensor<[12,64,9]>,
dims: [0, 1, 2]
aten::bmm4
753Tensor<[12,9,64]>,
dims: [0, 1, 2]
aten::bmm4
754Tensor<[1,1,1,9]>,
dims: [0, 1, 2, 3]
aten::expand4
755Tensor<[1,12,9,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
756Tensor<[1,12,64,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
757Tensor<[30000]>,
dims: [0]
aten::mul.Tensor4
758Tensor<[1,16,9,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
759Tensor<[1,16,9,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
760Tensor<[9,2048]>,
dims: [0, 1]
aten::add.Tensor4
761Tensor<[1,9,2048]>,
dims: [0, 1, 2]
aten::add.Tensor4
762Tensor<[2048]>,
dims: [2]
aten::add.Tensor4
763Tensor<[9,8192]>,
dims: [0, 1]
aten::add.Tensor4
764Tensor<[8192]>,
dims: [1]
aten::add.Tensor4
765Tensor<[1,9,8192]>,
dims: [0, 1, 2]
aten::add.Tensor4
766Tensor<[16,128,9]>,
dims: [0, 1, 2]
aten::bmm4
767Tensor<[16,9,128]>,
dims: [0, 1, 2]
aten::bmm4
768Tensor<[1,16,9,128]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
769Tensor<[1,16,128,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
770Tensor<[8192]>,
dims: [0]
aten::mul.Tensor4
771Tensor<[9,1024]>,
dims: [0, 1]
aten::add.Tensor4
772Tensor<[1,9,1024]>,
dims: [0, 1, 2]
aten::add.Tensor4
773Tensor<[9,4096]>,
dims: [0, 1]
aten::add.Tensor4
774Tensor<[1,9,4096]>,
dims: [0, 1, 2]
aten::add.Tensor4
775Tensor<[16,64,9]>,
dims: [0, 1, 2]
aten::bmm4
776Tensor<[16,9,64]>,
dims: [0, 1, 2]
aten::bmm4
777Tensor<[1,16,9,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
778Tensor<[1,16,64,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
779Tensor<[1,64,9,9]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
780Tensor<[1,64,9,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
781Tensor<[9,16384]>,
dims: [0, 1]
aten::add.Tensor4
782Tensor<[16384]>,
dims: [1]
aten::add.Tensor4
783Tensor<[1,9,16384]>,
dims: [0, 1, 2]
aten::add.Tensor4
784Tensor<[64,64,9]>,
dims: [0, 1, 2]
aten::bmm4
785Tensor<[64,9,64]>,
dims: [0, 1, 2]
aten::bmm4
786Tensor<[1,64,9,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
787Tensor<[1,64,64,9]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
788Tensor<[16384]>,
dims: [0]
aten::mul.Tensor4
789Tensor<[1,2]>,
dims: [0, 1]
aten::add.Tensor4
790Tensor<[1,12,14,14]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
791Tensor<[1,12,14,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
792Tensor<[1,14,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
793Tensor<[1,14,128]>,
dims: [0, 1, 2]
aten::add.Tensor4
794Tensor<[14,768]>,
dims: [0, 1]
aten::add.Tensor4
795Tensor<[1,1,14,14]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
796Tensor<[1,14,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
797Tensor<[14,3072]>,
dims: [0, 1]
aten::add.Tensor4
798Tensor<[1,14,3072]>,
dims: [0, 1, 2]
aten::add.Tensor4
799Tensor<[14,2]>,
dims: [0, 1]
aten::add.Tensor4
800Tensor<[12,64,14]>,
dims: [0, 1, 2]
aten::bmm4
801Tensor<[12,14,64]>,
dims: [0, 1, 2]
aten::bmm4
802Tensor<[1,1,1,14]>,
dims: [0, 1, 2, 3]
aten::expand4
803Tensor<[1,12,14,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
804Tensor<[1,12,64,14]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
805Tensor<[1,12,50,50]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
806Tensor<[1,12,50,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
807Tensor<[2,8,7,7]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
808Tensor<[2,8,7,1]>,
dims: [0, 1, 2, 3]
aten::_safe_softmax4
809Tensor<[1,50,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
810Tensor<[1,50,768]>,
dims: [0, 1, 2]
aten::add.Tensor4
811Tensor<[50,768]>,
dims: [0, 1]
aten::add.Tensor4
812Tensor<[50,3072]>,
dims: [0, 1]
aten::add.Tensor4
813Tensor<[2,7,512]>,
dims: [0, 1, 2]
aten::add.Tensor4
814Tensor<[1,7,512]>,
dims: [0, 1, 2]
aten::add.Tensor4
815Tensor<[2,7,1]>,
dims: [0, 1, 2]
aten::add.Tensor4
816Tensor<[14,512]>,
dims: [0, 1]
aten::add.Tensor4
817Tensor<[2,1,7,7]>,
dims: [0, 1, 2, 3]
aten::add.Tensor4
818Tensor<[14,2048]>,
dims: [0, 1]
aten::add.Tensor4
819Tensor<[12,64,50]>,
dims: [0, 1, 2]
aten::bmm4
820Tensor<[12,50,64]>,
dims: [0, 1, 2]
aten::bmm4
821Tensor<[16,64,7]>,
dims: [0, 1, 2]
aten::bmm4
822Tensor<[16,7,64]>,
dims: [0, 1, 2]
aten::bmm4
823Tensor<[2,512]>,
dims: [0, 1]
aten::div.Tensor4
824Tensor<[2,1]>,
dims: [0, 1]
aten::div.Tensor4
825Tensor<[2,1,1,7]>,
dims: [0, 1, 2, 3]
aten::expand4
826Tensor<[1,12,50,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
827Tensor<[1,12,64,50]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
828Tensor<[2,8,7,64]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
829Tensor<[2,8,64,7]>,
dims: [0, 1, 2, 3]
aten::mul.Scalar4
830Tensor<[1,50,3072]>,
dims: [0, 1, 2]
aten::mul.Tensor4
831Tensor<[2,7,2048]>,
dims: [0, 1, 2]
aten::mul.Tensor4
832Tensor<[1,16,197,197]>,
dims: [0, 1, 2, 3]
aten::_softmax4
833Tensor<[1,16,197,1]>,
dims: [0, 1, 2, 3]
aten::_softmax4
834Tensor<[1,197,1024]>,
dims: [0, 1, 2]
aten::add.Tensor4
835Tensor<[197,1024]>,
dims: [0, 1]
aten::add.Tensor4
836Tensor<[27]>,
dims: [0]
aten::add.Tensor4
837Tensor<[27,1]>,
dims: [0, 1]
aten::add.Tensor4
838Tensor<[196,196]>,
dims: [0, 1]
aten::add.Tensor4
839Tensor<[197,4096]>,
dims: [0, 1]
aten::add.Tensor4
840Tensor<[1,1024]>,
dims: [0, 1]
aten::add.Tensor4
841Tensor<[197]>,
dims: [0]
aten::arange4
842Tensor<[16,64,197]>,
dims: [0, 1, 2]
aten::bmm4
843Tensor<[16,197,64]>,
dims: [0, 1, 2]
aten::bmm4
844Tensor<[14,1]>,
dims: [0, 1]
aten::expand4
845Tensor<[1,14]>,
dims: [0, 1]
aten::expand4
846Tensor<[27,1]>,
dims: [2, 3]
aten::index.Tensor4
847Tensor<[27]>,
dims: [3]
aten::index.Tensor4
848Tensor<[1,16,27,27]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
849Tensor<[2,196,1]>,
dims: [0, 1, 2]
aten::sub.Tensor4
850Tensor<[2,1,196]>,
dims: [0, 1, 2]
aten::sub.Tensor4
851Tensor<[1,197]>,
dims: [0, 1]
aten::where.self4
852Tensor<[196,197]>,
dims: [0, 1]
aten::where.self4
853Tensor<[197,1]>,
dims: [0, 1]
aten::where.self4
854Tensor<[197,197]>,
dims: [0, 1]
aten::where.self4
855Tensor<[12,1,1]>,
dims: [1, 2, 3]
aten::index.Tensor4
856Tensor<[1,12,27,27]>,
dims: [0, 1, 2, 3]
aten::mul.Tensor4
857Tensor<[1,64]>,
dims: [0, 1]
aten::add.Tensor4
858Tensor<[1,12]>,
dims: [0, 1]
aten::add.Tensor4
859Tensor<[12]>,
dims: [1]
aten::add.Tensor4
860Tensor<[1,784]>,
dims: [0, 1]
aten::add.Tensor4
861Tensor<[784]>,
dims: [1]
aten::add.Tensor4
862Tensor<[784]>,
dims: [0]
aten::mul.Tensor4
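
The `dims` entries above record which output dimensions each input is broadcast into. For example, adding a `[768]` bias to a `[1,7,768]` activation broadcasts the bias into dim 2, which is what rows such as `Tensor<[768]>, dims: [2]` under `aten::add.Tensor` describe. A small sketch, with shapes taken from the table and variable names that are illustrative only:

```python
import torch

x = torch.randn(1, 7, 768)   # activations
bias = torch.randn(768)      # broadcast into dims: [2]

# aten::add.Tensor with implicit broadcasting; in StableHLO the bias becomes an
# explicit broadcast_in_dim followed by an element-wise add.
y = x + bias                 # shape [1, 7, 768]
```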

stablehlo.ceil::ttnn.ceil

|   | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|-----------------------------|---------|------------|--------|
| 0 | Scalar | ttnn.ceil | aten::arange | 4 |
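
The single `Scalar` variation here is produced by `aten::arange`: the length of an arange is `ceil((end - start) / step)`, which is most likely where this scalar ceil comes from. A small illustration (the literal values are made up):

```python
import torch

# Number of elements = ceil((10.0 - 0.0) / 3.0) = 4
x = torch.arange(0.0, 10.0, 3.0)  # tensor([0., 3., 6., 9.])
print(x.shape)                    # torch.Size([4])
```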

stablehlo.clamp::ttnn.clamp

|    | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|----|-----------------------------|---------|------------|--------|
| 0  | Tensor<[1,1024,512]>, Tensor<[1,1024,512]> | ttnn.clamp | aten::gelu | 4 |
| 1  | Tensor<[1,256,256]>, Tensor<[1,256,256]> | ttnn.clamp | aten::gelu | 4 |
| 2  | Tensor<[1,10,3072]>, Tensor<[1,10,3072]> | ttnn.clamp | aten::gelu | 4 |
| 3  | Tensor<[1,10,768]>, Tensor<[1,10,768]> | ttnn.clamp | aten::gelu | 4 |
| 4  | Tensor<[1,4096,1280]>, Tensor<[1,4096,1280]> | ttnn.clamp | aten::gelu | 4 |
| 5  | Tensor<[1,1024,2560]>, Tensor<[1,1024,2560]> | ttnn.clamp | aten::gelu | 4 |
| 6  | Tensor<[1,256,5120]>, Tensor<[1,256,5120]> | ttnn.clamp | aten::gelu | 4 |
| 7  | Tensor<[1,64,5120]>, Tensor<[1,64,5120]> | ttnn.clamp | aten::gelu | 4 |
| 8  | Tensor<[1,25,3072]>, Tensor<[1,25,3072]> | ttnn.clamp | aten::gelu | 4 |
| 9  | Tensor<[1,1445,768]>, Tensor<[1,1445,768]> | ttnn.clamp | aten::gelu | 4 |
| 10 | Tensor<[1,3072,8]>, Tensor<[1,3072,8]> | ttnn.clamp | aten::gelu | 4 |
| 11 | Tensor<[1,256,1280]>, Tensor<[1,256,1280]> | ttnn.clamp | aten::gelu | 4 |
| 12 | Tensor<[1,2048,768]>, Tensor<[1,2048,768]> | ttnn.clamp | aten::gelu | 4 |
| 13 | Tensor<[1,201,3072]>, Tensor<[1,201,3072]> | ttnn.clamp | aten::gelu | 4 |
| 14 | Tensor<[1,1536]>, Tensor<[1,1536]> | ttnn.clamp | aten::gelu | 4 |
| 15 | Tensor<[1,19,4096]>, Tensor<[1,19,4096]> | ttnn.clamp | aten::gelu | 4 |
| 16 | Tensor<[1,16,3072]>, Tensor<[1,16,3072]> | ttnn.clamp | aten::gelu | 4 |
| 17 | Scalar, Tensor<[30]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 18 | Scalar, Tensor<[30,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 19 | Scalar, Tensor<[40]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 20 | Scalar, Tensor<[60]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 21 | Scalar, Tensor<[60,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 22 | Scalar, Tensor<[80]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 23 | Scalar, Tensor<[120]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 24 | Scalar, Tensor<[120,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 25 | Scalar, Tensor<[160]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 26 | Scalar, Tensor<[240]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 27 | Scalar, Tensor<[240,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 28 | Scalar, Tensor<[320]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 29 | Scalar, Tensor<[480]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 30 | Scalar, Tensor<[480,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 31 | Scalar, Tensor<[640]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 32 | Tensor<[1,19200,256]>, Tensor<[1,19200,256]> | ttnn.clamp | aten::gelu | 4 |
| 33 | Tensor<[1,4800,512]>, Tensor<[1,4800,512]> | ttnn.clamp | aten::gelu | 4 |
| 34 | Tensor<[1,1200,1280]>, Tensor<[1,1200,1280]> | ttnn.clamp | aten::gelu | 4 |
| 35 | Tensor<[1,300,2048]>, Tensor<[1,300,2048]> | ttnn.clamp | aten::gelu | 4 |
| 36 | Tensor<[1,197,3072]>, Tensor<[1,197,3072]> | ttnn.clamp | aten::gelu | 4 |
| 37 | Scalar, Tensor<[128]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 38 | Scalar, Tensor<[128,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 39 | Tensor<[1,16384,128]>, Tensor<[1,16384,128]> | ttnn.clamp | aten::gelu | 4 |
| 40 | Tensor<[1,4096,256]>, Tensor<[1,4096,256]> | ttnn.clamp | aten::gelu | 4 |
| 41 | Tensor<[1,1024,640]>, Tensor<[1,1024,640]> | ttnn.clamp | aten::gelu | 4 |
| 42 | Tensor<[1,256,1024]>, Tensor<[1,256,1024]> | ttnn.clamp | aten::gelu | 4 |
| 43 | Tensor<[1,7,18176]>, Tensor<[1,7,18176]> | ttnn.clamp | aten::gelu | 4 |
| 44 | Scalar, Tensor<[27]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 45 | Scalar, Tensor<[27,1]>, Scalar | ttnn.clamp | aten::clamp | 4 |
| 46 | Tensor<[1,197,4096]>, Tensor<[1,197,4096]> | ttnn.clamp | aten::gelu | 4 |
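
In these traced models, most stablehlo.clamp instances originate from aten::gelu rather than an explicit clamp. To see which aten ops (the Torch Name column) your own model contributes before cross-referencing these tables, you can list the call targets in an exported graph with stock PyTorch tooling. A minimal sketch, where TinyMLP is a hypothetical stand-in for your model:

```python
import torch

# Hypothetical toy module used only for illustration; substitute your model.
class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(512, 512)
        self.act = torch.nn.GELU()

    def forward(self, x):
        return self.act(self.fc(x))

model = TinyMLP().eval()
example_input = torch.randn(1, 1024, 512)

# torch.export captures an aten-level graph; the call_function targets are the
# aten ops that show up in the "Torch Name" column of these tables.
exported = torch.export.export(model, (example_input,))
aten_ops = sorted({
    str(node.target)
    for node in exported.graph_module.graph.nodes
    if node.op == "call_function"
})
print(aten_ops)  # e.g. includes an aten gelu entry for this module
```

Running the script prints the set of aten op names captured for the example input.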

stablehlo.compare::ttnn.?

|    | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|----|-----------------------------|---------|------------|--------|
| 0  | Tensor<[1,32,32,32]>, Tensor<[1,32,32,32]> | ttnn.eq | aten::_safe_softmax | 4 |
| 1  | Tensor<[1,1,32,32]>, Tensor<[1,1,32,32]> | ttnn.eq | aten::eq.Scalar | 4 |
| 2  | Tensor<[32,32]>, Tensor<[32,32]> | ttnn.gt | aten::gt.Tensor | 4 |
| 3  | Tensor<[1,12,7,7]>, Tensor<[1,12,7,7]> | ttnn.eq | aten::_safe_softmax | 4 |
| 4  | Tensor<[1,7]>, Tensor<[1,7]> | ttnn.eq | aten::eq.Scalar | 4 |
| 5  | Tensor<[7,7]>, Tensor<[7,7]> | ttnn.lt | aten::lt.Tensor | 4 |
| 6  | Tensor<[1,12,10,10]>, Tensor<[1,12,10,10]> | ttnn.eq | aten::_safe_softmax | 4 |
| 7  | Tensor<[1,10]>, Tensor<[1,10]> | ttnn.ne | aten::ne.Scalar | 4 |
| 8  | Tensor<[1,8,4096,4096]>, Tensor<[1,8,4096,4096]> | ttnn.eq | aten::_safe_softmax | 4 |
| 9  | Tensor<[1,8,4096,9]>, Tensor<[1,8,4096,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 10 | Tensor<[1,8,1024,1024]>, Tensor<[1,8,1024,1024]> | ttnn.eq | aten::_safe_softmax | 4 |
| 11 | Tensor<[1,8,1024,9]>, Tensor<[1,8,1024,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 12 | Tensor<[1,8,256,256]>, Tensor<[1,8,256,256]> | ttnn.eq | aten::_safe_softmax | 4 |
| 13 | Tensor<[1,8,256,9]>, Tensor<[1,8,256,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 14 | Tensor<[1,8,64,64]>, Tensor<[1,8,64,64]> | ttnn.eq | aten::_safe_softmax | 4 |
| 15 | Tensor<[1,8,64,9]>, Tensor<[1,8,64,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 16 | Tensor<[1,12,25,25]>, Tensor<[1,12,25,25]> | ttnn.eq | aten::_safe_softmax | 4 |
| 17 | Tensor<[1,3,1445,1445]>, Tensor<[1,3,1445,1445]> | ttnn.eq | aten::_safe_softmax | 4 |
| 18 | Tensor<[19]>, Tensor<[19]> | ttnn.lt | aten::lt.Scalar | 4 |
| 19 | Tensor<[19,19]>, Tensor<[19,19]> | ttnn.lt | aten::lt.Tensor | 4 |
| 20 | Tensor<[1,12,16,16]>, Tensor<[1,12,16,16]> | ttnn.eq | aten::_safe_softmax | 4 |
| 21 | Tensor<[1,12,197,197]>, Tensor<[1,12,197,197]> | ttnn.eq | aten::_safe_softmax | 4 |
| 22 | Tensor<[1,71,7,7]>, Tensor<[1,71,7,7]> | ttnn.eq | aten::_safe_softmax | 4 |
| 23 | Tensor<[1,1,7,7]>, Tensor<[1,1,7,7]> | ttnn.eq | aten::eq.Scalar | 4 |
| 24 | Tensor<[1,12,12,12]>, Tensor<[1,12,12,12]> | ttnn.eq | aten::_safe_softmax | 4 |
| 25 | Tensor<[1,12,9,9]>, Tensor<[1,12,9,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 26 | Tensor<[1,16,9,9]>, Tensor<[1,16,9,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 27 | Tensor<[1,64,9,9]>, Tensor<[1,64,9,9]> | ttnn.eq | aten::_safe_softmax | 4 |
| 28 | Tensor<[1,12,14,14]>, Tensor<[1,12,14,14]> | ttnn.eq | aten::_safe_softmax | 4 |
| 29 | Tensor<[1,12,50,50]>, Tensor<[1,12,50,50]> | ttnn.eq | aten::_safe_softmax | 4 |
| 30 | Tensor<[2,8,7,7]>, Tensor<[2,8,7,7]> | ttnn.eq | aten::_safe_softmax | 4 |
| 31 | Tensor<[197]>, Tensor<[197]> | ttnn.ge | aten::ge.Scalar | 4 |

stablehlo.concatenate::ttnn.concat

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,64]>,
Tensor<[1,32,64]>,
dim: 2
ttnn.concataten::cat5
1Tensor<[1,32,32,64]>,
Tensor<[1,32,32,64]>,
dim: 3
ttnn.concataten::cat5
2Tensor<[1,1]>,
Tensor<[1,1]>,
dim: 1
ttnn.concataten::index.Tensor4
3Tensor<[1,128,28,28]>,
Tensor<[1,19,28,28]>,
Tensor<[1,38,28,28]>,
dim: 1
ttnn.concataten::cat5
4Tensor<[1,23,40,128]>,
Tensor<[1,23,40,128]>,
dim: 3
ttnn.concataten::cat5
5Tensor<[1,1,23,40,1]>,
Tensor<[1,1,23,40,1]>,
Tensor<[1,1,23,40,1]>,
Tensor<[1,1,23,40,1]>,
dim: 4
ttnn.concataten::index.Tensor4
6Tensor<[1,23,40,64,1]>,
Tensor<[1,23,40,64,1]>,
dim: 4
ttnn.concataten::stack5
7Tensor<[1,100,1,256]>,
Tensor<[1,100,1,256]>,
Tensor<[1,100,1,256]>,
Tensor<[1,100,1,256]>,
Tensor<[1,100,1,256]>,
Tensor<[1,100,1,256]>,
dim: 0
ttnn.concataten::stack5
8Tensor<[1,160]>,
Tensor<[1,160]>,
dim: 1
ttnn.concataten::cat5
9Tensor<[1,1280,8,8]>,
Tensor<[1,1280,8,8]>,
dim: 1
ttnn.concataten::cat5
10Tensor<[1,1280,16,16]>,
Tensor<[1,1280,16,16]>,
dim: 1
ttnn.concataten::cat5
11Tensor<[1,1280,16,16]>,
Tensor<[1,640,16,16]>,
dim: 1
ttnn.concataten::cat5
12Tensor<[1,1280,32,32]>,
Tensor<[1,640,32,32]>,
dim: 1
ttnn.concataten::cat5
13Tensor<[1,640,32,32]>,
Tensor<[1,640,32,32]>,
dim: 1
ttnn.concataten::cat5
14Tensor<[1,640,32,32]>,
Tensor<[1,320,32,32]>,
dim: 1
ttnn.concataten::cat5
15Tensor<[1,640,64,64]>,
Tensor<[1,320,64,64]>,
dim: 1
ttnn.concataten::cat5
16Tensor<[1,320,64,64]>,
Tensor<[1,320,64,64]>,
dim: 1
ttnn.concataten::cat5
17Tensor<[1,1280,16,16,1]>,
Tensor<[1,1280,16,16,1]>,
Tensor<[1,1280,16,16,1]>,
Tensor<[1,1280,16,16,1]>,
dim: 4
ttnn.concataten::index.Tensor4
18Tensor<[1,1280,32,32,1]>,
Tensor<[1,1280,32,32,1]>,
Tensor<[1,1280,32,32,1]>,
Tensor<[1,1280,32,32,1]>,
dim: 4
ttnn.concataten::index.Tensor4
19Tensor<[1,640,64,64,1]>,
Tensor<[1,640,64,64,1]>,
Tensor<[1,640,64,64,1]>,
Tensor<[1,640,64,64,1]>,
dim: 4
ttnn.concataten::index.Tensor4
20Tensor<[1,1,192]>,
Tensor<[1,1344,192]>,
Tensor<[1,100,192]>,
dim: 1
ttnn.concataten::cat5
21Tensor<[1,8,768]>,
Tensor<[1,193,768]>,
dim: 1
ttnn.concataten::cat5
22Tensor<[1,8]>,
Tensor<[1,193]>,
dim: 1
ttnn.concataten::cat4
23Tensor<[1,1,12,16,1]>,
Tensor<[1,1,12,16,1]>,
Tensor<[1,1,12,16,1]>,
Tensor<[1,1,12,16,1]>,
dim: 4
ttnn.concataten::index.Tensor4
24Tensor<[12,16,1]>,
Tensor<[12,16,1]>,
dim: 2
ttnn.concataten::stack4
25Tensor<[19,1,1]>,
Tensor<[19,1,1]>,
dim: 2
ttnn.concataten::gather4
26Tensor<[1,14,56,56]>,
Tensor<[1,64,56,56]>,
dim: 1
ttnn.concataten::cat5
27Tensor<[1,14,56,56]>,
Tensor<[1,24,56,56]>,
Tensor<[1,64,56,56]>,
dim: 1
ttnn.concataten::cat5
28Tensor<[1,14,56,56]>,
Tensor<[1,40,56,56]>,
dim: 1
ttnn.concataten::cat5
29Tensor<[1,14,56,56]>,
Tensor<[1,24,56,56]>,
Tensor<[1,40,56,56]>,
Tensor<[1,64,56,56]>,
dim: 1
ttnn.concataten::cat5
30Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
Tensor<[1,68,56,56]>,
dim: 1
ttnn.concataten::cat5
31Tensor<[1,16,28,28]>,
Tensor<[1,128,28,28]>,
dim: 1
ttnn.concataten::cat5
32Tensor<[1,16,28,28]>,
Tensor<[1,28,28,28]>,
Tensor<[1,128,28,28]>,
dim: 1
ttnn.concataten::cat5
33Tensor<[1,16,28,28]>,
Tensor<[1,46,28,28]>,
dim: 1
ttnn.concataten::cat5
34Tensor<[1,16,28,28]>,
Tensor<[1,28,28,28]>,
Tensor<[1,46,28,28]>,
Tensor<[1,128,28,28]>,
dim: 1
ttnn.concataten::cat5
35Tensor<[1,16,28,28]>,
Tensor<[1,78,28,28]>,
dim: 1
ttnn.concataten::cat5
36Tensor<[1,16,28,28]>,
Tensor<[1,28,28,28]>,
Tensor<[1,78,28,28]>,
dim: 1
ttnn.concataten::cat5
37Tensor<[1,16,28,28]>,
Tensor<[1,28,28,28]>,
Tensor<[1,46,28,28]>,
Tensor<[1,78,28,28]>,
Tensor<[1,128,28,28]>,
dim: 1
ttnn.concataten::cat5
38Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
Tensor<[1,134,28,28]>,
dim: 1
ttnn.concataten::cat5
39Tensor<[1,20,28,28]>,
Tensor<[1,256,28,28]>,
dim: 1
ttnn.concataten::cat5
40Tensor<[1,20,28,28]>,
Tensor<[1,34,28,28]>,
Tensor<[1,256,28,28]>,
dim: 1
ttnn.concataten::cat5
41Tensor<[1,20,28,28]>,
Tensor<[1,58,28,28]>,
dim: 1
ttnn.concataten::cat5
42Tensor<[1,20,28,28]>,
Tensor<[1,34,28,28]>,
Tensor<[1,58,28,28]>,
Tensor<[1,256,28,28]>,
dim: 1
ttnn.concataten::cat5
43Tensor<[1,20,28,28]>,
Tensor<[1,98,28,28]>,
dim: 1
ttnn.concataten::cat5
44Tensor<[1,20,28,28]>,
Tensor<[1,34,28,28]>,
Tensor<[1,98,28,28]>,
dim: 1
ttnn.concataten::cat5
45Tensor<[1,20,28,28]>,
Tensor<[1,34,28,28]>,
Tensor<[1,58,28,28]>,
Tensor<[1,98,28,28]>,
Tensor<[1,256,28,28]>,
dim: 1
ttnn.concataten::cat5
46Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
Tensor<[1,168,28,28]>,
dim: 1
ttnn.concataten::cat5
47Tensor<[1,40,14,14]>,
Tensor<[1,320,14,14]>,
dim: 1
ttnn.concataten::cat5
48Tensor<[1,40,14,14]>,
Tensor<[1,68,14,14]>,
Tensor<[1,320,14,14]>,
dim: 1
ttnn.concataten::cat5
49Tensor<[1,40,14,14]>,
Tensor<[1,116,14,14]>,
dim: 1
ttnn.concataten::cat5
50Tensor<[1,40,14,14]>,
Tensor<[1,68,14,14]>,
Tensor<[1,116,14,14]>,
Tensor<[1,320,14,14]>,
dim: 1
ttnn.concataten::cat5
51Tensor<[1,40,14,14]>,
Tensor<[1,196,14,14]>,
dim: 1
ttnn.concataten::cat5
52Tensor<[1,40,14,14]>,
Tensor<[1,68,14,14]>,
Tensor<[1,196,14,14]>,
dim: 1
ttnn.concataten::cat5
53Tensor<[1,40,14,14]>,
Tensor<[1,68,14,14]>,
Tensor<[1,116,14,14]>,
Tensor<[1,196,14,14]>,
Tensor<[1,320,14,14]>,
dim: 1
ttnn.concataten::cat5
54Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
Tensor<[1,334,14,14]>,
dim: 1
ttnn.concataten::cat5
55Tensor<[1,160,7,7]>,
Tensor<[1,640,7,7]>,
dim: 1
ttnn.concataten::cat5
56Tensor<[1,160,7,7]>,
Tensor<[1,272,7,7]>,
Tensor<[1,640,7,7]>,
dim: 1
ttnn.concataten::cat5
57Tensor<[1,160,7,7]>,
Tensor<[1,160,7,7]>,
Tensor<[1,462,7,7]>,
dim: 1
ttnn.concataten::cat5
58Tensor<[1,256,32,32]>,
Tensor<[1,512,32,32]>,
dim: 1
ttnn.concataten::cat5
59Tensor<[1,128,64,64]>,
Tensor<[1,256,64,64]>,
dim: 1
ttnn.concataten::cat5
60Tensor<[1,256,32,32,1]>,
Tensor<[1,256,32,32,1]>,
Tensor<[1,256,32,32,1]>,
Tensor<[1,256,32,32,1]>,
dim: 4
ttnn.concataten::index.Tensor4
61Tensor<[1,128,64,64,1]>,
Tensor<[1,128,64,64,1]>,
Tensor<[1,128,64,64,1]>,
Tensor<[1,128,64,64,1]>,
dim: 4
ttnn.concataten::index.Tensor4
62Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32]>,
dim: 1
ttnn.concataten::cat5
63Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64]>,
dim: 1
ttnn.concataten::cat5
64Tensor<[1,64,128,128]>,
Tensor<[1,64,128,128]>,
dim: 1
ttnn.concataten::cat5
65Tensor<[1,32,256,256]>,
Tensor<[1,32,256,256]>,
dim: 1
ttnn.concataten::cat5
66Tensor<[1,512,28,28]>,
Tensor<[1,512,28,28]>,
dim: 1
ttnn.concataten::cat5
67Tensor<[1,256,56,56]>,
Tensor<[1,256,56,56]>,
dim: 1
ttnn.concataten::cat5
68Tensor<[1,128,112,112]>,
Tensor<[1,128,112,112]>,
dim: 1
ttnn.concataten::cat5
69Tensor<[1,64,224,224]>,
Tensor<[1,64,224,224]>,
dim: 1
ttnn.concataten::cat5
70Tensor<[1,64,30,40]>,
Tensor<[1,64,30,40]>,
dim: 1
ttnn.concataten::cat5
71Tensor<[1,64,60,80]>,
Tensor<[1,64,60,80]>,
dim: 1
ttnn.concataten::cat5
72Tensor<[1,64,120,160]>,
Tensor<[1,64,120,160]>,
dim: 1
ttnn.concataten::cat5
73Tensor<[1,64,30,40,1]>,
Tensor<[1,64,30,40,1]>,
Tensor<[1,64,30,40,1]>,
Tensor<[1,64,30,40,1]>,
dim: 4
ttnn.concataten::index.Tensor4
74Tensor<[1,64,60,80,1]>,
Tensor<[1,64,60,80,1]>,
Tensor<[1,64,60,80,1]>,
Tensor<[1,64,60,80,1]>,
dim: 4
ttnn.concataten::index.Tensor4
75Tensor<[1,64,120,160,1]>,
Tensor<[1,64,120,160,1]>,
Tensor<[1,64,120,160,1]>,
Tensor<[1,64,120,160,1]>,
dim: 4
ttnn.concataten::index.Tensor4
76Tensor<[1,64,240,320,1]>,
Tensor<[1,64,240,320,1]>,
Tensor<[1,64,240,320,1]>,
Tensor<[1,64,240,320,1]>,
dim: 4
ttnn.concataten::index.Tensor4
77Tensor<[1,64,480,640,1]>,
Tensor<[1,64,480,640,1]>,
Tensor<[1,64,480,640,1]>,
Tensor<[1,64,480,640,1]>,
dim: 4
ttnn.concataten::index.Tensor4
78Tensor<[1,1,768]>,
Tensor<[1,196,768]>,
dim: 1
ttnn.concataten::cat5
79Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
dim: 1
ttnn.concataten::cat5
80Tensor<[1,256,128,128,1]>,
Tensor<[1,256,128,128,1]>,
Tensor<[1,256,128,128,1]>,
Tensor<[1,256,128,128,1]>,
dim: 4
ttnn.concataten::index.Tensor4
81Tensor<[1,7,32]>,
Tensor<[1,7,32]>,
dim: 2
ttnn.concataten::cat5
82Tensor<[1,71,7,32]>,
Tensor<[1,71,7,32]>,
dim: 3
ttnn.concataten::cat5
83Tensor<[1,1,7,32]>,
Tensor<[1,1,7,32]>,
dim: 3
ttnn.concataten::cat5
84Tensor<[1,7,1,64,1]>,
Tensor<[1,7,1,64,1]>,
Tensor<[1,7,1,64,1]>,
Tensor<[1,7,1,64,1]>,
dim: 4
ttnn.concataten::index.Tensor4
85Tensor<[1,1,768]>,
Tensor<[1,49,768]>,
dim: 1
ttnn.concataten::cat5
86Tensor<[2,1]>,
Tensor<[2,1]>,
dim: 1
ttnn.concataten::index.Tensor4
87Tensor<[1,1,1024]>,
Tensor<[1,196,1024]>,
dim: 1
ttnn.concataten::cat5
88Tensor<[729,16]>,
Tensor<[3,16]>,
dim: 0
ttnn.concataten::cat5
89Tensor<[1,16,27,27,1]>,
Tensor<[1,16,27,27,1]>,
Tensor<[1,16,27,27,1]>,
Tensor<[1,16,27,27,1]>,
dim: 4
ttnn.concataten::index.Tensor4
90Tensor<[1,14,14]>,
Tensor<[1,14,14]>,
dim: 0
ttnn.concataten::stack4
91Tensor<[729,12]>,
Tensor<[3,12]>,
dim: 0
ttnn.concataten::cat5
92Tensor<[1,12,27,27,1]>,
Tensor<[1,12,27,27,1]>,
Tensor<[1,12,27,27,1]>,
Tensor<[1,12,27,27,1]>,
dim: 4
ttnn.concataten::index.Tensor4

stablehlo.constant

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Scalar,
aten::_safe_softmax4
1Tensor<[32]>,
aten::arange4
2Tensor<[32,1]>,
aten::triu4
3Tensor<[7]>,
aten::add.Tensor4
4Tensor<[1]>,
aten::arange4
5Tensor<[1,7]>,
aten::eq.Scalar4
6Tensor<[64]>,
aten::reciprocal5
7Tensor<[128]>,
aten::reciprocal5
8Tensor<[256]>,
aten::reciprocal5
9Tensor<[512]>,
aten::reciprocal5
10Tensor<[1,32,112,112]>,
aten::relu4
11Tensor<[1,64,112,112]>,
aten::relu4
12Tensor<[1,64,56,56]>,
aten::relu4
13Tensor<[1,128,56,56]>,
aten::relu4
14Tensor<[1,128,28,28]>,
aten::relu4
15Tensor<[1,256,28,28]>,
aten::relu4
16Tensor<[1,512,28,28]>,
aten::relu4
17Tensor<[1,1024,512]>,
aten::gelu4
18Tensor<[1,256,256]>,
aten::gelu4
19Tensor<[1,720,1280]>,
aten::ones4
20Tensor<[1,64,360,640]>,
aten::relu4
21Tensor<[1,64,180,320]>,
aten::relu4
22Tensor<[1,256,180,320]>,
aten::relu4
23Tensor<[1,128,180,320]>,
aten::relu4
24Tensor<[1,128,90,160]>,
aten::relu4
25Tensor<[1,512,90,160]>,
aten::relu4
26Tensor<[1,256,90,160]>,
aten::relu4
27Tensor<[1,256,45,80]>,
aten::relu4
28Tensor<[1,1024,45,80]>,
aten::relu4
29Tensor<[1,512,45,80]>,
aten::relu4
30Tensor<[1,512,23,40]>,
aten::relu4
31Tensor<[1,2048,23,40]>,
aten::relu4
32Tensor<[920,1,2048]>,
aten::relu4
33Tensor<[100,1,2048]>,
aten::relu4
34Tensor<[6,1,100,256]>,
aten::relu4
35Tensor<[1,1]>,
aten::select_scatter4
36Tensor<[1,3,720,1280]>,
aten::zeros5
37Tensor<[1,10]>,
aten::add.Tensor5
38Tensor<[1,10,3072]>,
aten::gelu4
39Tensor<[1,10,768]>,
aten::gelu4
40Tensor<[1,4096,1280]>,
aten::gelu4
41Tensor<[1,1024,2560]>,
aten::gelu4
42Tensor<[1,256,5120]>,
aten::gelu4
43Tensor<[1,64,5120]>,
aten::gelu4
44Tensor<[1280]>,
aten::index.Tensor4
45Tensor<[640]>,
aten::index.Tensor4
46Tensor<[1,25,3072]>,
aten::gelu4
47Tensor<[1,1445,768]>,
aten::gelu4
48Tensor<[1,100,192]>,
aten::relu4
49Tensor<[1,256,14,14]>,
aten::relu4
50Tensor<[1,512,7,7]>,
aten::relu4
51Tensor<[1,3072,8]>,
aten::gelu4
52Tensor<[2048]>,
aten::arange.start4
53Tensor<[1,256,1280]>,
aten::gelu4
54Tensor<[1,2048,768]>,
aten::gelu4
55Tensor<[1024]>,
aten::reciprocal5
56Tensor<[1,256,56,56]>,
aten::relu4
57Tensor<[1,1024,14,14]>,
aten::relu4
58Tensor<[1,512,14,14]>,
aten::relu4
59Tensor<[1,2048,7,7]>,
aten::relu4
60Tensor<[1,193]>,
aten::full_like4
61Tensor<[1,201,3072]>,
aten::gelu4
62Tensor<[1,1536]>,
aten::gelu4
63Tensor<[1,192]>,
aten::rsub.Scalar4
64Tensor<[1,8]>,
aten::zeros_like4
65Tensor<[1,32,26,26]>,
aten::relu4
66Tensor<[1,64,24,24]>,
aten::relu4
67Tensor<[1,128]>,
aten::relu4
68Tensor<[19]>,
aten::add.Tensor4
69Tensor<[1,19]>,
aten::add.Tensor4
70Tensor<[1,19,4096]>,
aten::gelu4
71Tensor<[14]>,
aten::reciprocal5
72Tensor<[24]>,
aten::reciprocal5
73Tensor<[40]>,
aten::reciprocal5
74Tensor<[68]>,
aten::reciprocal5
75Tensor<[16]>,
aten::reciprocal5
76Tensor<[28]>,
aten::reciprocal5
77Tensor<[46]>,
aten::reciprocal5
78Tensor<[78]>,
aten::reciprocal5
79Tensor<[134]>,
aten::reciprocal5
80Tensor<[20]>,
aten::reciprocal5
81Tensor<[34]>,
aten::reciprocal5
82Tensor<[58]>,
aten::reciprocal5
83Tensor<[98]>,
aten::reciprocal5
84Tensor<[168]>,
aten::reciprocal5
85Tensor<[320]>,
aten::reciprocal5
86Tensor<[116]>,
aten::reciprocal5
87Tensor<[196]>,
aten::reciprocal5
88Tensor<[334]>,
aten::reciprocal5
89Tensor<[160]>,
aten::reciprocal5
90Tensor<[272]>,
aten::reciprocal5
91Tensor<[462]>,
aten::reciprocal5
92Tensor<[1,32,256,256]>,
aten::relu4
93Tensor<[1,64,128,128]>,
aten::relu4
94Tensor<[1,128,64,64]>,
aten::relu4
95Tensor<[1,256,32,32]>,
aten::relu4
96Tensor<[1,512,16,16]>,
aten::relu4
97Tensor<[1,16,28,28]>,
aten::relu4
98Tensor<[1,4,14,14]>,
aten::relu4
99Tensor<[1,16,14,14]>,
aten::relu4
100Tensor<[1,32]>,
aten::sub.Tensor4
101Tensor<[1,16,3072]>,
aten::gelu4
102Tensor<[1,64,224,224]>,
aten::relu4
103Tensor<[1,128,112,112]>,
aten::relu4
104Tensor<[30,1]>,
aten::add.Tensor4
105Tensor<[60,1]>,
aten::add.Tensor4
106Tensor<[80]>,
aten::add.Tensor4
107Tensor<[120,1]>,
aten::add.Tensor4
108Tensor<[240,1]>,
aten::add.Tensor4
109Tensor<[480,1]>,
aten::add.Tensor4
110Tensor<[30]>,
aten::arange4
111Tensor<[60]>,
aten::arange4
112Tensor<[120]>,
aten::arange4
113Tensor<[240]>,
aten::arange4
114Tensor<[480]>,
aten::arange4
115Tensor<[1,19200,256]>,
aten::gelu4
116Tensor<[1,4800,512]>,
aten::gelu4
117Tensor<[1,1200,1280]>,
aten::gelu4
118Tensor<[1,300,2048]>,
aten::gelu4
119Tensor<[1,64,30,40]>,
aten::relu4
120Tensor<[1,32,30,40]>,
aten::relu4
121Tensor<[1,64,60,80]>,
aten::relu4
122Tensor<[1,32,60,80]>,
aten::relu4
123Tensor<[1,64,120,160]>,
aten::relu4
124Tensor<[1,32,120,160]>,
aten::relu4
125Tensor<[1,64,480,640]>,
aten::relu4
126Tensor<[1,197,3072]>,
aten::gelu4
127Tensor<[128,1]>,
aten::add.Tensor4
128Tensor<[1,16384,128]>,
aten::gelu4
129Tensor<[1,4096,256]>,
aten::gelu4
130Tensor<[1,1024,640]>,
aten::gelu4
131Tensor<[1,256,1024]>,
aten::gelu4
132Tensor<[1,256,128,128]>,
aten::relu4
133Tensor<[1,7,18176]>,
aten::gelu4
134Tensor<[7,1]>,
aten::triu4
135Tensor<[96]>,
aten::reciprocal5
136Tensor<[144]>,
aten::reciprocal5
137Tensor<[192]>,
aten::reciprocal5
138Tensor<[384]>,
aten::reciprocal5
139Tensor<[576]>,
aten::reciprocal5
140Tensor<[960]>,
aten::reciprocal5
141Tensor<[2]>,
aten::arange4
142Tensor<[27,1]>,
aten::add.Tensor4
143Tensor<[27]>,
aten::add.Tensor4
144Tensor<[196,196]>,
aten::add.Tensor4
145Tensor<[197]>,
aten::arange4
146Tensor<[1,197,4096]>,
aten::gelu4
147Tensor<[197,197]>,
aten::zeros4
148Tensor<[12]>,
aten::index.Tensor4
149Tensor<[1,64]>,
aten::relu4
150Tensor<[1,12]>,
aten::relu4

stablehlo.convert

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1]>,
aten::_safe_softmax4
1Scalar,
aten::_safe_softmax4
2Tensor<[1,1,1,32]>,
aten::add.Tensor4
3Tensor<[1,1,32,32]>,
aten::add.Tensor4
4Tensor<[1,32,4096]>,
aten::embedding4
5Tensor<[32,32]>,
aten::mul.Tensor5
6Tensor<[1,1,32]>,
prims::convert_element_type4
7Tensor<[1,32,128]>,
prims::convert_element_type5
8Tensor<[1,32,32,128]>,
prims::convert_element_type5
9Tensor<[768]>,
aten::add.Tensor4
10Tensor<[1,1,7,7]>,
aten::add.Tensor4
11Tensor<[1,7,768]>,
aten::embedding4
12Tensor<[7,7]>,
prims::convert_element_type5
13Tensor<[2304]>,
prims::convert_element_type5
14Tensor<[7,768]>,
prims::convert_element_type5
15Tensor<[768,2304]>,
prims::convert_element_type5
16Tensor<[7,2304]>,
prims::convert_element_type5
17Tensor<[1,12,7,64]>,
prims::convert_element_type5
18Tensor<[768,768]>,
prims::convert_element_type5
19Tensor<[3072]>,
prims::convert_element_type5
20Tensor<[768,3072]>,
prims::convert_element_type5
21Tensor<[7,3072]>,
prims::convert_element_type5
22Tensor<[3072,768]>,
prims::convert_element_type5
23Tensor<[1,7]>,
prims::convert_element_type5
24Tensor<[32,1,1]>,
aten::add.Tensor4
25Tensor<[64,1,1]>,
aten::add.Tensor4
26Tensor<[128,1,1]>,
aten::add.Tensor4
27Tensor<[256,1,1]>,
aten::add.Tensor4
28Tensor<[512,1,1]>,
aten::add.Tensor4
29Tensor<[1,32,112,112]>,
aten::sub.Tensor4
30Tensor<[1,64,112,112]>,
aten::sub.Tensor4
31Tensor<[1,64,56,56]>,
aten::sub.Tensor4
32Tensor<[1,128,56,56]>,
aten::sub.Tensor4
33Tensor<[1,128,28,28]>,
aten::sub.Tensor4
34Tensor<[1,256,28,28]>,
aten::sub.Tensor4
35Tensor<[1,512,28,28]>,
aten::sub.Tensor4
36Tensor<[32]>,
prims::convert_element_type5
37Tensor<[64]>,
prims::convert_element_type5
38Tensor<[128]>,
prims::convert_element_type5
39Tensor<[256]>,
prims::convert_element_type5
40Tensor<[512]>,
prims::convert_element_type5
41Tensor<[1,1024,512]>,
aten::gelu4
42Tensor<[1,256,256]>,
aten::gelu4
43Tensor<[1,256,512]>,
aten::sub.Tensor4
44Tensor<[256,768]>,
prims::convert_element_type5
45Tensor<[768,512]>,
prims::convert_element_type5
46Tensor<[256,512]>,
prims::convert_element_type5
47Tensor<[512,256]>,
prims::convert_element_type5
48Tensor<[256,256]>,
prims::convert_element_type5
49Tensor<[1000]>,
prims::convert_element_type5
50Tensor<[1,512]>,
prims::convert_element_type5
51Tensor<[512,1000]>,
prims::convert_element_type5
52Tensor<[1,1000]>,
prims::convert_element_type5
53Tensor<[8,920,920]>,
aten::_softmax4
54Tensor<[8,100,100]>,
aten::_softmax4
55Tensor<[8,100,920]>,
aten::_softmax4
56Tensor<[1,23,40]>,
aten::cumsum4
57Tensor<[920,1,256]>,
aten::sub.Tensor4
58Tensor<[100,1,256]>,
aten::sub.Tensor4
59Tensor<[1,3,720,1280]>,
aten::zeros5
60Tensor<[1,1,720,1280]>,
prims::convert_element_type5
61Tensor<[23]>,
prims::convert_element_type4
62Tensor<[40]>,
prims::convert_element_type4
63Tensor<[1,1,23,40]>,
prims::convert_element_type5
64Tensor<[1,256,23,40]>,
prims::convert_element_type5
65Tensor<[920,256]>,
prims::convert_element_type5
66Tensor<[2048]>,
prims::convert_element_type5
67Tensor<[256,2048]>,
prims::convert_element_type5
68Tensor<[920,2048]>,
prims::convert_element_type5
69Tensor<[2048,256]>,
prims::convert_element_type5
70Tensor<[100,256]>,
prims::convert_element_type5
71Tensor<[100,2048]>,
prims::convert_element_type5
72Tensor<[1,1,10,10]>,
aten::add.Tensor4
73Tensor<[1,10]>,
aten::cumsum4
74Tensor<[1,10,768]>,
aten::embedding4
75Tensor<[1,10,3072]>,
aten::gelu4
76Tensor<[10,768]>,
prims::convert_element_type5
77Tensor<[1,12,10,64]>,
prims::convert_element_type5
78Tensor<[10,3072]>,
prims::convert_element_type5
79Tensor<[250002]>,
prims::convert_element_type5
80Tensor<[768,250002]>,
prims::convert_element_type5
81Tensor<[10,250002]>,
prims::convert_element_type5
82Tensor<[1,320,1,1]>,
aten::add.Tensor4
83Tensor<[320]>,
aten::add.Tensor4
84Tensor<[640]>,
aten::add.Tensor4
85Tensor<[1,640,1,1]>,
aten::add.Tensor4
86Tensor<[1280]>,
aten::add.Tensor4
87Tensor<[1,1280,1,1]>,
aten::add.Tensor4
88Tensor<[1,2560,1,1]>,
aten::add.Tensor4
89Tensor<[1,1920,1,1]>,
aten::add.Tensor4
90Tensor<[1,960,1,1]>,
aten::add.Tensor4
91Tensor<[1,4096,1280]>,
aten::gelu4
92Tensor<[1,1024,2560]>,
aten::gelu4
93Tensor<[1,256,5120]>,
aten::gelu4
94Tensor<[1,64,5120]>,
aten::gelu4
95Tensor<[1,32,10,4096]>,
aten::sub.Tensor4
96Tensor<[1,4096,320]>,
aten::sub.Tensor4
97Tensor<[1,32,10,1024]>,
aten::sub.Tensor4
98Tensor<[1,32,20,1024]>,
aten::sub.Tensor4
99Tensor<[1,1024,640]>,
aten::sub.Tensor4
100Tensor<[1,32,20,256]>,
aten::sub.Tensor4
101Tensor<[1,32,40,256]>,
aten::sub.Tensor4
102Tensor<[1,256,1280]>,
aten::sub.Tensor4
103Tensor<[1,32,40,64]>,
aten::sub.Tensor4
104Tensor<[1,64,1280]>,
aten::sub.Tensor4
105Tensor<[1,32,80,64]>,
aten::sub.Tensor4
106Tensor<[1,32,80,256]>,
aten::sub.Tensor4
107Tensor<[1,32,60,256]>,
aten::sub.Tensor4
108Tensor<[1,32,60,1024]>,
aten::sub.Tensor4
109Tensor<[1,32,40,1024]>,
aten::sub.Tensor4
110Tensor<[1,32,30,1024]>,
aten::sub.Tensor4
111Tensor<[1,32,30,4096]>,
aten::sub.Tensor4
112Tensor<[1,32,20,4096]>,
aten::sub.Tensor4
113Tensor<[1,1]>,
prims::convert_element_type4
114Tensor<[1,320]>,
prims::convert_element_type5
115Tensor<[320,1280]>,
prims::convert_element_type5
116Tensor<[1,1280]>,
prims::convert_element_type5
117Tensor<[1280,1280]>,
prims::convert_element_type5
118Tensor<[1,320,64,64]>,
prims::convert_element_type5
119Tensor<[1280,320]>,
prims::convert_element_type5
120Tensor<[1,8,4096,40]>,
prims::convert_element_type5
121Tensor<[4096,320]>,
prims::convert_element_type5
122Tensor<[320,320]>,
prims::convert_element_type5
123Tensor<[1,8,9,40]>,
prims::convert_element_type5
124Tensor<[2560]>,
prims::convert_element_type5
125Tensor<[320,2560]>,
prims::convert_element_type5
126Tensor<[4096,2560]>,
prims::convert_element_type5
127Tensor<[4096,1280]>,
prims::convert_element_type5
128Tensor<[1,320,32,32]>,
prims::convert_element_type5
129Tensor<[1280,640]>,
prims::convert_element_type5
130Tensor<[1,640]>,
prims::convert_element_type5
131Tensor<[1,640,32,32]>,
prims::convert_element_type5
132Tensor<[1,8,1024,80]>,
prims::convert_element_type5
133Tensor<[1024,640]>,
prims::convert_element_type5
134Tensor<[640,640]>,
prims::convert_element_type5
135Tensor<[1,8,9,80]>,
prims::convert_element_type5
136Tensor<[5120]>,
prims::convert_element_type5
137Tensor<[640,5120]>,
prims::convert_element_type5
138Tensor<[1024,5120]>,
prims::convert_element_type5
139Tensor<[1024,2560]>,
prims::convert_element_type5
140Tensor<[2560,640]>,
prims::convert_element_type5
141Tensor<[1,640,16,16]>,
prims::convert_element_type5
142Tensor<[1,1280,16,16]>,
prims::convert_element_type5
143Tensor<[1,8,256,160]>,
prims::convert_element_type5
144Tensor<[256,1280]>,
prims::convert_element_type5
145Tensor<[1,8,9,160]>,
prims::convert_element_type5
146Tensor<[10240]>,
prims::convert_element_type5
147Tensor<[1280,10240]>,
prims::convert_element_type5
148Tensor<[256,10240]>,
prims::convert_element_type5
149Tensor<[256,5120]>,
prims::convert_element_type5
150Tensor<[5120,1280]>,
prims::convert_element_type5
151Tensor<[1,1280,8,8]>,
prims::convert_element_type5
152Tensor<[1,8,64,160]>,
prims::convert_element_type5
153Tensor<[64,1280]>,
prims::convert_element_type5
154Tensor<[64,10240]>,
prims::convert_element_type5
155Tensor<[64,5120]>,
prims::convert_element_type5
156Tensor<[1,2560,8,8]>,
prims::convert_element_type5
157Tensor<[16]>,
prims::convert_element_type4
158Tensor<[1,2560,16,16]>,
prims::convert_element_type5
159Tensor<[1,1920,16,16]>,
prims::convert_element_type5
160Tensor<[1,1280,32,32]>,
prims::convert_element_type5
161Tensor<[1,1920,32,32]>,
prims::convert_element_type5
162Tensor<[1,960,32,32]>,
prims::convert_element_type5
163Tensor<[1,640,64,64]>,
prims::convert_element_type5
164Tensor<[1,960,64,64]>,
prims::convert_element_type5
165Tensor<[1,1,25,25]>,
aten::add.Tensor4
166Tensor<[1,25,768]>,
aten::embedding4
167Tensor<[1,25,3072]>,
aten::gelu4
168Tensor<[25,768]>,
prims::convert_element_type5
169Tensor<[1,12,25,64]>,
prims::convert_element_type5
170Tensor<[25,3072]>,
prims::convert_element_type5
171Tensor<[2]>,
prims::convert_element_type5
172Tensor<[768,2]>,
prims::convert_element_type5
173Tensor<[25,2]>,
prims::convert_element_type5
174Tensor<[1,768]>,
prims::convert_element_type5
175Tensor<[768,1]>,
prims::convert_element_type5
176Tensor<[192]>,
aten::add.Tensor4
177Tensor<[1,1445,768]>,
aten::gelu4
178Tensor<[1,1445,192]>,
aten::sub.Tensor4
179Tensor<[1445,192]>,
prims::convert_element_type5
180Tensor<[192,192]>,
prims::convert_element_type5
181Tensor<[1,3,1445,64]>,
prims::convert_element_type5
182Tensor<[192,768]>,
prims::convert_element_type5
183Tensor<[1445,768]>,
prims::convert_element_type5
184Tensor<[768,192]>,
prims::convert_element_type5
185Tensor<[100,192]>,
prims::convert_element_type5
186Tensor<[92]>,
prims::convert_element_type5
187Tensor<[192,92]>,
prims::convert_element_type5
188Tensor<[100,92]>,
prims::convert_element_type5
189Tensor<[4]>,
prims::convert_element_type5
190Tensor<[192,4]>,
prims::convert_element_type5
191Tensor<[100,4]>,
prims::convert_element_type5
192Tensor<[1,256,14,14]>,
aten::sub.Tensor4
193Tensor<[1,512,7,7]>,
aten::sub.Tensor4
194Tensor<[1,12,8,8]>,
aten::_softmax4
195Tensor<[1,8,768]>,
aten::embedding4
196Tensor<[1,3072,8]>,
aten::gelu4
197Tensor<[1,1,1,8]>,
prims::convert_element_type4
198Tensor<[3]>,
prims::convert_element_type5
199Tensor<[768,3]>,
prims::convert_element_type5
200Tensor<[1,3]>,
prims::convert_element_type5
201Tensor<[1,8,256,2048]>,
aten::_softmax4
202Tensor<[1,8,256,256]>,
aten::_softmax4
203Tensor<[1,8,2048,256]>,
aten::_softmax4
204Tensor<[1,2048,768]>,
aten::embedding4
205Tensor<[2048,768]>,
aten::embedding4
206Tensor<[1,1,1,2048]>,
prims::convert_element_type4
207Tensor<[1280,256]>,
prims::convert_element_type5
208Tensor<[768,256]>,
prims::convert_element_type5
209Tensor<[768,1280]>,
prims::convert_element_type5
210Tensor<[2048,1280]>,
prims::convert_element_type5
211Tensor<[1280,768]>,
prims::convert_element_type5
212Tensor<[1024,1,1]>,
aten::add.Tensor4
213Tensor<[2048,1,1]>,
aten::add.Tensor4
214Tensor<[1,256,56,56]>,
aten::sub.Tensor4
215Tensor<[1,1024,14,14]>,
aten::sub.Tensor4
216Tensor<[1,512,14,14]>,
aten::sub.Tensor4
217Tensor<[1,2048,7,7]>,
aten::sub.Tensor4
218Tensor<[1024]>,
prims::convert_element_type5
219Tensor<[1,2048]>,
prims::convert_element_type5
220Tensor<[2048,1000]>,
prims::convert_element_type5
221Tensor<[1,12,201,201]>,
aten::_softmax4
222Tensor<[1,193,768]>,
aten::embedding4
223Tensor<[1,201,3072]>,
aten::gelu4
224Tensor<[1,1536]>,
aten::gelu4
225Tensor<[1536]>,
aten::mul.Tensor4
226Tensor<[1,201,768]>,
aten::sub.Tensor4
227Tensor<[1,1,384,512]>,
prims::convert_element_type4
228Tensor<[12]>,
prims::convert_element_type4
229Tensor<[1,1,12,16]>,
prims::convert_element_type4
230Tensor<[1,1,1,201]>,
prims::convert_element_type4
231Tensor<[201,768]>,
prims::convert_element_type5
232Tensor<[201,3072]>,
prims::convert_element_type5
233Tensor<[768,1536]>,
prims::convert_element_type5
234Tensor<[3129]>,
prims::convert_element_type5
235Tensor<[1536,3129]>,
prims::convert_element_type5
236Tensor<[1,3129]>,
prims::convert_element_type5
237Tensor<[1,9216]>,
prims::convert_element_type5
238Tensor<[9216,128]>,
prims::convert_element_type5
239Tensor<[1,128]>,
prims::convert_element_type5
240Tensor<[10]>,
prims::convert_element_type5
241Tensor<[128,10]>,
prims::convert_element_type5
242Tensor<[16,19,19]>,
aten::_softmax4
243Tensor<[1,19,1024]>,
aten::embedding4
244Tensor<[19]>,
aten::floor_divide4
245Tensor<[1,19,4096]>,
aten::gelu4
246Tensor<[19,1024]>,
aten::index_select4
247Tensor<[19,19]>,
prims::convert_element_type5
248Tensor<[1,1,19,19]>,
prims::convert_element_type4
249Tensor<[1024,1024]>,
prims::convert_element_type5
250Tensor<[4096]>,
prims::convert_element_type5
251Tensor<[1024,4096]>,
prims::convert_element_type5
252Tensor<[19,4096]>,
prims::convert_element_type5
253Tensor<[4096,1024]>,
prims::convert_element_type5
254Tensor<[19,256008]>,
prims::convert_element_type5
255Tensor<[14,1,1]>,
aten::add.Tensor4
256Tensor<[24,1,1]>,
aten::add.Tensor4
257Tensor<[40,1,1]>,
aten::add.Tensor4
258Tensor<[68,1,1]>,
aten::add.Tensor4
259Tensor<[16,1,1]>,
aten::add.Tensor4
260Tensor<[28,1,1]>,
aten::add.Tensor4
261Tensor<[46,1,1]>,
aten::add.Tensor4
262Tensor<[78,1,1]>,
aten::add.Tensor4
263Tensor<[134,1,1]>,
aten::add.Tensor4
264Tensor<[20,1,1]>,
aten::add.Tensor4
265Tensor<[34,1,1]>,
aten::add.Tensor4
266Tensor<[58,1,1]>,
aten::add.Tensor4
267Tensor<[98,1,1]>,
aten::add.Tensor4
268Tensor<[168,1,1]>,
aten::add.Tensor4
269Tensor<[320,1,1]>,
aten::add.Tensor4
270Tensor<[116,1,1]>,
aten::add.Tensor4
271Tensor<[196,1,1]>,
aten::add.Tensor4
272Tensor<[334,1,1]>,
aten::add.Tensor4
273Tensor<[640,1,1]>,
aten::add.Tensor4
274Tensor<[160,1,1]>,
aten::add.Tensor4
275Tensor<[272,1,1]>,
aten::add.Tensor4
276Tensor<[462,1,1]>,
aten::add.Tensor4
277Tensor<[1,14,56,56]>,
aten::sub.Tensor4
278Tensor<[1,24,56,56]>,
aten::sub.Tensor4
279Tensor<[1,40,56,56]>,
aten::sub.Tensor4
280Tensor<[1,68,56,56]>,
aten::sub.Tensor4
281Tensor<[1,16,28,28]>,
aten::sub.Tensor4
282Tensor<[1,28,28,28]>,
aten::sub.Tensor4
283Tensor<[1,46,28,28]>,
aten::sub.Tensor4
284Tensor<[1,78,28,28]>,
aten::sub.Tensor4
285Tensor<[1,134,28,28]>,
aten::sub.Tensor4
286Tensor<[1,20,28,28]>,
aten::sub.Tensor4
287Tensor<[1,34,28,28]>,
aten::sub.Tensor4
288Tensor<[1,58,28,28]>,
aten::sub.Tensor4
289Tensor<[1,98,28,28]>,
aten::sub.Tensor4
290Tensor<[1,168,28,28]>,
aten::sub.Tensor4
291Tensor<[1,320,28,28]>,
aten::sub.Tensor4
292Tensor<[1,40,14,14]>,
aten::sub.Tensor4
293Tensor<[1,68,14,14]>,
aten::sub.Tensor4
294Tensor<[1,116,14,14]>,
aten::sub.Tensor4
295Tensor<[1,196,14,14]>,
aten::sub.Tensor4
296Tensor<[1,334,14,14]>,
aten::sub.Tensor4
297Tensor<[1,640,14,14]>,
aten::sub.Tensor4
298Tensor<[1,160,7,7]>,
aten::sub.Tensor4
299Tensor<[1,272,7,7]>,
aten::sub.Tensor4
300Tensor<[1,462,7,7]>,
aten::sub.Tensor4
301Tensor<[1,1024,7,7]>,
aten::sub.Tensor4
302Tensor<[14]>,
prims::convert_element_type5
303Tensor<[24]>,
prims::convert_element_type5
304Tensor<[68]>,
prims::convert_element_type5
305Tensor<[28]>,
prims::convert_element_type5
306Tensor<[46]>,
prims::convert_element_type5
307Tensor<[78]>,
prims::convert_element_type5
308Tensor<[134]>,
prims::convert_element_type5
309Tensor<[20]>,
prims::convert_element_type5
310Tensor<[34]>,
prims::convert_element_type5
311Tensor<[58]>,
prims::convert_element_type5
312Tensor<[98]>,
prims::convert_element_type5
313Tensor<[168]>,
prims::convert_element_type5
314Tensor<[116]>,
prims::convert_element_type5
315Tensor<[196]>,
prims::convert_element_type5
316Tensor<[334]>,
prims::convert_element_type5
317Tensor<[160]>,
prims::convert_element_type5
318Tensor<[272]>,
prims::convert_element_type5
319Tensor<[462]>,
prims::convert_element_type5
320Tensor<[1,1024]>,
prims::convert_element_type5
321Tensor<[1024,1000]>,
prims::convert_element_type5
322Tensor<[1,32,512,512]>,
aten::sub.Tensor4
323Tensor<[1,64,256,256]>,
aten::sub.Tensor4
324Tensor<[1,32,256,256]>,
aten::sub.Tensor4
325Tensor<[1,128,128,128]>,
aten::sub.Tensor4
326Tensor<[1,64,128,128]>,
aten::sub.Tensor4
327Tensor<[1,256,64,64]>,
aten::sub.Tensor4
328Tensor<[1,128,64,64]>,
aten::sub.Tensor4
329Tensor<[1,512,32,32]>,
aten::sub.Tensor4
330Tensor<[1,256,32,32]>,
aten::sub.Tensor4
331Tensor<[1,1024,16,16]>,
aten::sub.Tensor4
332Tensor<[1,512,16,16]>,
aten::sub.Tensor4
333Tensor<[1,256,16,16]>,
aten::sub.Tensor4
334Tensor<[1,128,32,32]>,
aten::sub.Tensor4
335Tensor<[1,32,1536]>,
aten::embedding4
336Tensor<[16,1,32]>,
prims::convert_element_type5
337Tensor<[4608]>,
prims::convert_element_type5
338Tensor<[32,1536]>,
prims::convert_element_type5
339Tensor<[1536,4608]>,
prims::convert_element_type5
340Tensor<[32,4608]>,
prims::convert_element_type5
341Tensor<[1,16,32,32]>,
prims::convert_element_type5
342Tensor<[1536,1536]>,
prims::convert_element_type5
343Tensor<[6144]>,
prims::convert_element_type5
344Tensor<[1536,6144]>,
prims::convert_element_type5
345Tensor<[32,6144]>,
prims::convert_element_type5
346Tensor<[6144,1536]>,
prims::convert_element_type5
347Tensor<[1,1,16,16]>,
aten::add.Tensor4
348Tensor<[1,16,768]>,
aten::embedding4
349Tensor<[1,16,3072]>,
aten::gelu4
350Tensor<[16,768]>,
prims::convert_element_type5
351Tensor<[1,12,16,64]>,
prims::convert_element_type5
352Tensor<[16,3072]>,
prims::convert_element_type5
353Tensor<[1,64,224,224]>,
aten::sub.Tensor4
354Tensor<[1,128,112,112]>,
aten::sub.Tensor4
355Tensor<[1,1,19200,300]>,
aten::_softmax4
356Tensor<[1,2,4800,300]>,
aten::_softmax4
357Tensor<[1,5,1200,300]>,
aten::_softmax4
358Tensor<[1,8,300,300]>,
aten::_softmax4
359Tensor<[1,19200,256]>,
aten::gelu4
360Tensor<[1,4800,512]>,
aten::gelu4
361Tensor<[1,1200,1280]>,
aten::gelu4
362Tensor<[1,300,2048]>,
aten::gelu4
363Tensor<[1,19200,64]>,
aten::sub.Tensor4
364Tensor<[1,300,64]>,
aten::sub.Tensor4
365Tensor<[1,4800,128]>,
aten::sub.Tensor4
366Tensor<[1,300,128]>,
aten::sub.Tensor4
367Tensor<[1,1200,320]>,
aten::sub.Tensor4
368Tensor<[1,300,320]>,
aten::sub.Tensor4
369Tensor<[1,300,512]>,
aten::sub.Tensor4
370Tensor<[30,1]>,
aten::sub.Tensor4
371Tensor<[1,64,30,40]>,
aten::sub.Tensor4
372Tensor<[1,32,30,40]>,
aten::sub.Tensor4
373Tensor<[80]>,
aten::sub.Tensor4
374Tensor<[60,1]>,
aten::sub.Tensor4
375Tensor<[1,64,60,80]>,
aten::sub.Tensor4
376Tensor<[1,32,60,80]>,
aten::sub.Tensor4
377Tensor<[120,1]>,
aten::sub.Tensor4
378Tensor<[1,64,120,160]>,
aten::sub.Tensor4
379Tensor<[1,32,120,160]>,
aten::sub.Tensor4
380Tensor<[240,1]>,
aten::sub.Tensor4
381Tensor<[480,1]>,
aten::sub.Tensor4
382Tensor<[19200,64]>,
prims::convert_element_type5
383Tensor<[64,64]>,
prims::convert_element_type5
384Tensor<[300,64]>,
prims::convert_element_type5
385Tensor<[64,256]>,
prims::convert_element_type5
386Tensor<[19200,256]>,
prims::convert_element_type5
387Tensor<[4800,128]>,
prims::convert_element_type5
388Tensor<[128,128]>,
prims::convert_element_type5
389Tensor<[300,128]>,
prims::convert_element_type5
390Tensor<[128,512]>,
prims::convert_element_type5
391Tensor<[4800,512]>,
prims::convert_element_type5
392Tensor<[1200,320]>,
prims::convert_element_type5
393Tensor<[300,320]>,
prims::convert_element_type5
394Tensor<[1200,1280]>,
prims::convert_element_type5
395Tensor<[300,512]>,
prims::convert_element_type5
396Tensor<[512,512]>,
prims::convert_element_type5
397Tensor<[512,2048]>,
prims::convert_element_type5
398Tensor<[300,2048]>,
prims::convert_element_type5
399Tensor<[1,64,15,20]>,
prims::convert_element_type5
400Tensor<[30]>,
prims::convert_element_type4
401Tensor<[60]>,
prims::convert_element_type4
402Tensor<[120]>,
prims::convert_element_type4
403Tensor<[240]>,
prims::convert_element_type4
404Tensor<[1,64,240,320]>,
prims::convert_element_type5
405Tensor<[480]>,
prims::convert_element_type4
406Tensor<[1,64,480,640]>,
prims::convert_element_type5
407Tensor<[1,197,3072]>,
aten::gelu4
408Tensor<[1,197,768]>,
aten::sub.Tensor4
409Tensor<[1,3,224,224]>,
prims::convert_element_type5
410Tensor<[197,768]>,
prims::convert_element_type5
411Tensor<[1,12,197,64]>,
prims::convert_element_type5
412Tensor<[197,3072]>,
prims::convert_element_type5
413Tensor<[768,1000]>,
prims::convert_element_type5
414Tensor<[1,1,16384,256]>,
aten::_softmax4
415Tensor<[1,2,4096,256]>,
aten::_softmax4
416Tensor<[1,5,1024,256]>,
aten::_softmax4
417Tensor<[1,16384,128]>,
aten::gelu4
418Tensor<[1,4096,256]>,
aten::gelu4
419Tensor<[1,256,1024]>,
aten::gelu4
420Tensor<[1,16384,32]>,
aten::sub.Tensor4
421Tensor<[1,256,32]>,
aten::sub.Tensor4
422Tensor<[1,4096,64]>,
aten::sub.Tensor4
423Tensor<[1,256,64]>,
aten::sub.Tensor4
424Tensor<[1,1024,160]>,
aten::sub.Tensor4
425Tensor<[1,256,160]>,
aten::sub.Tensor4
426Tensor<[128,1]>,
aten::sub.Tensor4
427Tensor<[1,256,128,128]>,
aten::sub.Tensor4
428Tensor<[16384,32]>,
prims::convert_element_type5
429Tensor<[256,32]>,
prims::convert_element_type5
430Tensor<[32,128]>,
prims::convert_element_type5
431Tensor<[16384,128]>,
prims::convert_element_type5
432Tensor<[4096,64]>,
prims::convert_element_type5
433Tensor<[256,64]>,
prims::convert_element_type5
434Tensor<[4096,256]>,
prims::convert_element_type5
435Tensor<[1024,160]>,
prims::convert_element_type5
436Tensor<[160,160]>,
prims::convert_element_type5
437Tensor<[256,160]>,
prims::convert_element_type5
438Tensor<[160,640]>,
prims::convert_element_type5
439Tensor<[256,1024]>,
prims::convert_element_type5
440Tensor<[1,1,1,7]>,
aten::add.Tensor4
441Tensor<[4544]>,
aten::add.Tensor4
442Tensor<[1,7,4544]>,
aten::embedding4
443Tensor<[1,7,18176]>,
aten::gelu4
444Tensor<[1,1,7]>,
prims::convert_element_type4
445Tensor<[1,7,64]>,
prims::convert_element_type5
446Tensor<[1,71,7,64]>,
prims::convert_element_type5
447Tensor<[1,1,7,64]>,
prims::convert_element_type5
448Tensor<[96,1,1]>,
aten::add.Tensor4
449Tensor<[144,1,1]>,
aten::add.Tensor4
450Tensor<[192,1,1]>,
aten::add.Tensor4
451Tensor<[384,1,1]>,
aten::add.Tensor4
452Tensor<[576,1,1]>,
aten::add.Tensor4
453Tensor<[960,1,1]>,
aten::add.Tensor4
454Tensor<[1280,1,1]>,
aten::add.Tensor4
455Tensor<[1,16,112,112]>,
aten::sub.Tensor4
456Tensor<[1,96,112,112]>,
aten::sub.Tensor4
457Tensor<[1,96,56,56]>,
aten::sub.Tensor4
458Tensor<[1,144,56,56]>,
aten::sub.Tensor4
459Tensor<[1,144,28,28]>,
aten::sub.Tensor4
460Tensor<[1,32,28,28]>,
aten::sub.Tensor4
461Tensor<[1,192,28,28]>,
aten::sub.Tensor4
462Tensor<[1,192,14,14]>,
aten::sub.Tensor4
463Tensor<[1,64,14,14]>,
aten::sub.Tensor4
464Tensor<[1,384,14,14]>,
aten::sub.Tensor4
465Tensor<[1,96,14,14]>,
aten::sub.Tensor4
466Tensor<[1,576,14,14]>,
aten::sub.Tensor4
467Tensor<[1,576,7,7]>,
aten::sub.Tensor4
468Tensor<[1,960,7,7]>,
aten::sub.Tensor4
469Tensor<[1,320,7,7]>,
aten::sub.Tensor4
470Tensor<[1,1280,7,7]>,
aten::sub.Tensor4
471Tensor<[96]>,
prims::convert_element_type5
472Tensor<[144]>,
prims::convert_element_type5
473Tensor<[384]>,
prims::convert_element_type5
474Tensor<[576]>,
prims::convert_element_type5
475Tensor<[960]>,
prims::convert_element_type5
476Tensor<[1280,1000]>,
prims::convert_element_type5
477Tensor<[1,1,12,12]>,
aten::add.Tensor4
478Tensor<[1,12,128]>,
aten::embedding4
479Tensor<[1,12,768]>,
aten::sub.Tensor4
480Tensor<[12,128]>,
prims::convert_element_type5
481Tensor<[128,768]>,
prims::convert_element_type5
482Tensor<[12,768]>,
prims::convert_element_type5
483Tensor<[1,12,12,64]>,
prims::convert_element_type5
484Tensor<[12,3072]>,
prims::convert_element_type5
485Tensor<[12,2]>,
prims::convert_element_type5
486Tensor<[1,1,9,9]>,
aten::add.Tensor4
487Tensor<[1,9,128]>,
aten::embedding4
488Tensor<[1,9,768]>,
aten::sub.Tensor4
489Tensor<[9,128]>,
prims::convert_element_type5
490Tensor<[9,768]>,
prims::convert_element_type5
491Tensor<[1,12,9,64]>,
prims::convert_element_type5
492Tensor<[9,3072]>,
prims::convert_element_type5
493Tensor<[768,128]>,
prims::convert_element_type5
494Tensor<[30000]>,
prims::convert_element_type5
495Tensor<[128,30000]>,
prims::convert_element_type5
496Tensor<[9,30000]>,
prims::convert_element_type5
497Tensor<[1,9,2048]>,
aten::sub.Tensor4
498Tensor<[128,2048]>,
prims::convert_element_type5
499Tensor<[9,2048]>,
prims::convert_element_type5
500Tensor<[2048,2048]>,
prims::convert_element_type5
501Tensor<[1,16,9,128]>,
prims::convert_element_type5
502Tensor<[8192]>,
prims::convert_element_type5
503Tensor<[2048,8192]>,
prims::convert_element_type5
504Tensor<[9,8192]>,
prims::convert_element_type5
505Tensor<[8192,2048]>,
prims::convert_element_type5
506Tensor<[2048,128]>,
prims::convert_element_type5
507Tensor<[1,9,1024]>,
aten::sub.Tensor4
508Tensor<[128,1024]>,
prims::convert_element_type5
509Tensor<[9,1024]>,
prims::convert_element_type5
510Tensor<[1,16,9,64]>,
prims::convert_element_type5
511Tensor<[9,4096]>,
prims::convert_element_type5
512Tensor<[1024,128]>,
prims::convert_element_type5
513Tensor<[1,9,4096]>,
aten::sub.Tensor4
514Tensor<[128,4096]>,
prims::convert_element_type5
515Tensor<[4096,4096]>,
prims::convert_element_type5
516Tensor<[1,64,9,64]>,
prims::convert_element_type5
517Tensor<[16384]>,
prims::convert_element_type5
518Tensor<[4096,16384]>,
prims::convert_element_type5
519Tensor<[9,16384]>,
prims::convert_element_type5
520Tensor<[16384,4096]>,
prims::convert_element_type5
521Tensor<[4096,128]>,
prims::convert_element_type5
522Tensor<[1,2]>,
prims::convert_element_type5
523Tensor<[1,1,14,14]>,
aten::add.Tensor4
524Tensor<[1,14,128]>,
aten::embedding4
525Tensor<[1,14,768]>,
aten::sub.Tensor4
526Tensor<[14,128]>,
prims::convert_element_type5
527Tensor<[14,768]>,
prims::convert_element_type5
528Tensor<[1,12,14,64]>,
prims::convert_element_type5
529Tensor<[14,3072]>,
prims::convert_element_type5
530Tensor<[14,2]>,
prims::convert_element_type5
531Tensor<[2,1,7,7]>,
aten::add.Tensor4
532Tensor<[1,50,768]>,
aten::embedding4
533Tensor<[2,7,512]>,
aten::embedding4
534Tensor<[1,7,512]>,
aten::embedding4
535Tensor<[50,768]>,
prims::convert_element_type5
536Tensor<[1,12,50,64]>,
prims::convert_element_type5
537Tensor<[50,3072]>,
prims::convert_element_type5
538Tensor<[14,512]>,
prims::convert_element_type5
539Tensor<[2,8,7,64]>,
prims::convert_element_type5
540Tensor<[14,2048]>,
prims::convert_element_type5
541Tensor<[2048,512]>,
prims::convert_element_type5
542Tensor<[2,7]>,
prims::convert_element_type4
543Tensor<[1,16,197,197]>,
aten::_softmax4
544Tensor<[197]>,
aten::floor_divide4
545Tensor<[1,197,4096]>,
aten::gelu4
546Tensor<[1,197,1024]>,
aten::sub.Tensor4
547Tensor<[27]>,
aten::sub.Tensor4
548Tensor<[27,1]>,
aten::sub.Tensor4
549Tensor<[197,1024]>,
prims::convert_element_type5
550Tensor<[1,16,27,27]>,
prims::convert_element_type5
551Tensor<[197,4096]>,
prims::convert_element_type5
552Tensor<[1,12,197,197]>,
aten::_softmax4
553Tensor<[1,12,27,27]>,
prims::convert_element_type5
554Tensor<[1,784]>,
prims::convert_element_type5
555Tensor<[784,128]>,
prims::convert_element_type5
556Tensor<[128,64]>,
prims::convert_element_type5
557Tensor<[1,64]>,
prims::convert_element_type5
558Tensor<[64,12]>,
prims::convert_element_type5
559Tensor<[1,12]>,
prims::convert_element_type5
560Tensor<[12,3]>,
prims::convert_element_type5
561Tensor<[3,12]>,
prims::convert_element_type5
562Tensor<[12,64]>,
prims::convert_element_type5
563Tensor<[64,128]>,
prims::convert_element_type5
564Tensor<[784]>,
prims::convert_element_type5
565Tensor<[128,784]>,
prims::convert_element_type5

stablehlo.convolution::ttnn.conv2d

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,3,224,224]>,
Tensor<[32,3,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
1Tensor<[1,32,112,112]>,
Tensor<[32,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 32
ttnn.conv2daten::convolution5
2Tensor<[1,32,112,112]>,
Tensor<[64,32,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
3Tensor<[1,64,112,112]>,
Tensor<[64,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 64
ttnn.conv2daten::convolution5
4Tensor<[1,64,56,56]>,
Tensor<[128,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
5Tensor<[1,128,56,56]>,
Tensor<[128,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 128
ttnn.conv2daten::convolution5
6Tensor<[1,128,56,56]>,
Tensor<[128,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
7Tensor<[1,128,56,56]>,
Tensor<[128,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 128
ttnn.conv2daten::convolution5
8Tensor<[1,128,28,28]>,
Tensor<[256,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
9Tensor<[1,256,28,28]>,
Tensor<[256,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 256
ttnn.conv2daten::convolution5
10Tensor<[1,256,28,28]>,
Tensor<[256,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
11Tensor<[1,256,28,28]>,
Tensor<[512,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
12Tensor<[1,512,28,28]>,
Tensor<[512,1,3,3]>,
stride: [1, 1]
pad: [[2, 2], [2, 2]]
rhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 512
ttnn.conv2daten::convolution5
13Tensor<[1,512,28,28]>,
Tensor<[512,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
14Tensor<[1,512,28,28]>,
Tensor<[512,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 512
ttnn.conv2daten::convolution5
15Tensor<[1,512,28,28]>,
Tensor<[128,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
16Tensor<[1,128,28,28]>,
Tensor<[128,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 128
ttnn.conv2daten::convolution5
17Tensor<[1,128,28,28]>,
Tensor<[128,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
18Tensor<[1,128,28,28]>,
Tensor<[128,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
19Tensor<[1,128,28,28]>,
Tensor<[512,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
20Tensor<[1,512,28,28]>,
Tensor<[19,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
21Tensor<[1,512,28,28]>,
Tensor<[38,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
22Tensor<[1,185,28,28]>,
Tensor<[128,185,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
23Tensor<[1,128,28,28]>,
Tensor<[128,128,3,3]>,
stride: [1, 1]
pad: [[2, 2], [2, 2]]
rhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
24Tensor<[1,128,28,28]>,
Tensor<[19,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
25Tensor<[1,128,28,28]>,
Tensor<[38,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
26Tensor<[1,256,512]>,
Tensor<[1024,256,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
27Tensor<[1,1024,512]>,
Tensor<[256,1024,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
28Tensor<[1,3,720,1280]>,
Tensor<[64,3,7,7]>,
stride: [2, 2]
pad: [[3, 3], [3, 3]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
29Tensor<[1,64,180,320]>,
Tensor<[64,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
30Tensor<[1,64,180,320]>,
Tensor<[64,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
31Tensor<[1,64,180,320]>,
Tensor<[256,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
32Tensor<[1,256,180,320]>,
Tensor<[64,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
33Tensor<[1,256,180,320]>,
Tensor<[128,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
34Tensor<[1,128,180,320]>,
Tensor<[128,128,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
35Tensor<[1,128,90,160]>,
Tensor<[512,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
36Tensor<[1,256,180,320]>,
Tensor<[512,256,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
37Tensor<[1,512,90,160]>,
Tensor<[128,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
38Tensor<[1,128,90,160]>,
Tensor<[128,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
39Tensor<[1,512,90,160]>,
Tensor<[256,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
40Tensor<[1,256,90,160]>,
Tensor<[256,256,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
41Tensor<[1,256,45,80]>,
Tensor<[1024,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
42Tensor<[1,512,90,160]>,
Tensor<[1024,512,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
43Tensor<[1,1024,45,80]>,
Tensor<[256,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
44Tensor<[1,256,45,80]>,
Tensor<[256,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
45Tensor<[1,1024,45,80]>,
Tensor<[512,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
46Tensor<[1,512,45,80]>,
Tensor<[512,512,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
47Tensor<[1,512,23,40]>,
Tensor<[2048,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
48Tensor<[1,1024,45,80]>,
Tensor<[2048,1024,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
49Tensor<[1,2048,23,40]>,
Tensor<[512,2048,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
50Tensor<[1,512,23,40]>,
Tensor<[512,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
51Tensor<[1,2048,23,40]>,
Tensor<[256,2048,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
52Tensor<[1,4,64,64]>,
Tensor<[320,4,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
53Tensor<[1,320,64,64]>,
Tensor<[320,320,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
54Tensor<[1,320,64,64]>,
Tensor<[320,320,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
55Tensor<[1,320,64,64]>,
Tensor<[320,320,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
56Tensor<[1,320,32,32]>,
Tensor<[640,320,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
57Tensor<[1,640,32,32]>,
Tensor<[640,640,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
58Tensor<[1,320,32,32]>,
Tensor<[640,320,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
59Tensor<[1,640,32,32]>,
Tensor<[640,640,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
60Tensor<[1,640,32,32]>,
Tensor<[640,640,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
61Tensor<[1,640,16,16]>,
Tensor<[1280,640,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
62Tensor<[1,1280,16,16]>,
Tensor<[1280,1280,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
63Tensor<[1,640,16,16]>,
Tensor<[1280,640,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
64Tensor<[1,1280,16,16]>,
Tensor<[1280,1280,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
65Tensor<[1,1280,16,16]>,
Tensor<[1280,1280,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
66Tensor<[1,1280,8,8]>,
Tensor<[1280,1280,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
67Tensor<[1,1280,8,8]>,
Tensor<[1280,1280,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
68Tensor<[1,2560,8,8]>,
Tensor<[1280,2560,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
69Tensor<[1,2560,8,8]>,
Tensor<[1280,2560,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
70Tensor<[1,2560,16,16]>,
Tensor<[1280,2560,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
71Tensor<[1,2560,16,16]>,
Tensor<[1280,2560,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
72Tensor<[1,1920,16,16]>,
Tensor<[1280,1920,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
73Tensor<[1,1920,16,16]>,
Tensor<[1280,1920,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
74Tensor<[1,1280,32,32]>,
Tensor<[1280,1280,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
75Tensor<[1,1920,32,32]>,
Tensor<[640,1920,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
76Tensor<[1,1920,32,32]>,
Tensor<[640,1920,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
77Tensor<[1,1280,32,32]>,
Tensor<[640,1280,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
78Tensor<[1,1280,32,32]>,
Tensor<[640,1280,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
79Tensor<[1,960,32,32]>,
Tensor<[640,960,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
80Tensor<[1,960,32,32]>,
Tensor<[640,960,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
81Tensor<[1,640,64,64]>,
Tensor<[640,640,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
82Tensor<[1,960,64,64]>,
Tensor<[320,960,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
83Tensor<[1,960,64,64]>,
Tensor<[320,960,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
84Tensor<[1,640,64,64]>,
Tensor<[320,640,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
85Tensor<[1,640,64,64]>,
Tensor<[320,640,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
86Tensor<[1,320,64,64]>,
Tensor<[4,320,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
87Tensor<[1,3,512,672]>,
Tensor<[192,3,16,16]>,
stride: [16, 16]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
88Tensor<[1,3,224,224]>,
Tensor<[64,3,7,7]>,
stride: [2, 2]
pad: [[3, 3], [3, 3]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
89Tensor<[1,64,56,56]>,
Tensor<[64,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
90Tensor<[1,64,56,56]>,
Tensor<[128,64,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
91Tensor<[1,64,56,56]>,
Tensor<[128,64,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
92Tensor<[1,128,28,28]>,
Tensor<[256,128,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
93Tensor<[1,256,14,14]>,
Tensor<[256,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
94Tensor<[1,128,28,28]>,
Tensor<[256,128,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
95Tensor<[1,256,14,14]>,
Tensor<[512,256,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
96Tensor<[1,512,7,7]>,
Tensor<[512,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
97Tensor<[1,256,14,14]>,
Tensor<[512,256,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
98Tensor<[1,768,8]>,
Tensor<[768,192,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 4
ttnn.conv2daten::convolution4
99Tensor<[1,768,8]>,
Tensor<[768,768,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
100Tensor<[1,768,8]>,
Tensor<[3072,192,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 4
ttnn.conv2daten::convolution4
101Tensor<[1,3072,8]>,
Tensor<[768,768,1]>,
stride: [1]
pad: [[0, 0]]
rhs_dilate: [1]
batch_group_count: 1
feature_group_count: 4
ttnn.conv2daten::convolution4
102Tensor<[1,64,56,56]>,
Tensor<[64,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
103Tensor<[1,64,56,56]>,
Tensor<[256,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
104Tensor<[1,256,56,56]>,
Tensor<[64,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
105Tensor<[1,256,56,56]>,
Tensor<[128,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
106Tensor<[1,128,56,56]>,
Tensor<[128,128,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
107Tensor<[1,256,56,56]>,
Tensor<[512,256,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
108Tensor<[1,512,28,28]>,
Tensor<[256,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
109Tensor<[1,256,28,28]>,
Tensor<[256,256,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
110Tensor<[1,256,14,14]>,
Tensor<[1024,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
111Tensor<[1,512,28,28]>,
Tensor<[1024,512,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
112Tensor<[1,1024,14,14]>,
Tensor<[256,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
113Tensor<[1,1024,14,14]>,
Tensor<[512,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
114Tensor<[1,512,14,14]>,
Tensor<[512,512,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
115Tensor<[1,512,7,7]>,
Tensor<[2048,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
116Tensor<[1,1024,14,14]>,
Tensor<[2048,1024,1,1]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
117Tensor<[1,2048,7,7]>,
Tensor<[512,2048,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
118Tensor<[1,3,384,512]>,
Tensor<[768,3,32,32]>,
stride: [32, 32]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
119Tensor<[1,1,28,28]>,
Tensor<[32,1,3,3]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
120Tensor<[1,32,26,26]>,
Tensor<[64,32,3,3]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
121Tensor<[1,32,112,112]>,
Tensor<[64,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
122Tensor<[1,64,56,56]>,
Tensor<[14,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
123Tensor<[1,78,56,56]>,
Tensor<[24,78,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
124Tensor<[1,24,56,56]>,
Tensor<[14,24,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
125Tensor<[1,102,56,56]>,
Tensor<[40,102,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
126Tensor<[1,40,56,56]>,
Tensor<[14,40,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
127Tensor<[1,54,56,56]>,
Tensor<[24,54,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
128Tensor<[1,142,56,56]>,
Tensor<[68,142,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
129Tensor<[1,124,56,56]>,
Tensor<[128,124,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
130Tensor<[1,128,28,28]>,
Tensor<[16,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
131Tensor<[1,144,28,28]>,
Tensor<[28,144,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
132Tensor<[1,28,28,28]>,
Tensor<[16,28,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
133Tensor<[1,172,28,28]>,
Tensor<[46,172,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
134Tensor<[1,46,28,28]>,
Tensor<[16,46,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
135Tensor<[1,62,28,28]>,
Tensor<[28,62,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
136Tensor<[1,218,28,28]>,
Tensor<[78,218,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
137Tensor<[1,78,28,28]>,
Tensor<[16,78,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
138Tensor<[1,94,28,28]>,
Tensor<[28,94,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
139Tensor<[1,122,28,28]>,
Tensor<[46,122,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
140Tensor<[1,296,28,28]>,
Tensor<[134,296,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
141Tensor<[1,262,28,28]>,
Tensor<[256,262,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
142Tensor<[1,256,28,28]>,
Tensor<[20,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
143Tensor<[1,276,28,28]>,
Tensor<[34,276,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
144Tensor<[1,34,28,28]>,
Tensor<[20,34,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
145Tensor<[1,310,28,28]>,
Tensor<[58,310,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
146Tensor<[1,58,28,28]>,
Tensor<[20,58,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
147Tensor<[1,78,28,28]>,
Tensor<[34,78,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
148Tensor<[1,368,28,28]>,
Tensor<[98,368,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
149Tensor<[1,98,28,28]>,
Tensor<[20,98,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
150Tensor<[1,118,28,28]>,
Tensor<[34,118,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
151Tensor<[1,152,28,28]>,
Tensor<[58,152,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
152Tensor<[1,466,28,28]>,
Tensor<[168,466,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
153Tensor<[1,328,28,28]>,
Tensor<[320,328,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
154Tensor<[1,320,14,14]>,
Tensor<[40,320,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
155Tensor<[1,360,14,14]>,
Tensor<[68,360,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
156Tensor<[1,68,14,14]>,
Tensor<[40,68,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
157Tensor<[1,428,14,14]>,
Tensor<[116,428,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
158Tensor<[1,116,14,14]>,
Tensor<[40,116,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
159Tensor<[1,156,14,14]>,
Tensor<[68,156,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
160Tensor<[1,544,14,14]>,
Tensor<[196,544,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
161Tensor<[1,196,14,14]>,
Tensor<[40,196,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
162Tensor<[1,236,14,14]>,
Tensor<[68,236,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
163Tensor<[1,304,14,14]>,
Tensor<[116,304,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
164Tensor<[1,740,14,14]>,
Tensor<[334,740,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
165Tensor<[1,654,14,14]>,
Tensor<[640,654,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
166Tensor<[1,640,7,7]>,
Tensor<[160,640,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
167Tensor<[1,800,7,7]>,
Tensor<[272,800,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
168Tensor<[1,272,7,7]>,
Tensor<[160,272,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
169Tensor<[1,1072,7,7]>,
Tensor<[462,1072,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
170Tensor<[1,782,7,7]>,
Tensor<[1024,782,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
171Tensor<[1,3,512,512]>,
Tensor<[32,3,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
172Tensor<[1,32,512,512]>,
Tensor<[64,32,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
173Tensor<[1,64,256,256]>,
Tensor<[32,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
174Tensor<[1,32,256,256]>,
Tensor<[64,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
175Tensor<[1,64,256,256]>,
Tensor<[128,64,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
176Tensor<[1,128,128,128]>,
Tensor<[64,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
177Tensor<[1,64,128,128]>,
Tensor<[128,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
178Tensor<[1,128,128,128]>,
Tensor<[256,128,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
179Tensor<[1,256,64,64]>,
Tensor<[128,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
180Tensor<[1,128,64,64]>,
Tensor<[256,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
181Tensor<[1,256,64,64]>,
Tensor<[512,256,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
182Tensor<[1,512,32,32]>,
Tensor<[256,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
183Tensor<[1,256,32,32]>,
Tensor<[512,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
184Tensor<[1,512,32,32]>,
Tensor<[1024,512,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
185Tensor<[1,1024,16,16]>,
Tensor<[512,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
186Tensor<[1,512,16,16]>,
Tensor<[1024,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
187Tensor<[1,1024,16,16]>,
Tensor<[255,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
188Tensor<[1,512,16,16]>,
Tensor<[256,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
189Tensor<[1,768,32,32]>,
Tensor<[256,768,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
190Tensor<[1,512,32,32]>,
Tensor<[255,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
191Tensor<[1,256,32,32]>,
Tensor<[128,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
192Tensor<[1,384,64,64]>,
Tensor<[128,384,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
193Tensor<[1,256,64,64]>,
Tensor<[255,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
194Tensor<[1,3,256,256]>,
Tensor<[32,3,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
195Tensor<[1,32,256,256]>,
Tensor<[32,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
196Tensor<[1,32,128,128]>,
Tensor<[64,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
197Tensor<[1,64,128,128]>,
Tensor<[64,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
198Tensor<[1,64,64,64]>,
Tensor<[128,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
199Tensor<[1,128,64,64]>,
Tensor<[128,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
200Tensor<[1,128,32,32]>,
Tensor<[256,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
201Tensor<[1,256,32,32]>,
Tensor<[256,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
202Tensor<[1,256,16,16]>,
Tensor<[512,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
203Tensor<[1,512,16,16]>,
Tensor<[512,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
204Tensor<[1,512,16,16]>,
Tensor<[2,2,256,512]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
205Tensor<[1,512,32,32]>,
Tensor<[256,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
206Tensor<[1,256,32,32]>,
Tensor<[2,2,128,256]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
207Tensor<[1,256,64,64]>,
Tensor<[128,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
208Tensor<[1,128,64,64]>,
Tensor<[2,2,64,128]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
209Tensor<[1,128,128,128]>,
Tensor<[64,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
210Tensor<[1,64,128,128]>,
Tensor<[2,2,32,64]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
211Tensor<[1,64,256,256]>,
Tensor<[32,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
212Tensor<[1,32,256,256]>,
Tensor<[1,32,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
213Tensor<[1,1,28,28]>,
Tensor<[16,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
214Tensor<[1,16,14,14]>,
Tensor<[4,16,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
215Tensor<[1,4,7,7]>,
Tensor<[2,2,16,4]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
216Tensor<[1,16,14,14]>,
Tensor<[2,2,1,16]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
217Tensor<[1,3,224,224]>,
Tensor<[64,3,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
218Tensor<[1,64,224,224]>,
Tensor<[64,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
219Tensor<[1,64,112,112]>,
Tensor<[128,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
220Tensor<[1,128,112,112]>,
Tensor<[128,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
221Tensor<[1,128,56,56]>,
Tensor<[256,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
222Tensor<[1,256,56,56]>,
Tensor<[256,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
223Tensor<[1,256,28,28]>,
Tensor<[512,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
224Tensor<[1,512,28,28]>,
Tensor<[512,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
225Tensor<[1,512,14,14]>,
Tensor<[1024,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
226Tensor<[1,1024,14,14]>,
Tensor<[1024,1024,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
227Tensor<[1,1024,14,14]>,
Tensor<[2,2,512,1024]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
228Tensor<[1,1024,28,28]>,
Tensor<[512,1024,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
229Tensor<[1,512,28,28]>,
Tensor<[2,2,256,512]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
230Tensor<[1,512,56,56]>,
Tensor<[256,512,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
231Tensor<[1,256,56,56]>,
Tensor<[2,2,128,256]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
232Tensor<[1,256,112,112]>,
Tensor<[128,256,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
233Tensor<[1,128,112,112]>,
Tensor<[2,2,64,128]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
lhs_dilate: [2, 2]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
234Tensor<[1,128,224,224]>,
Tensor<[64,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
235Tensor<[1,64,224,224]>,
Tensor<[1,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
236Tensor<[1,3,480,640]>,
Tensor<[64,3,7,7]>,
stride: [4, 4]
pad: [[3, 3], [3, 3]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
237Tensor<[1,64,120,160]>,
Tensor<[64,64,8,8]>,
stride: [8, 8]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
238Tensor<[1,256,120,160]>,
Tensor<[256,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 256
ttnn.conv2daten::convolution4
239Tensor<[1,64,120,160]>,
Tensor<[128,64,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
240Tensor<[1,128,60,80]>,
Tensor<[128,128,4,4]>,
stride: [4, 4]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
241Tensor<[1,512,60,80]>,
Tensor<[512,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 512
ttnn.conv2daten::convolution4
242Tensor<[1,128,60,80]>,
Tensor<[320,128,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
243Tensor<[1,320,30,40]>,
Tensor<[320,320,2,2]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
244Tensor<[1,1280,30,40]>,
Tensor<[1280,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1280
ttnn.conv2daten::convolution4
245Tensor<[1,320,30,40]>,
Tensor<[512,320,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
246Tensor<[1,2048,15,20]>,
Tensor<[2048,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 2048
ttnn.conv2daten::convolution4
247Tensor<[1,512,15,20]>,
Tensor<[64,512,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
248Tensor<[1,320,30,40]>,
Tensor<[64,320,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
249Tensor<[1,128,30,40]>,
Tensor<[64,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
250Tensor<[1,64,30,40]>,
Tensor<[32,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
251Tensor<[1,32,30,40]>,
Tensor<[2,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
252Tensor<[1,128,60,80]>,
Tensor<[64,128,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
253Tensor<[1,128,60,80]>,
Tensor<[64,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
254Tensor<[1,64,60,80]>,
Tensor<[32,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
255Tensor<[1,32,60,80]>,
Tensor<[2,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
256Tensor<[1,128,120,160]>,
Tensor<[64,128,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
257Tensor<[1,64,120,160]>,
Tensor<[32,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
258Tensor<[1,32,120,160]>,
Tensor<[2,32,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
259Tensor<[1,64,480,640]>,
Tensor<[64,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
260Tensor<[1,64,480,640]>,
Tensor<[1,64,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
261Tensor<[1,3,224,224]>,
Tensor<[768,3,16,16]>,
stride: [16, 16]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
262Tensor<[1,3,512,512]>,
Tensor<[32,3,7,7]>,
stride: [4, 4]
pad: [[3, 3], [3, 3]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
263Tensor<[1,32,128,128]>,
Tensor<[32,32,8,8]>,
stride: [8, 8]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
264Tensor<[1,128,128,128]>,
Tensor<[128,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 128
ttnn.conv2daten::convolution4
265Tensor<[1,32,128,128]>,
Tensor<[64,32,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
266Tensor<[1,64,64,64]>,
Tensor<[64,64,4,4]>,
stride: [4, 4]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
267Tensor<[1,256,64,64]>,
Tensor<[256,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 256
ttnn.conv2daten::convolution4
268Tensor<[1,64,64,64]>,
Tensor<[160,64,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
269Tensor<[1,160,32,32]>,
Tensor<[160,160,2,2]>,
stride: [2, 2]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
270Tensor<[1,640,32,32]>,
Tensor<[640,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 640
ttnn.conv2daten::convolution4
271Tensor<[1,160,32,32]>,
Tensor<[256,160,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
272Tensor<[1,1024,16,16]>,
Tensor<[1024,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1024
ttnn.conv2daten::convolution4
273Tensor<[1,1024,128,128]>,
Tensor<[256,1024,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
274Tensor<[1,256,128,128]>,
Tensor<[150,256,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
275Tensor<[1,32,112,112]>,
Tensor<[16,32,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
276Tensor<[1,16,112,112]>,
Tensor<[96,16,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
277Tensor<[1,96,112,112]>,
Tensor<[96,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 96
ttnn.conv2daten::convolution5
278Tensor<[1,96,56,56]>,
Tensor<[24,96,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
279Tensor<[1,24,56,56]>,
Tensor<[144,24,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
280Tensor<[1,144,56,56]>,
Tensor<[144,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 144
ttnn.conv2daten::convolution5
281Tensor<[1,144,56,56]>,
Tensor<[24,144,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
282Tensor<[1,144,56,56]>,
Tensor<[144,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 144
ttnn.conv2daten::convolution5
283Tensor<[1,144,28,28]>,
Tensor<[32,144,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
284Tensor<[1,32,28,28]>,
Tensor<[192,32,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
285Tensor<[1,192,28,28]>,
Tensor<[192,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 192
ttnn.conv2daten::convolution5
286Tensor<[1,192,28,28]>,
Tensor<[32,192,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
287Tensor<[1,192,28,28]>,
Tensor<[192,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 192
ttnn.conv2daten::convolution5
288Tensor<[1,192,14,14]>,
Tensor<[64,192,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
289Tensor<[1,64,14,14]>,
Tensor<[384,64,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
290Tensor<[1,384,14,14]>,
Tensor<[384,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 384
ttnn.conv2daten::convolution5
291Tensor<[1,384,14,14]>,
Tensor<[64,384,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
292Tensor<[1,384,14,14]>,
Tensor<[96,384,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
293Tensor<[1,96,14,14]>,
Tensor<[576,96,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
294Tensor<[1,576,14,14]>,
Tensor<[576,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 576
ttnn.conv2daten::convolution5
295Tensor<[1,576,14,14]>,
Tensor<[96,576,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
296Tensor<[1,576,14,14]>,
Tensor<[576,1,3,3]>,
stride: [2, 2]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 576
ttnn.conv2daten::convolution5
297Tensor<[1,576,7,7]>,
Tensor<[160,576,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
298Tensor<[1,160,7,7]>,
Tensor<[960,160,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
299Tensor<[1,960,7,7]>,
Tensor<[960,1,3,3]>,
stride: [1, 1]
pad: [[1, 1], [1, 1]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 960
ttnn.conv2daten::convolution5
300Tensor<[1,960,7,7]>,
Tensor<[160,960,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
301Tensor<[1,960,7,7]>,
Tensor<[320,960,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
302Tensor<[1,320,7,7]>,
Tensor<[1280,320,1,1]>,
stride: [1, 1]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
303Tensor<[1,3,224,224]>,
Tensor<[768,3,32,32]>,
stride: [32, 32]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution5
304Tensor<[1,3,224,224]>,
Tensor<[1024,3,16,16]>,
stride: [16, 16]
pad: [[0, 0], [0, 0]]
rhs_dilate: [1, 1]
batch_group_count: 1
feature_group_count: 1
ttnn.conv2daten::convolution4
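
Each convolution row above records a single `aten::convolution` call together with the `stablehlo.convolution` attributes it was lowered with (stride, padding, rhs_dilate, batch/feature group counts), the ttnn op it maps to, and its status. As a minimal sketch of how such a variation arises (the module below is hypothetical and purely illustrative), the shapes and attributes of row 88 (input `Tensor<[1,3,224,224]>`, weight `Tensor<[64,3,7,7]>`, stride `[2, 2]`, pad `[[3, 3], [3, 3]]`) correspond to a common 7x7/stride-2 stem convolution:

```python
import torch
import torch.nn as nn

# Illustrative only: reproduces the shapes/attributes of row 88 above.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7,
                 stride=2, padding=3, bias=False)
x = torch.randn(1, 3, 224, 224)   # input Tensor<[1,3,224,224]>
y = conv(x)                       # weight Tensor<[64,3,7,7]>, stride [2,2], pad [[3,3],[3,3]]
print(y.shape)                    # torch.Size([1, 64, 112, 112])
```

When a model containing such a layer is compiled, the call is lowered through `stablehlo.convolution` and mapped to `ttnn.conv2d`, which is what the row records.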

stablehlo.cosine::ttnn.cos

|   | STABLE HLO Input Variations | ttnn op  | Torch Name | Status |
|---|-----------------------------|----------|------------|--------|
| 0 | Tensor<[1,32,128]>          | ttnn.cos | aten::cos  | 4      |
| 1 | Tensor<[1,23,40,64]>        | ttnn.cos | aten::cos  | 4      |
| 2 | Tensor<[1,160]>             | ttnn.cos | aten::cos  | 4      |
| 3 | Tensor<[1,7,64]>            | ttnn.cos | aten::cos  | 4      |
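
For an elementwise op like this one, the input variation is just the operand shape. A one-line sketch using the shape from row 0 (purely illustrative):

```python
import torch

# Illustrative only: elementwise cosine on the shape from row 0 above.
x = torch.randn(1, 32, 128)
y = torch.cos(x)  # lowers to stablehlo.cosine and maps to ttnn.cos
```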

stablehlo.divide::ttnn.div

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,32,32]>,
Tensor<[1,32,32,32]>,
ttnn.divaten::_safe_softmax4
1Scalar,
Scalar,
ttnn.divaten::arange4
2Tensor<[1,32,1]>,
Tensor<[1,32,1]>,
ttnn.divaten::mean.dim4
3Tensor<[1,12,7,7]>,
Tensor<[1,12,7,7]>,
ttnn.divaten::_safe_softmax4
4Tensor<[32]>,
Tensor<[32]>,
ttnn.divaten::reciprocal5
5Tensor<[64]>,
Tensor<[64]>,
ttnn.divaten::reciprocal5
6Tensor<[128]>,
Tensor<[128]>,
ttnn.divaten::reciprocal5
7Tensor<[256]>,
Tensor<[256]>,
ttnn.divaten::reciprocal5
8Tensor<[512]>,
Tensor<[512]>,
ttnn.divaten::reciprocal5
9Tensor<[1,1024,512]>,
Tensor<[1,1024,512]>,
ttnn.divaten::gelu4
10Tensor<[1,256,256]>,
Tensor<[1,256,256]>,
ttnn.divaten::gelu4
11Tensor<[1,512]>,
Tensor<[1,512]>,
ttnn.divaten::mean.dim4
12Tensor<[8,920,920]>,
Tensor<[8,920,920]>,
ttnn.divaten::_softmax4
13Tensor<[8,100,100]>,
Tensor<[8,100,100]>,
ttnn.divaten::_softmax4
14Tensor<[8,100,920]>,
Tensor<[8,100,920]>,
ttnn.divaten::_softmax4
15Tensor<[1,23,40]>,
Tensor<[1,23,40]>,
ttnn.divaten::div.Tensor4
16Tensor<[1,23,40,128]>,
Tensor<[1,23,40,128]>,
ttnn.divaten::div.Tensor4
17Tensor<[1,12,10,10]>,
Tensor<[1,12,10,10]>,
ttnn.divaten::_safe_softmax4
18Tensor<[1,10,3072]>,
Tensor<[1,10,3072]>,
ttnn.divaten::gelu4
19Tensor<[1,10,768]>,
Tensor<[1,10,768]>,
ttnn.divaten::gelu4
20Tensor<[1,8,4096,4096]>,
Tensor<[1,8,4096,4096]>,
ttnn.divaten::_safe_softmax4
21Tensor<[1,8,4096,9]>,
Tensor<[1,8,4096,9]>,
ttnn.divaten::_safe_softmax4
22Tensor<[1,8,1024,1024]>,
Tensor<[1,8,1024,1024]>,
ttnn.divaten::_safe_softmax4
23Tensor<[1,8,1024,9]>,
Tensor<[1,8,1024,9]>,
ttnn.divaten::_safe_softmax4
24Tensor<[1,8,256,256]>,
Tensor<[1,8,256,256]>,
ttnn.divaten::_safe_softmax4
25Tensor<[1,8,256,9]>,
Tensor<[1,8,256,9]>,
ttnn.divaten::_safe_softmax4
26Tensor<[1,8,64,64]>,
Tensor<[1,8,64,64]>,
ttnn.divaten::_safe_softmax4
27Tensor<[1,8,64,9]>,
Tensor<[1,8,64,9]>,
ttnn.divaten::_safe_softmax4
28Tensor<[160]>,
Tensor<[160]>,
ttnn.divaten::div.Tensor4
29Tensor<[1,320,64,64]>,
Tensor<[1,320,64,64]>,
ttnn.divaten::div.Tensor4
30Tensor<[1,4096,320]>,
Tensor<[1,4096,320]>,
ttnn.divaten::div.Tensor4
31Tensor<[1,640,32,32]>,
Tensor<[1,640,32,32]>,
ttnn.divaten::div.Tensor4
32Tensor<[1,1024,640]>,
Tensor<[1,1024,640]>,
ttnn.divaten::div.Tensor4
33Tensor<[1,1280,16,16]>,
Tensor<[1,1280,16,16]>,
ttnn.divaten::div.Tensor4
34Tensor<[1,256,1280]>,
Tensor<[1,256,1280]>,
ttnn.divaten::div.Tensor4
35Tensor<[1,1280,8,8]>,
Tensor<[1,1280,8,8]>,
ttnn.divaten::div.Tensor4
36Tensor<[1,64,1280]>,
Tensor<[1,64,1280]>,
ttnn.divaten::div.Tensor4
37Tensor<[1,4096,1280]>,
Tensor<[1,4096,1280]>,
ttnn.divaten::gelu4
38Tensor<[1,1024,2560]>,
Tensor<[1,1024,2560]>,
ttnn.divaten::gelu4
39Tensor<[1,256,5120]>,
Tensor<[1,256,5120]>,
ttnn.divaten::gelu4
40Tensor<[1,64,5120]>,
Tensor<[1,64,5120]>,
ttnn.divaten::gelu4
41Tensor<[1,12,25,25]>,
Tensor<[1,12,25,25]>,
ttnn.divaten::_safe_softmax4
42Tensor<[1,25,3072]>,
Tensor<[1,25,3072]>,
ttnn.divaten::gelu4
43Tensor<[1,3,1445,1445]>,
Tensor<[1,3,1445,1445]>,
ttnn.divaten::_safe_softmax4
44Tensor<[1,1445,768]>,
Tensor<[1,1445,768]>,
ttnn.divaten::gelu4
45Tensor<[1,512,1,1]>,
Tensor<[1,512,1,1]>,
ttnn.divaten::mean.dim4
46Tensor<[1,12,8,8]>,
Tensor<[1,12,8,8]>,
ttnn.divaten::_softmax4
47Tensor<[1,3072,8]>,
Tensor<[1,3072,8]>,
ttnn.divaten::gelu4
48Tensor<[1,8,256,2048]>,
Tensor<[1,8,256,2048]>,
ttnn.divaten::_softmax4
49Tensor<[1,8,2048,256]>,
Tensor<[1,8,2048,256]>,
ttnn.divaten::_softmax4
50Tensor<[1,2048,768]>,
Tensor<[1,2048,768]>,
ttnn.divaten::gelu4
51Tensor<[1,2048,1,1]>,
Tensor<[1,2048,1,1]>,
ttnn.divaten::mean.dim4
52Tensor<[1024]>,
Tensor<[1024]>,
ttnn.divaten::reciprocal5
53Tensor<[2048]>,
Tensor<[2048]>,
ttnn.divaten::reciprocal5
54Tensor<[1,12,201,201]>,
Tensor<[1,12,201,201]>,
ttnn.divaten::_softmax4
55Tensor<[1,201,3072]>,
Tensor<[1,201,3072]>,
ttnn.divaten::gelu4
56Tensor<[1,1536]>,
Tensor<[1,1536]>,
ttnn.divaten::gelu4
57Tensor<[16,19,19]>,
Tensor<[16,19,19]>,
ttnn.divaten::_softmax4
58Tensor<[19]>,
Tensor<[19]>,
ttnn.divaten::floor_divide4
59Tensor<[1,19,4096]>,
Tensor<[1,19,4096]>,
ttnn.divaten::gelu4
60Tensor<[1,1024,1,1]>,
Tensor<[1,1024,1,1]>,
ttnn.divaten::mean.dim4
61Tensor<[14]>,
Tensor<[14]>,
ttnn.divaten::reciprocal5
62Tensor<[24]>,
Tensor<[24]>,
ttnn.divaten::reciprocal5
63Tensor<[40]>,
Tensor<[40]>,
ttnn.divaten::reciprocal5
64Tensor<[68]>,
Tensor<[68]>,
ttnn.divaten::reciprocal5
65Tensor<[16]>,
Tensor<[16]>,
ttnn.divaten::reciprocal5
66Tensor<[28]>,
Tensor<[28]>,
ttnn.divaten::reciprocal5
67Tensor<[46]>,
Tensor<[46]>,
ttnn.divaten::reciprocal5
68Tensor<[78]>,
Tensor<[78]>,
ttnn.divaten::reciprocal5
69Tensor<[134]>,
Tensor<[134]>,
ttnn.divaten::reciprocal5
70Tensor<[20]>,
Tensor<[20]>,
ttnn.divaten::reciprocal5
71Tensor<[34]>,
Tensor<[34]>,
ttnn.divaten::reciprocal5
72Tensor<[58]>,
Tensor<[58]>,
ttnn.divaten::reciprocal5
73Tensor<[98]>,
Tensor<[98]>,
ttnn.divaten::reciprocal5
74Tensor<[168]>,
Tensor<[168]>,
ttnn.divaten::reciprocal5
75Tensor<[320]>,
Tensor<[320]>,
ttnn.divaten::reciprocal5
76Tensor<[116]>,
Tensor<[116]>,
ttnn.divaten::reciprocal5
77Tensor<[196]>,
Tensor<[196]>,
ttnn.divaten::reciprocal5
78Tensor<[334]>,
Tensor<[334]>,
ttnn.divaten::reciprocal5
79Tensor<[640]>,
Tensor<[640]>,
ttnn.divaten::reciprocal5
80Tensor<[272]>,
Tensor<[272]>,
ttnn.divaten::reciprocal5
81Tensor<[462]>,
Tensor<[462]>,
ttnn.divaten::reciprocal5
82Tensor<[1,16,32,32]>,
Tensor<[1,16,32,32]>,
ttnn.divaten::_softmax4
83Tensor<[1,12,16,16]>,
Tensor<[1,12,16,16]>,
ttnn.divaten::_safe_softmax4
84Tensor<[1,16,3072]>,
Tensor<[1,16,3072]>,
ttnn.divaten::gelu4
85Tensor<[1,1,19200,300]>,
Tensor<[1,1,19200,300]>,
ttnn.divaten::_softmax4
86Tensor<[1,2,4800,300]>,
Tensor<[1,2,4800,300]>,
ttnn.divaten::_softmax4
87Tensor<[1,5,1200,300]>,
Tensor<[1,5,1200,300]>,
ttnn.divaten::_softmax4
88Tensor<[1,8,300,300]>,
Tensor<[1,8,300,300]>,
ttnn.divaten::_softmax4
89Tensor<[1,19200,256]>,
Tensor<[1,19200,256]>,
ttnn.divaten::gelu4
90Tensor<[1,4800,512]>,
Tensor<[1,4800,512]>,
ttnn.divaten::gelu4
91Tensor<[1,1200,1280]>,
Tensor<[1,1200,1280]>,
ttnn.divaten::gelu4
92Tensor<[1,300,2048]>,
Tensor<[1,300,2048]>,
ttnn.divaten::gelu4
93Tensor<[1,12,197,197]>,
Tensor<[1,12,197,197]>,
ttnn.divaten::_safe_softmax4
94Tensor<[1,197,3072]>,
Tensor<[1,197,3072]>,
ttnn.divaten::gelu4
95Tensor<[1,1,16384,256]>,
Tensor<[1,1,16384,256]>,
ttnn.divaten::_softmax4
96Tensor<[1,2,4096,256]>,
Tensor<[1,2,4096,256]>,
ttnn.divaten::_softmax4
97Tensor<[1,5,1024,256]>,
Tensor<[1,5,1024,256]>,
ttnn.divaten::_softmax4
98Tensor<[1,16384,128]>,
Tensor<[1,16384,128]>,
ttnn.divaten::gelu4
99Tensor<[1,4096,256]>,
Tensor<[1,4096,256]>,
ttnn.divaten::gelu4
100Tensor<[1,256,1024]>,
Tensor<[1,256,1024]>,
ttnn.divaten::gelu4
101Tensor<[1,71,7,7]>,
Tensor<[1,71,7,7]>,
ttnn.divaten::_safe_softmax4
102Tensor<[1,7,18176]>,
Tensor<[1,7,18176]>,
ttnn.divaten::gelu4
103Tensor<[1,1280,1,1]>,
Tensor<[1,1280,1,1]>,
ttnn.divaten::mean.dim4
104Tensor<[96]>,
Tensor<[96]>,
ttnn.divaten::reciprocal5
105Tensor<[144]>,
Tensor<[144]>,
ttnn.divaten::reciprocal5
106Tensor<[192]>,
Tensor<[192]>,
ttnn.divaten::reciprocal5
107Tensor<[384]>,
Tensor<[384]>,
ttnn.divaten::reciprocal5
108Tensor<[576]>,
Tensor<[576]>,
ttnn.divaten::reciprocal5
109Tensor<[960]>,
Tensor<[960]>,
ttnn.divaten::reciprocal5
110Tensor<[1280]>,
Tensor<[1280]>,
ttnn.divaten::reciprocal5
111Tensor<[1,12,12,12]>,
Tensor<[1,12,12,12]>,
ttnn.divaten::_safe_softmax4
112Tensor<[1,12,9,9]>,
Tensor<[1,12,9,9]>,
ttnn.divaten::_safe_softmax4
113Tensor<[1,16,9,9]>,
Tensor<[1,16,9,9]>,
ttnn.divaten::_safe_softmax4
114Tensor<[1,64,9,9]>,
Tensor<[1,64,9,9]>,
ttnn.divaten::_safe_softmax4
115Tensor<[1,12,14,14]>,
Tensor<[1,12,14,14]>,
ttnn.divaten::_safe_softmax4
116Tensor<[1,12,50,50]>,
Tensor<[1,12,50,50]>,
ttnn.divaten::_safe_softmax4
117Tensor<[2,8,7,7]>,
Tensor<[2,8,7,7]>,
ttnn.divaten::_safe_softmax4
118Tensor<[2,512]>,
Tensor<[2,512]>,
ttnn.divaten::div.Tensor4
119Tensor<[1,16,197,197]>,
Tensor<[1,16,197,197]>,
ttnn.divaten::_softmax4
120Tensor<[197]>,
Tensor<[197]>,
ttnn.divaten::floor_divide4
121Tensor<[1,197,4096]>,
Tensor<[1,197,4096]>,
ttnn.divaten::gelu4
122Tensor<[1,1024]>,
Tensor<[1,1024]>,
ttnn.divaten::mean.dim4
123Tensor<[1,768]>,
Tensor<[1,768]>,
ttnn.divaten::mean.dim4
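
Note that many of the Torch names above are not explicit divisions: `aten::_softmax`, `aten::_safe_softmax`, `aten::gelu`, `aten::mean.dim`, and `aten::reciprocal` all contain a division somewhere in their decomposition, which is why they surface as `stablehlo.divide` / `ttnn.div` rows. As a small, hedged sketch, the shapes in row 12 (two `Tensor<[8,920,920]>` operands, Torch name `aten::_softmax`) can come from an ordinary softmax:

```python
import torch
import torch.nn.functional as F

# Illustrative only: the divide recorded in row 12 above comes from softmax's
# final normalisation step, exp(x) / sum(exp(x)), not from an explicit
# torch.div in the model source.
scores = torch.randn(8, 920, 920)
probs = F.softmax(scores, dim=-1)
```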

stablehlo.dot_general::ttnn.matmul

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,64,1]>,
Tensor<[1,1,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
1Tensor<[32,32,128]>,
Tensor<[32,128,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
2Tensor<[32,32,32]>,
Tensor<[32,32,128]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
3Tensor<[32,4096]>,
Tensor<[4096,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
4Tensor<[32,4096]>,
Tensor<[4096,11008]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
5Tensor<[32,11008]>,
Tensor<[11008,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
6Tensor<[32,4096]>,
Tensor<[4096,32000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
7Tensor<[12,7,64]>,
Tensor<[12,64,7]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
8Tensor<[12,7,7]>,
Tensor<[12,7,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
9Tensor<[7,768]>,
Tensor<[768,2304]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
10Tensor<[7,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
11Tensor<[7,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
12Tensor<[7,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
13Tensor<[7,768]>,
Tensor<[768,2]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
14Tensor<[256,768]>,
Tensor<[768,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
15Tensor<[256,512]>,
Tensor<[512,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
16Tensor<[256,256]>,
Tensor<[256,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
17Tensor<[1,512]>,
Tensor<[512,1000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
18Tensor<[8,920,32]>,
Tensor<[8,32,920]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::baddbmm4
19Tensor<[8,100,32]>,
Tensor<[8,32,920]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::baddbmm4
20Tensor<[920,1,256]>,
Tensor<[920,256,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
21Tensor<[8,920,920]>,
Tensor<[8,920,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
22Tensor<[8,100,32]>,
Tensor<[8,32,100]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
23Tensor<[8,100,100]>,
Tensor<[8,100,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
24Tensor<[8,100,920]>,
Tensor<[8,920,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
25Tensor<[6,100,256]>,
Tensor<[6,256,92]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
26Tensor<[6,100,256]>,
Tensor<[6,256,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
27Tensor<[920,256]>,
Tensor<[256,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
28Tensor<[920,256]>,
Tensor<[256,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
29Tensor<[920,2048]>,
Tensor<[2048,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
30Tensor<[100,256]>,
Tensor<[256,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
31Tensor<[100,256]>,
Tensor<[256,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
32Tensor<[100,2048]>,
Tensor<[2048,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
33Tensor<[600,256]>,
Tensor<[256,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
34Tensor<[600,256]>,
Tensor<[256,4]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
35Tensor<[12,10,64]>,
Tensor<[12,64,10]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
36Tensor<[12,10,10]>,
Tensor<[12,10,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
37Tensor<[10,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
38Tensor<[10,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
39Tensor<[10,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
40Tensor<[10,768]>,
Tensor<[768,250002]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
41Tensor<[8,4096,40]>,
Tensor<[8,40,4096]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
42Tensor<[8,4096,4096]>,
Tensor<[8,4096,40]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
43Tensor<[8,4096,40]>,
Tensor<[8,40,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
44Tensor<[8,4096,9]>,
Tensor<[8,9,40]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
45Tensor<[8,1024,80]>,
Tensor<[8,80,1024]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
46Tensor<[8,1024,1024]>,
Tensor<[8,1024,80]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
47Tensor<[8,1024,80]>,
Tensor<[8,80,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
48Tensor<[8,1024,9]>,
Tensor<[8,9,80]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
49Tensor<[8,256,160]>,
Tensor<[8,160,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
50Tensor<[8,256,256]>,
Tensor<[8,256,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
51Tensor<[8,256,160]>,
Tensor<[8,160,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
52Tensor<[8,256,9]>,
Tensor<[8,9,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
53Tensor<[8,64,160]>,
Tensor<[8,160,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
54Tensor<[8,64,64]>,
Tensor<[8,64,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
55Tensor<[8,64,160]>,
Tensor<[8,160,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
56Tensor<[8,64,9]>,
Tensor<[8,9,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
57Tensor<[1,320]>,
Tensor<[320,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
58Tensor<[1,1280]>,
Tensor<[1280,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
59Tensor<[1,1280]>,
Tensor<[1280,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
60Tensor<[4096,320]>,
Tensor<[320,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
61Tensor<[9,768]>,
Tensor<[768,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
62Tensor<[4096,320]>,
Tensor<[320,2560]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
63Tensor<[4096,1280]>,
Tensor<[1280,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
64Tensor<[1,1280]>,
Tensor<[1280,640]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
65Tensor<[1024,640]>,
Tensor<[640,640]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
66Tensor<[9,768]>,
Tensor<[768,640]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
67Tensor<[1024,640]>,
Tensor<[640,5120]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
68Tensor<[1024,2560]>,
Tensor<[2560,640]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
69Tensor<[256,1280]>,
Tensor<[1280,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
70Tensor<[9,768]>,
Tensor<[768,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
71Tensor<[256,1280]>,
Tensor<[1280,10240]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
72Tensor<[256,5120]>,
Tensor<[5120,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
73Tensor<[64,1280]>,
Tensor<[1280,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
74Tensor<[64,1280]>,
Tensor<[1280,10240]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
75Tensor<[64,5120]>,
Tensor<[5120,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
76Tensor<[12,25,64]>,
Tensor<[12,64,25]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
77Tensor<[12,25,25]>,
Tensor<[12,25,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
78Tensor<[25,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
79Tensor<[25,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
80Tensor<[25,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
81Tensor<[25,768]>,
Tensor<[768,2]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
82Tensor<[1,768]>,
Tensor<[768,1]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
83Tensor<[3,1445,64]>,
Tensor<[3,64,1445]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
84Tensor<[3,1445,1445]>,
Tensor<[3,1445,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
85Tensor<[1445,192]>,
Tensor<[192,192]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
86Tensor<[1445,192]>,
Tensor<[192,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
87Tensor<[1445,768]>,
Tensor<[768,192]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
88Tensor<[100,192]>,
Tensor<[192,192]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
89Tensor<[100,192]>,
Tensor<[192,92]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
90Tensor<[100,192]>,
Tensor<[192,4]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
91Tensor<[12,8,64]>,
Tensor<[12,64,8]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
92Tensor<[12,8,8]>,
Tensor<[12,8,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
93Tensor<[1,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
94Tensor<[1,768]>,
Tensor<[768,3]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
95Tensor<[8,256,32]>,
Tensor<[8,32,2048]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
96Tensor<[8,256,2048]>,
Tensor<[8,2048,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
97Tensor<[8,256,32]>,
Tensor<[8,32,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
98Tensor<[8,2048,32]>,
Tensor<[8,32,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
99Tensor<[8,2048,256]>,
Tensor<[8,256,96]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
100Tensor<[256,1280]>,
Tensor<[1280,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
101Tensor<[2048,768]>,
Tensor<[768,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
102Tensor<[2048,768]>,
Tensor<[768,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
103Tensor<[256,1280]>,
Tensor<[1280,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
104Tensor<[2048,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
105Tensor<[2048,768]>,
Tensor<[768,262]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
106Tensor<[1,2048]>,
Tensor<[2048,1000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
107Tensor<[12,201,64]>,
Tensor<[12,64,201]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
108Tensor<[12,201,201]>,
Tensor<[12,201,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
109Tensor<[201,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
110Tensor<[201,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
111Tensor<[201,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
112Tensor<[1,768]>,
Tensor<[768,1536]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
113Tensor<[1,1536]>,
Tensor<[1536,3129]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
114Tensor<[1,9216]>,
Tensor<[9216,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
115Tensor<[1,128]>,
Tensor<[128,10]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
116Tensor<[16,19,64]>,
Tensor<[16,64,19]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
117Tensor<[16,19,19]>,
Tensor<[16,19,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
118Tensor<[19,1024]>,
Tensor<[1024,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
119Tensor<[19,1024]>,
Tensor<[1024,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
120Tensor<[19,4096]>,
Tensor<[4096,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
121Tensor<[19,1024]>,
Tensor<[1024,256008]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
122Tensor<[1,1024]>,
Tensor<[1024,1000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
123Tensor<[16,32,96]>,
Tensor<[16,96,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::baddbmm4
124Tensor<[16,32,32]>,
Tensor<[16,32,96]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
125Tensor<[32,1536]>,
Tensor<[1536,4608]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
126Tensor<[32,1536]>,
Tensor<[1536,1536]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
127Tensor<[32,1536]>,
Tensor<[1536,6144]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
128Tensor<[32,6144]>,
Tensor<[6144,1536]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
129Tensor<[32,1536]>,
Tensor<[1536,250880]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
130Tensor<[12,16,64]>,
Tensor<[12,64,16]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
131Tensor<[12,16,16]>,
Tensor<[12,16,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
132Tensor<[16,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
133Tensor<[16,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
134Tensor<[16,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
135Tensor<[1,19200,64]>,
Tensor<[1,64,300]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
136Tensor<[1,19200,300]>,
Tensor<[1,300,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
137Tensor<[1,19200,256]>,
Tensor<[1,256,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
138Tensor<[2,4800,64]>,
Tensor<[2,64,300]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
139Tensor<[2,4800,300]>,
Tensor<[2,300,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
140Tensor<[1,4800,512]>,
Tensor<[1,512,128]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
141Tensor<[5,1200,64]>,
Tensor<[5,64,300]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
142Tensor<[5,1200,300]>,
Tensor<[5,300,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
143Tensor<[1,1200,1280]>,
Tensor<[1,1280,320]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
144Tensor<[8,300,64]>,
Tensor<[8,64,300]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
145Tensor<[8,300,300]>,
Tensor<[8,300,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
146Tensor<[1,300,2048]>,
Tensor<[1,2048,512]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
147Tensor<[19200,64]>,
Tensor<[64,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
148Tensor<[300,64]>,
Tensor<[64,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
149Tensor<[19200,64]>,
Tensor<[64,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
150Tensor<[4800,128]>,
Tensor<[128,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
151Tensor<[300,128]>,
Tensor<[128,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
152Tensor<[4800,128]>,
Tensor<[128,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
153Tensor<[1200,320]>,
Tensor<[320,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
154Tensor<[300,320]>,
Tensor<[320,320]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
155Tensor<[1200,320]>,
Tensor<[320,1280]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
156Tensor<[300,512]>,
Tensor<[512,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
157Tensor<[300,512]>,
Tensor<[512,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
158Tensor<[12,197,64]>,
Tensor<[12,64,197]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
159Tensor<[12,197,197]>,
Tensor<[12,197,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
160Tensor<[197,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
161Tensor<[197,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
162Tensor<[197,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
163Tensor<[1,768]>,
Tensor<[768,1000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
164Tensor<[1,16384,32]>,
Tensor<[1,32,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
165Tensor<[1,16384,256]>,
Tensor<[1,256,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
166Tensor<[1,16384,128]>,
Tensor<[1,128,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
167Tensor<[2,4096,32]>,
Tensor<[2,32,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
168Tensor<[2,4096,256]>,
Tensor<[2,256,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
169Tensor<[1,4096,256]>,
Tensor<[1,256,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
170Tensor<[5,1024,32]>,
Tensor<[5,32,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
171Tensor<[5,1024,256]>,
Tensor<[5,256,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
172Tensor<[1,1024,640]>,
Tensor<[1,640,160]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
173Tensor<[8,256,256]>,
Tensor<[8,256,32]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
174Tensor<[1,256,1024]>,
Tensor<[1,1024,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
175Tensor<[1,4096,64]>,
Tensor<[1,64,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
176Tensor<[1,1024,160]>,
Tensor<[1,160,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
177Tensor<[1,256,256]>,
Tensor<[1,256,256]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
178Tensor<[16384,32]>,
Tensor<[32,32]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
179Tensor<[256,32]>,
Tensor<[32,32]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
180Tensor<[16384,32]>,
Tensor<[32,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
181Tensor<[4096,64]>,
Tensor<[64,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
182Tensor<[256,64]>,
Tensor<[64,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
183Tensor<[4096,64]>,
Tensor<[64,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
184Tensor<[1024,160]>,
Tensor<[160,160]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
185Tensor<[256,160]>,
Tensor<[160,160]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
186Tensor<[1024,160]>,
Tensor<[160,640]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
187Tensor<[256,256]>,
Tensor<[256,256]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
188Tensor<[256,256]>,
Tensor<[256,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
189Tensor<[1,32,1]>,
Tensor<[1,1,7]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
190Tensor<[71,7,64]>,
Tensor<[71,64,7]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
191Tensor<[71,7,7]>,
Tensor<[71,7,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
192Tensor<[7,4544]>,
Tensor<[4544,4672]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
193Tensor<[7,4544]>,
Tensor<[4544,4544]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
194Tensor<[7,4544]>,
Tensor<[4544,18176]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
195Tensor<[7,18176]>,
Tensor<[18176,4544]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
196Tensor<[7,4544]>,
Tensor<[4544,65024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
197Tensor<[1,1280]>,
Tensor<[1280,1000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
198Tensor<[12,12,64]>,
Tensor<[12,64,12]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
199Tensor<[12,12,12]>,
Tensor<[12,12,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
200Tensor<[12,128]>,
Tensor<[128,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
201Tensor<[12,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
202Tensor<[12,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
203Tensor<[12,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
204Tensor<[12,768]>,
Tensor<[768,2]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
205Tensor<[12,9,64]>,
Tensor<[12,64,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
206Tensor<[12,9,9]>,
Tensor<[12,9,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
207Tensor<[9,128]>,
Tensor<[128,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
208Tensor<[9,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
209Tensor<[9,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
210Tensor<[9,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
211Tensor<[9,768]>,
Tensor<[768,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
212Tensor<[9,128]>,
Tensor<[128,30000]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
213Tensor<[16,9,128]>,
Tensor<[16,128,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
214Tensor<[16,9,9]>,
Tensor<[16,9,128]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
215Tensor<[9,128]>,
Tensor<[128,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
216Tensor<[9,2048]>,
Tensor<[2048,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
217Tensor<[9,2048]>,
Tensor<[2048,8192]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
218Tensor<[9,8192]>,
Tensor<[8192,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
219Tensor<[9,2048]>,
Tensor<[2048,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
220Tensor<[16,9,64]>,
Tensor<[16,64,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
221Tensor<[16,9,9]>,
Tensor<[16,9,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
222Tensor<[9,128]>,
Tensor<[128,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
223Tensor<[9,1024]>,
Tensor<[1024,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
224Tensor<[9,1024]>,
Tensor<[1024,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
225Tensor<[9,4096]>,
Tensor<[4096,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
226Tensor<[9,1024]>,
Tensor<[1024,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
227Tensor<[64,9,64]>,
Tensor<[64,64,9]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
228Tensor<[64,9,9]>,
Tensor<[64,9,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
229Tensor<[9,128]>,
Tensor<[128,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
230Tensor<[9,4096]>,
Tensor<[4096,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
231Tensor<[9,4096]>,
Tensor<[4096,16384]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
232Tensor<[9,16384]>,
Tensor<[16384,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
233Tensor<[9,4096]>,
Tensor<[4096,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
234Tensor<[1,768]>,
Tensor<[768,2]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
235Tensor<[12,14,64]>,
Tensor<[12,64,14]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
236Tensor<[12,14,14]>,
Tensor<[12,14,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
237Tensor<[14,128]>,
Tensor<[128,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
238Tensor<[14,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
239Tensor<[14,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
240Tensor<[14,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
241Tensor<[14,768]>,
Tensor<[768,2]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
242Tensor<[12,50,64]>,
Tensor<[12,64,50]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
243Tensor<[12,50,50]>,
Tensor<[12,50,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
244Tensor<[16,7,64]>,
Tensor<[16,64,7]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
245Tensor<[16,7,7]>,
Tensor<[16,7,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
246Tensor<[50,768]>,
Tensor<[768,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
247Tensor<[50,768]>,
Tensor<[768,3072]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
248Tensor<[50,3072]>,
Tensor<[3072,768]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
249Tensor<[14,512]>,
Tensor<[512,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
250Tensor<[14,512]>,
Tensor<[512,2048]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
251Tensor<[14,2048]>,
Tensor<[2048,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
252Tensor<[1,768]>,
Tensor<[768,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
253Tensor<[2,512]>,
Tensor<[512,512]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
254Tensor<[2,512]>,
Tensor<[512,1]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
255Tensor<[16,197,64]>,
Tensor<[16,64,197]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
256Tensor<[16,197,197]>,
Tensor<[16,197,64]>,
batching_dims: [0] x [0]
contracting_dims: [2] x [1]
ttnn.matmulaten::bmm4
257Tensor<[197,1024]>,
Tensor<[1024,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
258Tensor<[197,1024]>,
Tensor<[1024,4096]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
259Tensor<[197,4096]>,
Tensor<[4096,1024]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
260Tensor<[1,784]>,
Tensor<[784,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
261Tensor<[1,128]>,
Tensor<[128,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
262Tensor<[1,64]>,
Tensor<[64,12]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
263Tensor<[1,12]>,
Tensor<[12,3]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
264Tensor<[1,3]>,
Tensor<[3,12]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
265Tensor<[1,12]>,
Tensor<[12,64]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
266Tensor<[1,64]>,
Tensor<[64,128]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
267Tensor<[1,128]>,
Tensor<[128,784]>,
contracting_dims: [1] x [0]
ttnn.matmulaten::mm5
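
The rows above all stem from a handful of aten matmul ops. As a rough, illustrative sketch (not code from this repository), the snippet below reproduces the three call patterns behind this table: torch.mm (aten::mm), torch.bmm (aten::bmm), and torch.baddbmm (aten::baddbmm). Under torch.compile with the tt-torch backend these lower to stablehlo.dot_general, with the contracting/batching dims listed above, and are mapped to ttnn.matmul.

import torch

# 2-D matmul -> aten::mm (variation 38 above: [10, 768] x [768, 3072]).
a = torch.randn(10, 768)
b = torch.randn(768, 3072)
mm_out = torch.mm(a, b)

# Batched matmul -> aten::bmm (variation 35 above: [12, 10, 64] x [12, 64, 10]).
q = torch.randn(12, 10, 64)
k = torch.randn(12, 64, 10)
bmm_out = torch.bmm(q, k)

# Bias + batched matmul -> aten::baddbmm (variation 18 above).
bias = torch.randn(8, 920, 920)
q2 = torch.randn(8, 920, 32)
k2 = torch.randn(8, 32, 920)
baddbmm_out = torch.baddbmm(bias, q2, k2)

print(mm_out.shape, bmm_out.shape, baddbmm_out.shape)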

stablehlo.dynamic_iota::ttnn.arange

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1]>,
dim: 0
ttnn.arangeaten::arange4

stablehlo.exponential::ttnn.exp

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,32,32]>,
ttnn.expaten::_safe_softmax4
1Tensor<[1,12,7,7]>,
ttnn.expaten::_safe_softmax4
2Tensor<[1,128,28,28]>,
ttnn.expaten::elu4
3Tensor<[8,920,920]>,
ttnn.expaten::_softmax4
4Tensor<[8,100,100]>,
ttnn.expaten::_softmax4
5Tensor<[8,100,920]>,
ttnn.expaten::_softmax4
6Tensor<[1,12,10,10]>,
ttnn.expaten::_safe_softmax4
7Tensor<[1,8,4096,4096]>,
ttnn.expaten::_safe_softmax4
8Tensor<[1,8,4096,9]>,
ttnn.expaten::_safe_softmax4
9Tensor<[1,8,1024,1024]>,
ttnn.expaten::_safe_softmax4
10Tensor<[1,8,1024,9]>,
ttnn.expaten::_safe_softmax4
11Tensor<[1,8,256,256]>,
ttnn.expaten::_safe_softmax4
12Tensor<[1,8,256,9]>,
ttnn.expaten::_safe_softmax4
13Tensor<[1,8,64,64]>,
ttnn.expaten::_safe_softmax4
14Tensor<[1,8,64,9]>,
ttnn.expaten::_safe_softmax4
15Tensor<[160]>,
ttnn.expaten::exp5
16Tensor<[1,12,25,25]>,
ttnn.expaten::_safe_softmax4
17Tensor<[1,3,1445,1445]>,
ttnn.expaten::_safe_softmax4
18Tensor<[1,12,8,8]>,
ttnn.expaten::_softmax4
19Tensor<[1,8,256,2048]>,
ttnn.expaten::_softmax4
20Tensor<[1,8,2048,256]>,
ttnn.expaten::_softmax4
21Tensor<[1,12,201,201]>,
ttnn.expaten::_softmax4
22Tensor<[1,10]>,
ttnn.expaten::exp5
23Tensor<[16,19,19]>,
ttnn.expaten::_softmax4
24Tensor<[19,256008]>,
ttnn.expaten::exp5
25Tensor<[1,16,32,32]>,
ttnn.expaten::_softmax4
26Tensor<[1,12,16,16]>,
ttnn.expaten::_safe_softmax4
27Tensor<[1,1,19200,300]>,
ttnn.expaten::_softmax4
28Tensor<[1,2,4800,300]>,
ttnn.expaten::_softmax4
29Tensor<[1,5,1200,300]>,
ttnn.expaten::_softmax4
30Tensor<[1,8,300,300]>,
ttnn.expaten::_softmax4
31Tensor<[1,12,197,197]>,
ttnn.expaten::_safe_softmax4
32Tensor<[1,1,16384,256]>,
ttnn.expaten::_softmax4
33Tensor<[1,2,4096,256]>,
ttnn.expaten::_softmax4
34Tensor<[1,5,1024,256]>,
ttnn.expaten::_softmax4
35Tensor<[1,71,7,7]>,
ttnn.expaten::_safe_softmax4
36Tensor<[1,12,12,12]>,
ttnn.expaten::_safe_softmax4
37Tensor<[1,12,9,9]>,
ttnn.expaten::_safe_softmax4
38Tensor<[1,16,9,9]>,
ttnn.expaten::_safe_softmax4
39Tensor<[1,64,9,9]>,
ttnn.expaten::_safe_softmax4
40Tensor<[1,12,14,14]>,
ttnn.expaten::_safe_softmax4
41Tensor<[1,12,50,50]>,
ttnn.expaten::_safe_softmax4
42Tensor<[2,8,7,7]>,
ttnn.expaten::_safe_softmax4
43Scalar,
ttnn.expaten::exp5
44Tensor<[1,16,197,197]>,
ttnn.expaten::_softmax4
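
Most of the shapes in this table come from softmax decompositions: aten::_softmax and aten::_safe_softmax decompose into an exponential plus a max-subtraction and a normalizing sum, and the exponential step is what maps to ttnn.exp. A small illustrative sketch (not code from this repository):

import torch
import torch.nn.functional as F

# Softmax decomposes into exp(x - max) / sum(...), so attention softmaxes such
# as variation 3 above (Tensor<[8,920,920]>) surface here as ttnn.exp.
scores = torch.randn(8, 920, 920)
attn = F.softmax(scores, dim=-1)

# Direct exponentials appear as well, e.g. variation 15 above (Tensor<[160]>).
x = torch.randn(160)
y = torch.exp(x)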

stablehlo.floor::ttnn.floor

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[128]>,
ttnn.flooraten::floor_divide4
1Tensor<[19]>,
ttnn.flooraten::floor_divide4
2Tensor<[197]>,
ttnn.flooraten::floor_divide4
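
The aten::floor_divide entries show up here because floor division decomposes into a division followed by a floor, and only the floor step lands in this table. A small illustrative sketch (not code from this repository):

import torch

# Floor division on a float tensor, as in variation 0 above (Tensor<[128]>).
idx = torch.arange(128, dtype=torch.float32)
buckets = torch.floor_divide(idx, 4.0)   # same result as torch.floor(idx / 4.0)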

stablehlo.gather::ttnn.embedding

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[32000,4096]>,
Tensor<[1,32]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
1Tensor<[50257,768]>,
Tensor<[1,7]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
2Tensor<[1024,768]>,
Tensor<[1,7]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
3Tensor<[1,7,2]>,
Tensor<[1,2]>,
offset_dims: [1]
collapsed_slice_dims: [0, 1]
start_index_map: [0, 1]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
4Tensor<[1,1,720,1280]>,
Tensor<[1,1,23,40,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
5Tensor<[250002,768]>,
Tensor<[1,10]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
6Tensor<[1,768]>,
Tensor<[1,10]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
7Tensor<[514,768]>,
Tensor<[1,10]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
8Tensor<[1,1280,8,8]>,
Tensor<[1,1280,16,16,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
9Tensor<[1,1280,16,16]>,
Tensor<[1,1280,32,32,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
10Tensor<[1,640,32,32]>,
Tensor<[1,640,64,64,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
11Tensor<[30522,768]>,
Tensor<[1,25]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
12Tensor<[2,768]>,
Tensor<[1,25]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
13Tensor<[512,768]>,
Tensor<[1,25]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
14Tensor<[30528,768]>,
Tensor<[1,8]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
15Tensor<[512,768]>,
Tensor<[1,8]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
16Tensor<[2,768]>,
Tensor<[1,8]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
17Tensor<[262,768]>,
Tensor<[1,2048]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
18Tensor<[2048,768]>,
Tensor<[2048]>,
offset_dims: [1]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
19Tensor<[30522,768]>,
Tensor<[1,8]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
20Tensor<[40,768]>,
Tensor<[1,8]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
21Tensor<[2,768]>,
Tensor<[1,193]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
22Tensor<[1,1,384,512]>,
Tensor<[1,1,12,16,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
23Tensor<[256008,1024]>,
Tensor<[1,19]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
24Tensor<[19,256008]>,
Tensor<[19,1,2]>,
collapsed_slice_dims: [0, 1]
start_index_map: [0, 1]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::gather4
25Tensor<[2050,1024]>,
Tensor<[19]>,
offset_dims: [1]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index_select4
26Tensor<[1,256,16,16]>,
Tensor<[1,256,32,32,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
27Tensor<[1,128,32,32]>,
Tensor<[1,128,64,64,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
28Tensor<[250880,1536]>,
Tensor<[1,32]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
29Tensor<[30522,768]>,
Tensor<[1,16]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
30Tensor<[512,768]>,
Tensor<[1,16]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
31Tensor<[1,64,15,20]>,
Tensor<[1,64,30,40,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
32Tensor<[1,64,30,40]>,
Tensor<[1,64,60,80,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
33Tensor<[1,64,60,80]>,
Tensor<[1,64,120,160,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
34Tensor<[1,64,120,160]>,
Tensor<[1,64,240,320,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
35Tensor<[1,64,240,320]>,
Tensor<[1,64,480,640,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
36Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
37Tensor<[1,256,64,64]>,
Tensor<[1,256,128,128,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
38Tensor<[1,256,32,32]>,
Tensor<[1,256,128,128,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
39Tensor<[1,256,16,16]>,
Tensor<[1,256,128,128,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
40Tensor<[65024,4544]>,
Tensor<[1,7]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
41Tensor<[1,7,73,64]>,
Tensor<[1,7,1,64,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
42Tensor<[30000,128]>,
Tensor<[1,12]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
43Tensor<[2,128]>,
Tensor<[1,12]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
44Tensor<[512,128]>,
Tensor<[1,12]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
45Tensor<[30000,128]>,
Tensor<[1,9]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
46Tensor<[2,128]>,
Tensor<[1,9]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
47Tensor<[512,128]>,
Tensor<[1,9]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
48Tensor<[30000,128]>,
Tensor<[1,14]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
49Tensor<[2,128]>,
Tensor<[1,14]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
50Tensor<[512,128]>,
Tensor<[1,14]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
51Tensor<[50,768]>,
Tensor<[1,50]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
52Tensor<[49408,512]>,
Tensor<[2,7]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
53Tensor<[77,512]>,
Tensor<[1,7]>,
offset_dims: [2]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 2
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::embedding4
54Tensor<[2,7,512]>,
Tensor<[2,2]>,
offset_dims: [1]
collapsed_slice_dims: [0, 1]
start_index_map: [0, 1]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
55Tensor<[1,16,27,27]>,
Tensor<[1,16,27,27,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
56Tensor<[732,16]>,
Tensor<[38809,1]>,
offset_dims: [1]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
57Tensor<[1,12,27,27]>,
Tensor<[1,12,27,27,4]>,
collapsed_slice_dims: [0, 1, 2, 3]
start_index_map: [0, 1, 2, 3]
index_vector_dim: 4
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
58Tensor<[732,12]>,
Tensor<[38809,1]>,
offset_dims: [1]
collapsed_slice_dims: [0]
start_index_map: [0]
index_vector_dim: 1
indices_are_sorted: false
slice_sizes: array<i64
ttnn.embeddingaten::index.Tensor4
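
The gather variations fall into two groups: embedding-table lookups (aten::embedding) and advanced tensor indexing (aten::index.Tensor, aten::index_select, aten::gather), all of which are mapped to ttnn.embedding here. A small illustrative sketch (not code from this repository):

import torch
import torch.nn as nn

# Embedding lookup -> stablehlo.gather -> ttnn.embedding
# (variation 2 above: a [1024, 768] table indexed with a [1, 7] index tensor).
pos_table = nn.Embedding(num_embeddings=1024, embedding_dim=768)
position_ids = torch.arange(7).unsqueeze(0)        # shape [1, 7]
pos_embed = pos_table(position_ids)                # shape [1, 7, 768]

# Advanced indexing lowers to gather as well; roughly the access pattern of
# variation 56 above (a [732, 16] table indexed with [38809, 1] indices).
bias_table = torch.randn(732, 16)
relative_idx = torch.randint(0, 732, (38809, 1))
gathered = bias_table[relative_idx.squeeze(-1)]    # shape [38809, 16]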

stablehlo.iota::ttnn.arange

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[19,1,1]>,
Tensor<[19,1,1]>,
dim: 0
ttnn.arangeaten::gather4

stablehlo.log::ttnn.log

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,1]>,
ttnn.logaten::log4
1Tensor<[19,1]>,
ttnn.logaten::log4

stablehlo.logistic::ttnn.sigmoid

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,11008]>,
ttnn.sigmoidaten::silu4
1Tensor<[6,1,100,4]>,
ttnn.sigmoidaten::sigmoid4
2Tensor<[1,1280]>,
ttnn.sigmoidaten::silu4
3Tensor<[1,320,64,64]>,
ttnn.sigmoidaten::silu4
4Tensor<[1,320,32,32]>,
ttnn.sigmoidaten::silu4
5Tensor<[1,640,32,32]>,
ttnn.sigmoidaten::silu4
6Tensor<[1,640,16,16]>,
ttnn.sigmoidaten::silu4
7Tensor<[1,1280,16,16]>,
ttnn.sigmoidaten::silu4
8Tensor<[1,1280,8,8]>,
ttnn.sigmoidaten::silu4
9Tensor<[1,2560,8,8]>,
ttnn.sigmoidaten::silu4
10Tensor<[1,2560,16,16]>,
ttnn.sigmoidaten::silu4
11Tensor<[1,1920,16,16]>,
ttnn.sigmoidaten::silu4
12Tensor<[1,1920,32,32]>,
ttnn.sigmoidaten::silu4
13Tensor<[1,1280,32,32]>,
ttnn.sigmoidaten::silu4
14Tensor<[1,960,32,32]>,
ttnn.sigmoidaten::silu4
15Tensor<[1,960,64,64]>,
ttnn.sigmoidaten::silu4
16Tensor<[1,640,64,64]>,
ttnn.sigmoidaten::silu4
17Tensor<[1,100,4]>,
ttnn.sigmoidaten::sigmoid4
18Tensor<[1,1,256,256]>,
ttnn.sigmoidaten::sigmoid4
19Tensor<[1,2,30,40]>,
ttnn.sigmoidaten::sigmoid4
20Tensor<[1,2,60,80]>,
ttnn.sigmoidaten::sigmoid4
21Tensor<[1,2,120,160]>,
ttnn.sigmoidaten::sigmoid4
22Tensor<[1,1,480,640]>,
ttnn.sigmoidaten::sigmoid4
23Tensor<[1,50,3072]>,
ttnn.sigmoidaten::sigmoid4
24Tensor<[2,7,2048]>,
ttnn.sigmoidaten::sigmoid4
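
Most of these entries are tagged aten::silu because SiLU is x * sigmoid(x): only the sigmoid factor is recorded in this table, while the multiply is handled separately. The remaining rows are direct aten::sigmoid calls. A small illustrative sketch (not code from this repository):

import torch
import torch.nn.functional as F

# SiLU is x * sigmoid(x); the sigmoid factor is what maps to ttnn.sigmoid,
# e.g. variation 2 above (Tensor<[1,1280]>).
x = torch.randn(1, 1280)
silu_out = F.silu(x)
assert torch.allclose(silu_out, x * torch.sigmoid(x))

# Plain sigmoid calls map here directly, e.g. variation 17 above
# (Tensor<[1,100,4]>).
boxes = torch.sigmoid(torch.randn(1, 100, 4))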

stablehlo.maximum::ttnn.maximum

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,128,28,28]>,
Tensor<[1,128,28,28]>,
ttnn.maximumaten::elu4
1Tensor<[1,32,112,112]>,
Tensor<[1,32,112,112]>,
ttnn.maximumaten::relu4
2Tensor<[1,64,112,112]>,
Tensor<[1,64,112,112]>,
ttnn.maximumaten::relu4
3Tensor<[1,64,56,56]>,
Tensor<[1,64,56,56]>,
ttnn.maximumaten::relu4
4Tensor<[1,128,56,56]>,
Tensor<[1,128,56,56]>,
ttnn.maximumaten::relu4
5Tensor<[1,256,28,28]>,
Tensor<[1,256,28,28]>,
ttnn.maximumaten::relu4
6Tensor<[1,512,28,28]>,
Tensor<[1,512,28,28]>,
ttnn.maximumaten::relu4
7Tensor<[1,64,360,640]>,
Tensor<[1,64,360,640]>,
ttnn.maximumaten::relu4
8Tensor<[1,64,180,320]>,
Tensor<[1,64,180,320]>,
ttnn.maximumaten::relu4
9Tensor<[1,256,180,320]>,
Tensor<[1,256,180,320]>,
ttnn.maximumaten::relu4
10Tensor<[1,128,180,320]>,
Tensor<[1,128,180,320]>,
ttnn.maximumaten::relu4
11Tensor<[1,128,90,160]>,
Tensor<[1,128,90,160]>,
ttnn.maximumaten::relu4
12Tensor<[1,512,90,160]>,
Tensor<[1,512,90,160]>,
ttnn.maximumaten::relu4
13Tensor<[1,256,90,160]>,
Tensor<[1,256,90,160]>,
ttnn.maximumaten::relu4
14Tensor<[1,256,45,80]>,
Tensor<[1,256,45,80]>,
ttnn.maximumaten::relu4
15Tensor<[1,1024,45,80]>,
Tensor<[1,1024,45,80]>,
ttnn.maximumaten::relu4
16Tensor<[1,512,45,80]>,
Tensor<[1,512,45,80]>,
ttnn.maximumaten::relu4
17Tensor<[1,512,23,40]>,
Tensor<[1,512,23,40]>,
ttnn.maximumaten::relu4
18Tensor<[1,2048,23,40]>,
Tensor<[1,2048,23,40]>,
ttnn.maximumaten::relu4
19Tensor<[920,1,2048]>,
Tensor<[920,1,2048]>,
ttnn.maximumaten::relu4
20Tensor<[100,1,2048]>,
Tensor<[100,1,2048]>,
ttnn.maximumaten::relu4
21Tensor<[6,1,100,256]>,
Tensor<[6,1,100,256]>,
ttnn.maximumaten::relu4
22Tensor<[1,100,192]>,
Tensor<[1,100,192]>,
ttnn.maximumaten::relu4
23Tensor<[1,256,14,14]>,
Tensor<[1,256,14,14]>,
ttnn.maximumaten::relu4
24Tensor<[1,512,7,7]>,
Tensor<[1,512,7,7]>,
ttnn.maximumaten::relu4
25Tensor<[1,256,56,56]>,
Tensor<[1,256,56,56]>,
ttnn.maximumaten::relu4
26Tensor<[1,1024,14,14]>,
Tensor<[1,1024,14,14]>,
ttnn.maximumaten::relu4
27Tensor<[1,512,14,14]>,
Tensor<[1,512,14,14]>,
ttnn.maximumaten::relu4
28Tensor<[1,2048,7,7]>,
Tensor<[1,2048,7,7]>,
ttnn.maximumaten::relu4
29Tensor<[1,32,26,26]>,
Tensor<[1,32,26,26]>,
ttnn.maximumaten::relu4
30Tensor<[1,64,24,24]>,
Tensor<[1,64,24,24]>,
ttnn.maximumaten::relu4
31Tensor<[1,128]>,
Tensor<[1,128]>,
ttnn.maximumaten::relu4
32Tensor<[1,16,19,19]>,
Tensor<[1,16,19,19]>,
ttnn.maximumaten::maximum4
33Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
ttnn.maximumaten::hardtanh4
34Tensor<[1,24,56,56]>,
Tensor<[1,24,56,56]>,
ttnn.maximumaten::hardtanh4
35Tensor<[1,40,56,56]>,
Tensor<[1,40,56,56]>,
ttnn.maximumaten::hardtanh4
36Tensor<[1,68,56,56]>,
Tensor<[1,68,56,56]>,
ttnn.maximumaten::hardtanh4
37Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
ttnn.maximumaten::hardtanh4
38Tensor<[1,28,28,28]>,
Tensor<[1,28,28,28]>,
ttnn.maximumaten::hardtanh4
39Tensor<[1,46,28,28]>,
Tensor<[1,46,28,28]>,
ttnn.maximumaten::hardtanh4
40Tensor<[1,78,28,28]>,
Tensor<[1,78,28,28]>,
ttnn.maximumaten::hardtanh4
41Tensor<[1,134,28,28]>,
Tensor<[1,134,28,28]>,
ttnn.maximumaten::hardtanh4
42Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
ttnn.maximumaten::hardtanh4
43Tensor<[1,34,28,28]>,
Tensor<[1,34,28,28]>,
ttnn.maximumaten::hardtanh4
44Tensor<[1,58,28,28]>,
Tensor<[1,58,28,28]>,
ttnn.maximumaten::hardtanh4
45Tensor<[1,98,28,28]>,
Tensor<[1,98,28,28]>,
ttnn.maximumaten::hardtanh4
46Tensor<[1,168,28,28]>,
Tensor<[1,168,28,28]>,
ttnn.maximumaten::hardtanh4
47Tensor<[1,320,28,28]>,
Tensor<[1,320,28,28]>,
ttnn.maximumaten::hardtanh4
48Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
ttnn.maximumaten::hardtanh4
49Tensor<[1,68,14,14]>,
Tensor<[1,68,14,14]>,
ttnn.maximumaten::hardtanh4
50Tensor<[1,116,14,14]>,
Tensor<[1,116,14,14]>,
ttnn.maximumaten::hardtanh4
51Tensor<[1,196,14,14]>,
Tensor<[1,196,14,14]>,
ttnn.maximumaten::hardtanh4
52Tensor<[1,334,14,14]>,
Tensor<[1,334,14,14]>,
ttnn.maximumaten::hardtanh4
53Tensor<[1,640,14,14]>,
Tensor<[1,640,14,14]>,
ttnn.maximumaten::hardtanh4
54Tensor<[1,160,7,7]>,
Tensor<[1,160,7,7]>,
ttnn.maximumaten::hardtanh4
55Tensor<[1,272,7,7]>,
Tensor<[1,272,7,7]>,
ttnn.maximumaten::hardtanh4
56Tensor<[1,462,7,7]>,
Tensor<[1,462,7,7]>,
ttnn.maximumaten::hardtanh4
57Tensor<[1,1024,7,7]>,
Tensor<[1,1024,7,7]>,
ttnn.maximumaten::hardtanh4
58Tensor<[1,32,512,512]>,
Tensor<[1,32,512,512]>,
ttnn.maximumaten::leaky_relu4
59Tensor<[1,64,256,256]>,
Tensor<[1,64,256,256]>,
ttnn.maximumaten::leaky_relu4
60Tensor<[1,32,256,256]>,
Tensor<[1,32,256,256]>,
ttnn.maximumaten::leaky_relu4
61Tensor<[1,128,128,128]>,
Tensor<[1,128,128,128]>,
ttnn.maximumaten::leaky_relu4
62Tensor<[1,64,128,128]>,
Tensor<[1,64,128,128]>,
ttnn.maximumaten::leaky_relu4
63Tensor<[1,256,64,64]>,
Tensor<[1,256,64,64]>,
ttnn.maximumaten::leaky_relu4
64Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64]>,
ttnn.maximumaten::leaky_relu4
65Tensor<[1,512,32,32]>,
Tensor<[1,512,32,32]>,
ttnn.maximumaten::leaky_relu4
66Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32]>,
ttnn.maximumaten::leaky_relu4
67Tensor<[1,1024,16,16]>,
Tensor<[1,1024,16,16]>,
ttnn.maximumaten::leaky_relu4
68Tensor<[1,512,16,16]>,
Tensor<[1,512,16,16]>,
ttnn.maximumaten::leaky_relu4
69Tensor<[1,256,16,16]>,
Tensor<[1,256,16,16]>,
ttnn.maximumaten::leaky_relu4
70Tensor<[1,128,32,32]>,
Tensor<[1,128,32,32]>,
ttnn.maximumaten::leaky_relu4
71Tensor<[1,4,14,14]>,
Tensor<[1,4,14,14]>,
ttnn.maximumaten::relu4
72Tensor<[1,16,14,14]>,
Tensor<[1,16,14,14]>,
ttnn.maximumaten::relu4
73Tensor<[1,64,224,224]>,
Tensor<[1,64,224,224]>,
ttnn.maximumaten::relu4
74Tensor<[1,128,112,112]>,
Tensor<[1,128,112,112]>,
ttnn.maximumaten::relu4
75Tensor<[1,64,30,40]>,
Tensor<[1,64,30,40]>,
ttnn.maximumaten::relu4
76Tensor<[1,32,30,40]>,
Tensor<[1,32,30,40]>,
ttnn.maximumaten::relu4
77Tensor<[1,64,60,80]>,
Tensor<[1,64,60,80]>,
ttnn.maximumaten::relu4
78Tensor<[1,32,60,80]>,
Tensor<[1,32,60,80]>,
ttnn.maximumaten::relu4
79Tensor<[1,64,120,160]>,
Tensor<[1,64,120,160]>,
ttnn.maximumaten::relu4
80Tensor<[1,32,120,160]>,
Tensor<[1,32,120,160]>,
ttnn.maximumaten::relu4
81Tensor<[1,64,480,640]>,
Tensor<[1,64,480,640]>,
ttnn.maximumaten::relu4
82Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
ttnn.maximumaten::relu4
83Tensor<[1,96,112,112]>,
Tensor<[1,96,112,112]>,
ttnn.maximumaten::hardtanh4
84Tensor<[1,96,56,56]>,
Tensor<[1,96,56,56]>,
ttnn.maximumaten::hardtanh4
85Tensor<[1,144,56,56]>,
Tensor<[1,144,56,56]>,
ttnn.maximumaten::hardtanh4
86Tensor<[1,144,28,28]>,
Tensor<[1,144,28,28]>,
ttnn.maximumaten::hardtanh4
87Tensor<[1,192,28,28]>,
Tensor<[1,192,28,28]>,
ttnn.maximumaten::hardtanh4
88Tensor<[1,192,14,14]>,
Tensor<[1,192,14,14]>,
ttnn.maximumaten::hardtanh4
89Tensor<[1,384,14,14]>,
Tensor<[1,384,14,14]>,
ttnn.maximumaten::hardtanh4
90Tensor<[1,576,14,14]>,
Tensor<[1,576,14,14]>,
ttnn.maximumaten::hardtanh4
91Tensor<[1,576,7,7]>,
Tensor<[1,576,7,7]>,
ttnn.maximumaten::hardtanh4
92Tensor<[1,960,7,7]>,
Tensor<[1,960,7,7]>,
ttnn.maximumaten::hardtanh4
93Tensor<[1,1280,7,7]>,
Tensor<[1,1280,7,7]>,
ttnn.maximumaten::hardtanh4
94Tensor<[1,64]>,
Tensor<[1,64]>,
ttnn.maximumaten::relu4
95Tensor<[1,12]>,
Tensor<[1,12]>,
ttnn.maximumaten::relu4

stablehlo.minimum::ttnn.minimum

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,128,28,28]>,
Tensor<[1,128,28,28]>,
ttnn.minimumaten::elu4
1Tensor<[1,32,112,112]>,
Tensor<[1,32,112,112]>,
ttnn.minimumaten::hardtanh4
2Tensor<[1,64,112,112]>,
Tensor<[1,64,112,112]>,
ttnn.minimumaten::hardtanh4
3Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
ttnn.minimumaten::hardtanh4
4Tensor<[1,24,56,56]>,
Tensor<[1,24,56,56]>,
ttnn.minimumaten::hardtanh4
5Tensor<[1,40,56,56]>,
Tensor<[1,40,56,56]>,
ttnn.minimumaten::hardtanh4
6Tensor<[1,68,56,56]>,
Tensor<[1,68,56,56]>,
ttnn.minimumaten::hardtanh4
7Tensor<[1,128,56,56]>,
Tensor<[1,128,56,56]>,
ttnn.minimumaten::hardtanh4
8Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
ttnn.minimumaten::hardtanh4
9Tensor<[1,28,28,28]>,
Tensor<[1,28,28,28]>,
ttnn.minimumaten::hardtanh4
10Tensor<[1,46,28,28]>,
Tensor<[1,46,28,28]>,
ttnn.minimumaten::hardtanh4
11Tensor<[1,78,28,28]>,
Tensor<[1,78,28,28]>,
ttnn.minimumaten::hardtanh4
12Tensor<[1,134,28,28]>,
Tensor<[1,134,28,28]>,
ttnn.minimumaten::hardtanh4
13Tensor<[1,256,28,28]>,
Tensor<[1,256,28,28]>,
ttnn.minimumaten::hardtanh4
14Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
ttnn.minimumaten::hardtanh4
15Tensor<[1,34,28,28]>,
Tensor<[1,34,28,28]>,
ttnn.minimumaten::hardtanh4
16Tensor<[1,58,28,28]>,
Tensor<[1,58,28,28]>,
ttnn.minimumaten::hardtanh4
17Tensor<[1,98,28,28]>,
Tensor<[1,98,28,28]>,
ttnn.minimumaten::hardtanh4
18Tensor<[1,168,28,28]>,
Tensor<[1,168,28,28]>,
ttnn.minimumaten::hardtanh4
19Tensor<[1,320,28,28]>,
Tensor<[1,320,28,28]>,
ttnn.minimumaten::hardtanh4
20Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
ttnn.minimumaten::hardtanh4
21Tensor<[1,68,14,14]>,
Tensor<[1,68,14,14]>,
ttnn.minimumaten::hardtanh4
22Tensor<[1,116,14,14]>,
Tensor<[1,116,14,14]>,
ttnn.minimumaten::hardtanh4
23Tensor<[1,196,14,14]>,
Tensor<[1,196,14,14]>,
ttnn.minimumaten::hardtanh4
24Tensor<[1,334,14,14]>,
Tensor<[1,334,14,14]>,
ttnn.minimumaten::hardtanh4
25Tensor<[1,640,14,14]>,
Tensor<[1,640,14,14]>,
ttnn.minimumaten::hardtanh4
26Tensor<[1,160,7,7]>,
Tensor<[1,160,7,7]>,
ttnn.minimumaten::hardtanh4
27Tensor<[1,272,7,7]>,
Tensor<[1,272,7,7]>,
ttnn.minimumaten::hardtanh4
28Tensor<[1,462,7,7]>,
Tensor<[1,462,7,7]>,
ttnn.minimumaten::hardtanh4
29Tensor<[1,1024,7,7]>,
Tensor<[1,1024,7,7]>,
ttnn.minimumaten::hardtanh4
30Tensor<[1,32,512,512]>,
Tensor<[1,32,512,512]>,
ttnn.minimumaten::leaky_relu4
31Tensor<[1,64,256,256]>,
Tensor<[1,64,256,256]>,
ttnn.minimumaten::leaky_relu4
32Tensor<[1,32,256,256]>,
Tensor<[1,32,256,256]>,
ttnn.minimumaten::leaky_relu4
33Tensor<[1,128,128,128]>,
Tensor<[1,128,128,128]>,
ttnn.minimumaten::leaky_relu4
34Tensor<[1,64,128,128]>,
Tensor<[1,64,128,128]>,
ttnn.minimumaten::leaky_relu4
35Tensor<[1,256,64,64]>,
Tensor<[1,256,64,64]>,
ttnn.minimumaten::leaky_relu4
36Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64]>,
ttnn.minimumaten::leaky_relu4
37Tensor<[1,512,32,32]>,
Tensor<[1,512,32,32]>,
ttnn.minimumaten::leaky_relu4
38Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32]>,
ttnn.minimumaten::leaky_relu4
39Tensor<[1,1024,16,16]>,
Tensor<[1,1024,16,16]>,
ttnn.minimumaten::leaky_relu4
40Tensor<[1,512,16,16]>,
Tensor<[1,512,16,16]>,
ttnn.minimumaten::leaky_relu4
41Tensor<[1,256,16,16]>,
Tensor<[1,256,16,16]>,
ttnn.minimumaten::leaky_relu4
42Tensor<[1,128,32,32]>,
Tensor<[1,128,32,32]>,
ttnn.minimumaten::leaky_relu4
43Tensor<[1,96,112,112]>,
Tensor<[1,96,112,112]>,
ttnn.minimumaten::hardtanh4
44Tensor<[1,96,56,56]>,
Tensor<[1,96,56,56]>,
ttnn.minimumaten::hardtanh4
45Tensor<[1,144,56,56]>,
Tensor<[1,144,56,56]>,
ttnn.minimumaten::hardtanh4
46Tensor<[1,144,28,28]>,
Tensor<[1,144,28,28]>,
ttnn.minimumaten::hardtanh4
47Tensor<[1,192,28,28]>,
Tensor<[1,192,28,28]>,
ttnn.minimumaten::hardtanh4
48Tensor<[1,192,14,14]>,
Tensor<[1,192,14,14]>,
ttnn.minimumaten::hardtanh4
49Tensor<[1,384,14,14]>,
Tensor<[1,384,14,14]>,
ttnn.minimumaten::hardtanh4
50Tensor<[1,576,14,14]>,
Tensor<[1,576,14,14]>,
ttnn.minimumaten::hardtanh4
51Tensor<[1,576,7,7]>,
Tensor<[1,576,7,7]>,
ttnn.minimumaten::hardtanh4
52Tensor<[1,960,7,7]>,
Tensor<[1,960,7,7]>,
ttnn.minimumaten::hardtanh4
53Tensor<[1,1280,7,7]>,
Tensor<[1,1280,7,7]>,
ttnn.minimumaten::hardtanh4
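
Taken together, the maximum and minimum tables above cover the ReLU-family activations: relu is an elementwise maximum with zero, hardtanh (and relu6) is a maximum with the lower bound followed by a minimum with the upper bound, and leaky_relu and elu are likewise built from a max/min pair plus elementwise arithmetic (the exp and multiply pieces of elu appear in their own tables). A small illustrative sketch (not code from this repository):

import torch
import torch.nn.functional as F

x = torch.randn(1, 96, 112, 112)   # the hardtanh shape in variations 83/43 above

# relu(x) is an elementwise maximum with zero.
assert torch.allclose(F.relu(x), torch.maximum(x, torch.zeros_like(x)))

# hardtanh / relu6 is a clamp: a maximum with the lower bound followed by a
# minimum with the upper bound.
reference = torch.minimum(torch.maximum(x, torch.tensor(0.0)), torch.tensor(6.0))
assert torch.allclose(F.hardtanh(x, min_val=0.0, max_val=6.0), reference)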

stablehlo.multiply::ttnn.multiply

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[32]>,
Tensor<[32]>,
ttnn.multiplyaten::arange4
1Tensor<[1,32,32,128]>,
Tensor<[1,32,32,128]>,
ttnn.multiplyaten::mul.Scalar4
2Tensor<[1,32,128,32]>,
Tensor<[1,32,128,32]>,
ttnn.multiplyaten::mul.Scalar4
3Tensor<[32,32]>,
Tensor<[32,32]>,
ttnn.multiplyaten::mul.Tensor5
4Tensor<[1,32,128]>,
Tensor<[1,32,128]>,
ttnn.multiplyaten::mul.Tensor4
5Tensor<[1,32,4096]>,
Tensor<[1,32,4096]>,
ttnn.multiplyaten::mul.Tensor4
6Tensor<[1,32,11008]>,
Tensor<[1,32,11008]>,
ttnn.multiplyaten::mul.Tensor5
7Tensor<[7]>,
Tensor<[7]>,
ttnn.multiplyaten::arange4
8Tensor<[1]>,
Tensor<[1]>,
ttnn.multiplyaten::arange4
9Tensor<[1,12,7,64]>,
Tensor<[1,12,7,64]>,
ttnn.multiplyaten::mul.Scalar4
10Tensor<[1,12,64,7]>,
Tensor<[1,12,64,7]>,
ttnn.multiplyaten::mul.Scalar4
11Tensor<[1,7,768]>,
Tensor<[1,7,768]>,
ttnn.multiplyaten::mul.Tensor4
12Tensor<[7,2304]>,
Tensor<[7,2304]>,
ttnn.multiplyaten::mul.Tensor4
13Tensor<[2304]>,
Tensor<[2304]>,
ttnn.multiplyaten::mul.Tensor4
14Tensor<[7,768]>,
Tensor<[7,768]>,
ttnn.multiplyaten::mul.Tensor4
15Tensor<[768]>,
Tensor<[768]>,
ttnn.multiplyaten::mul.Tensor4
16Tensor<[7,3072]>,
Tensor<[7,3072]>,
ttnn.multiplyaten::mul.Tensor4
17Tensor<[3072]>,
Tensor<[3072]>,
ttnn.multiplyaten::mul.Tensor4
18Tensor<[1,7,3072]>,
Tensor<[1,7,3072]>,
ttnn.multiplyaten::mul.Tensor4
19Tensor<[1,128,28,28]>,
Tensor<[1,128,28,28]>,
ttnn.multiplyaten::elu4
20Tensor<[1,32,112,112]>,
Tensor<[1,32,112,112]>,
ttnn.multiplyaten::mul.Tensor4
21Tensor<[64]>,
Tensor<[64]>,
ttnn.multiplyaten::mul.Tensor4
22Tensor<[1,64,112,112]>,
Tensor<[1,64,112,112]>,
ttnn.multiplyaten::mul.Tensor4
23Tensor<[1,64,56,56]>,
Tensor<[1,64,56,56]>,
ttnn.multiplyaten::mul.Tensor4
24Tensor<[128]>,
Tensor<[128]>,
ttnn.multiplyaten::mul.Tensor4
25Tensor<[1,128,56,56]>,
Tensor<[1,128,56,56]>,
ttnn.multiplyaten::mul.Tensor4
26Tensor<[256]>,
Tensor<[256]>,
ttnn.multiplyaten::mul.Tensor4
27Tensor<[1,256,28,28]>,
Tensor<[1,256,28,28]>,
ttnn.multiplyaten::mul.Tensor4
28Tensor<[512]>,
Tensor<[512]>,
ttnn.multiplyaten::mul.Tensor4
29Tensor<[1,512,28,28]>,
Tensor<[1,512,28,28]>,
ttnn.multiplyaten::mul.Tensor4
30Tensor<[1,1024,512]>,
Tensor<[1,1024,512]>,
ttnn.multiplyaten::gelu4
31Tensor<[1,256,256]>,
Tensor<[1,256,256]>,
ttnn.multiplyaten::gelu4
32Tensor<[256,512]>,
Tensor<[256,512]>,
ttnn.multiplyaten::mul.Tensor4
33Tensor<[1,256,512]>,
Tensor<[1,256,512]>,
ttnn.multiplyaten::mul.Tensor4
34Tensor<[256,256]>,
Tensor<[256,256]>,
ttnn.multiplyaten::mul.Tensor4
35Tensor<[1,1000]>,
Tensor<[1,1000]>,
ttnn.multiplyaten::mul.Tensor4
36Tensor<[1000]>,
Tensor<[1000]>,
ttnn.multiplyaten::mul.Tensor4
37Tensor<[23]>,
Tensor<[23]>,
ttnn.multiplyaten::arange4
38Tensor<[40]>,
Tensor<[40]>,
ttnn.multiplyaten::arange4
39Tensor<[8,920,920]>,
Tensor<[8,920,920]>,
ttnn.multiplyaten::baddbmm4
40Tensor<[8,100,920]>,
Tensor<[8,100,920]>,
ttnn.multiplyaten::baddbmm4
41Tensor<[1,64,1,1]>,
Tensor<[1,64,1,1]>,
ttnn.multiplyaten::mul.Tensor5
42Tensor<[1,64,360,640]>,
Tensor<[1,64,360,640]>,
ttnn.multiplyaten::mul.Tensor4
43Tensor<[1,64,180,320]>,
Tensor<[1,64,180,320]>,
ttnn.multiplyaten::mul.Tensor4
44Tensor<[1,256,1,1]>,
Tensor<[1,256,1,1]>,
ttnn.multiplyaten::mul.Tensor5
45Tensor<[1,256,180,320]>,
Tensor<[1,256,180,320]>,
ttnn.multiplyaten::mul.Tensor4
46Tensor<[1,128,1,1]>,
Tensor<[1,128,1,1]>,
ttnn.multiplyaten::mul.Tensor5
47Tensor<[1,128,180,320]>,
Tensor<[1,128,180,320]>,
ttnn.multiplyaten::mul.Tensor4
48Tensor<[1,128,90,160]>,
Tensor<[1,128,90,160]>,
ttnn.multiplyaten::mul.Tensor4
49Tensor<[1,512,1,1]>,
Tensor<[1,512,1,1]>,
ttnn.multiplyaten::mul.Tensor5
50Tensor<[1,512,90,160]>,
Tensor<[1,512,90,160]>,
ttnn.multiplyaten::mul.Tensor4
51Tensor<[1,256,90,160]>,
Tensor<[1,256,90,160]>,
ttnn.multiplyaten::mul.Tensor4
52Tensor<[1,256,45,80]>,
Tensor<[1,256,45,80]>,
ttnn.multiplyaten::mul.Tensor4
53Tensor<[1,1024,1,1]>,
Tensor<[1,1024,1,1]>,
ttnn.multiplyaten::mul.Tensor5
54Tensor<[1,1024,45,80]>,
Tensor<[1,1024,45,80]>,
ttnn.multiplyaten::mul.Tensor4
55Tensor<[1,512,45,80]>,
Tensor<[1,512,45,80]>,
ttnn.multiplyaten::mul.Tensor4
56Tensor<[1,512,23,40]>,
Tensor<[1,512,23,40]>,
ttnn.multiplyaten::mul.Tensor4
57Tensor<[1,2048,1,1]>,
Tensor<[1,2048,1,1]>,
ttnn.multiplyaten::mul.Tensor5
58Tensor<[1,2048,23,40]>,
Tensor<[1,2048,23,40]>,
ttnn.multiplyaten::mul.Tensor4
59Tensor<[1,23,40]>,
Tensor<[1,23,40]>,
ttnn.multiplyaten::mul.Tensor4
60Tensor<[8,920,32]>,
Tensor<[8,920,32]>,
ttnn.multiplyaten::mul.Tensor4
61Tensor<[920,256]>,
Tensor<[920,256]>,
ttnn.multiplyaten::mul.Tensor4
62Tensor<[920,1,256]>,
Tensor<[920,1,256]>,
ttnn.multiplyaten::mul.Tensor4
63Tensor<[920,2048]>,
Tensor<[920,2048]>,
ttnn.multiplyaten::mul.Tensor4
64Tensor<[2048]>,
Tensor<[2048]>,
ttnn.multiplyaten::mul.Tensor4
65Tensor<[100,256]>,
Tensor<[100,256]>,
ttnn.multiplyaten::mul.Tensor4
66Tensor<[8,100,32]>,
Tensor<[8,100,32]>,
ttnn.multiplyaten::mul.Tensor4
67Tensor<[100,1,256]>,
Tensor<[100,1,256]>,
ttnn.multiplyaten::mul.Tensor4
68Tensor<[100,2048]>,
Tensor<[100,2048]>,
ttnn.multiplyaten::mul.Tensor4
69Tensor<[1,10,3072]>,
Tensor<[1,10,3072]>,
ttnn.multiplyaten::gelu4
70Tensor<[1,10,768]>,
Tensor<[1,10,768]>,
ttnn.multiplyaten::gelu4
71Tensor<[1,12,10,64]>,
Tensor<[1,12,10,64]>,
ttnn.multiplyaten::mul.Scalar4
72Tensor<[1,12,64,10]>,
Tensor<[1,12,64,10]>,
ttnn.multiplyaten::mul.Scalar4
73Tensor<[1,10]>,
Tensor<[1,10]>,
ttnn.multiplyaten::mul.Tensor5
74Tensor<[10,768]>,
Tensor<[10,768]>,
ttnn.multiplyaten::mul.Tensor4
75Tensor<[10,3072]>,
Tensor<[10,3072]>,
ttnn.multiplyaten::mul.Tensor4
76Tensor<[10,250002]>,
Tensor<[10,250002]>,
ttnn.multiplyaten::mul.Tensor4
77Tensor<[250002]>,
Tensor<[250002]>,
ttnn.multiplyaten::mul.Tensor4
78Tensor<[16]>,
Tensor<[16]>,
ttnn.multiplyaten::arange4
79Tensor<[160]>,
Tensor<[160]>,
ttnn.multiplyaten::arange.start4
80Tensor<[1,4096,1280]>,
Tensor<[1,4096,1280]>,
ttnn.multiplyaten::gelu4
81Tensor<[1,1024,2560]>,
Tensor<[1,1024,2560]>,
ttnn.multiplyaten::gelu4
82Tensor<[1,256,5120]>,
Tensor<[1,256,5120]>,
ttnn.multiplyaten::gelu4
83Tensor<[1,64,5120]>,
Tensor<[1,64,5120]>,
ttnn.multiplyaten::gelu4
84Tensor<[1280]>,
Tensor<[1280]>,
ttnn.multiplyaten::index.Tensor4
85Tensor<[640]>,
Tensor<[640]>,
ttnn.multiplyaten::index.Tensor4
86Tensor<[1,8,4096,40]>,
Tensor<[1,8,4096,40]>,
ttnn.multiplyaten::mul.Scalar4
87Tensor<[1,8,40,4096]>,
Tensor<[1,8,40,4096]>,
ttnn.multiplyaten::mul.Scalar4
88Tensor<[1,8,40,9]>,
Tensor<[1,8,40,9]>,
ttnn.multiplyaten::mul.Scalar4
89Tensor<[1,8,1024,80]>,
Tensor<[1,8,1024,80]>,
ttnn.multiplyaten::mul.Scalar4
90Tensor<[1,8,80,1024]>,
Tensor<[1,8,80,1024]>,
ttnn.multiplyaten::mul.Scalar4
91Tensor<[1,8,80,9]>,
Tensor<[1,8,80,9]>,
ttnn.multiplyaten::mul.Scalar4
92Tensor<[1,8,256,160]>,
Tensor<[1,8,256,160]>,
ttnn.multiplyaten::mul.Scalar4
93Tensor<[1,8,160,256]>,
Tensor<[1,8,160,256]>,
ttnn.multiplyaten::mul.Scalar4
94Tensor<[1,8,160,9]>,
Tensor<[1,8,160,9]>,
ttnn.multiplyaten::mul.Scalar4
95Tensor<[1,8,64,160]>,
Tensor<[1,8,64,160]>,
ttnn.multiplyaten::mul.Scalar4
96Tensor<[1,8,160,64]>,
Tensor<[1,8,160,64]>,
ttnn.multiplyaten::mul.Scalar4
97Tensor<[1,160]>,
Tensor<[1,160]>,
ttnn.multiplyaten::mul.Tensor4
98Tensor<[1,1280]>,
Tensor<[1,1280]>,
ttnn.multiplyaten::mul.Tensor4
99Tensor<[1,32,10,4096]>,
Tensor<[1,32,10,4096]>,
ttnn.multiplyaten::mul.Tensor4
100Tensor<[1,320,64,64]>,
Tensor<[1,320,64,64]>,
ttnn.multiplyaten::mul.Tensor4
101Tensor<[1,320]>,
Tensor<[1,320]>,
ttnn.multiplyaten::mul.Tensor4
102Tensor<[320]>,
Tensor<[320]>,
ttnn.multiplyaten::mul.Tensor4
103Tensor<[1,4096,320]>,
Tensor<[1,4096,320]>,
ttnn.multiplyaten::mul.Tensor4
104Tensor<[4096,320]>,
Tensor<[4096,320]>,
ttnn.multiplyaten::mul.Tensor4
105Tensor<[4096,2560]>,
Tensor<[4096,2560]>,
ttnn.multiplyaten::mul.Tensor4
106Tensor<[2560]>,
Tensor<[2560]>,
ttnn.multiplyaten::mul.Tensor4
107Tensor<[1,32,10,1024]>,
Tensor<[1,32,10,1024]>,
ttnn.multiplyaten::mul.Tensor4
108Tensor<[1,320,32,32]>,
Tensor<[1,320,32,32]>,
ttnn.multiplyaten::mul.Tensor4
109Tensor<[1,640]>,
Tensor<[1,640]>,
ttnn.multiplyaten::mul.Tensor4
110Tensor<[1,32,20,1024]>,
Tensor<[1,32,20,1024]>,
ttnn.multiplyaten::mul.Tensor4
111Tensor<[1,640,32,32]>,
Tensor<[1,640,32,32]>,
ttnn.multiplyaten::mul.Tensor4
112Tensor<[1,1024,640]>,
Tensor<[1,1024,640]>,
ttnn.multiplyaten::mul.Tensor4
113Tensor<[1024,640]>,
Tensor<[1024,640]>,
ttnn.multiplyaten::mul.Tensor4
114Tensor<[1024,5120]>,
Tensor<[1024,5120]>,
ttnn.multiplyaten::mul.Tensor4
115Tensor<[5120]>,
Tensor<[5120]>,
ttnn.multiplyaten::mul.Tensor4
116Tensor<[1,32,20,256]>,
Tensor<[1,32,20,256]>,
ttnn.multiplyaten::mul.Tensor4
117Tensor<[1,640,16,16]>,
Tensor<[1,640,16,16]>,
ttnn.multiplyaten::mul.Tensor4
118Tensor<[1,32,40,256]>,
Tensor<[1,32,40,256]>,
ttnn.multiplyaten::mul.Tensor4
119Tensor<[1,1280,16,16]>,
Tensor<[1,1280,16,16]>,
ttnn.multiplyaten::mul.Tensor4
120Tensor<[1,256,1280]>,
Tensor<[1,256,1280]>,
ttnn.multiplyaten::mul.Tensor4
121Tensor<[256,1280]>,
Tensor<[256,1280]>,
ttnn.multiplyaten::mul.Tensor4
122Tensor<[256,10240]>,
Tensor<[256,10240]>,
ttnn.multiplyaten::mul.Tensor4
123Tensor<[10240]>,
Tensor<[10240]>,
ttnn.multiplyaten::mul.Tensor4
124Tensor<[1,32,40,64]>,
Tensor<[1,32,40,64]>,
ttnn.multiplyaten::mul.Tensor4
125Tensor<[1,1280,8,8]>,
Tensor<[1,1280,8,8]>,
ttnn.multiplyaten::mul.Tensor4
126Tensor<[1,64,1280]>,
Tensor<[1,64,1280]>,
ttnn.multiplyaten::mul.Tensor4
127Tensor<[64,1280]>,
Tensor<[64,1280]>,
ttnn.multiplyaten::mul.Tensor4
128Tensor<[64,10240]>,
Tensor<[64,10240]>,
ttnn.multiplyaten::mul.Tensor4
129Tensor<[1,32,80,64]>,
Tensor<[1,32,80,64]>,
ttnn.multiplyaten::mul.Tensor4
130Tensor<[1,2560,8,8]>,
Tensor<[1,2560,8,8]>,
ttnn.multiplyaten::mul.Tensor4
131Tensor<[1,32,80,256]>,
Tensor<[1,32,80,256]>,
ttnn.multiplyaten::mul.Tensor4
132Tensor<[1,2560,16,16]>,
Tensor<[1,2560,16,16]>,
ttnn.multiplyaten::mul.Tensor4
133Tensor<[1,32,60,256]>,
Tensor<[1,32,60,256]>,
ttnn.multiplyaten::mul.Tensor4
134Tensor<[1,1920,16,16]>,
Tensor<[1,1920,16,16]>,
ttnn.multiplyaten::mul.Tensor4
135Tensor<[1,32,60,1024]>,
Tensor<[1,32,60,1024]>,
ttnn.multiplyaten::mul.Tensor4
136Tensor<[1,1920,32,32]>,
Tensor<[1,1920,32,32]>,
ttnn.multiplyaten::mul.Tensor4
137Tensor<[1,32,40,1024]>,
Tensor<[1,32,40,1024]>,
ttnn.multiplyaten::mul.Tensor4
138Tensor<[1,1280,32,32]>,
Tensor<[1,1280,32,32]>,
ttnn.multiplyaten::mul.Tensor4
139Tensor<[1,32,30,1024]>,
Tensor<[1,32,30,1024]>,
ttnn.multiplyaten::mul.Tensor4
140Tensor<[1,960,32,32]>,
Tensor<[1,960,32,32]>,
ttnn.multiplyaten::mul.Tensor4
141Tensor<[1,32,30,4096]>,
Tensor<[1,32,30,4096]>,
ttnn.multiplyaten::mul.Tensor4
142Tensor<[1,960,64,64]>,
Tensor<[1,960,64,64]>,
ttnn.multiplyaten::mul.Tensor4
143Tensor<[1,32,20,4096]>,
Tensor<[1,32,20,4096]>,
ttnn.multiplyaten::mul.Tensor4
144Tensor<[1,640,64,64]>,
Tensor<[1,640,64,64]>,
ttnn.multiplyaten::mul.Tensor4
145Tensor<[1,25,3072]>,
Tensor<[1,25,3072]>,
ttnn.multiplyaten::gelu4
146Tensor<[1,12,25,64]>,
Tensor<[1,12,25,64]>,
ttnn.multiplyaten::mul.Scalar4
147Tensor<[1,12,64,25]>,
Tensor<[1,12,64,25]>,
ttnn.multiplyaten::mul.Scalar4
148Tensor<[1,25,768]>,
Tensor<[1,25,768]>,
ttnn.multiplyaten::mul.Tensor4
149Tensor<[25,768]>,
Tensor<[25,768]>,
ttnn.multiplyaten::mul.Tensor4
150Tensor<[25,3072]>,
Tensor<[25,3072]>,
ttnn.multiplyaten::mul.Tensor4
151Tensor<[25,2]>,
Tensor<[25,2]>,
ttnn.multiplyaten::mul.Tensor4
152Tensor<[2]>,
Tensor<[2]>,
ttnn.multiplyaten::mul.Tensor4
153Tensor<[1,1]>,
Tensor<[1,1]>,
ttnn.multiplyaten::mul.Tensor4
154Tensor<[1,1445,768]>,
Tensor<[1,1445,768]>,
ttnn.multiplyaten::gelu4
155Tensor<[1,3,1445,64]>,
Tensor<[1,3,1445,64]>,
ttnn.multiplyaten::mul.Scalar4
156Tensor<[1,3,64,1445]>,
Tensor<[1,3,64,1445]>,
ttnn.multiplyaten::mul.Scalar4
157Tensor<[1,1445,192]>,
Tensor<[1,1445,192]>,
ttnn.multiplyaten::mul.Tensor4
158Tensor<[1445,192]>,
Tensor<[1445,192]>,
ttnn.multiplyaten::mul.Tensor4
159Tensor<[192]>,
Tensor<[192]>,
ttnn.multiplyaten::mul.Tensor4
160Tensor<[1445,768]>,
Tensor<[1445,768]>,
ttnn.multiplyaten::mul.Tensor4
161Tensor<[100,192]>,
Tensor<[100,192]>,
ttnn.multiplyaten::mul.Tensor4
162Tensor<[100,92]>,
Tensor<[100,92]>,
ttnn.multiplyaten::mul.Tensor4
163Tensor<[92]>,
Tensor<[92]>,
ttnn.multiplyaten::mul.Tensor4
164Tensor<[100,4]>,
Tensor<[100,4]>,
ttnn.multiplyaten::mul.Tensor4
165Tensor<[4]>,
Tensor<[4]>,
ttnn.multiplyaten::mul.Tensor4
166Tensor<[1,256,14,14]>,
Tensor<[1,256,14,14]>,
ttnn.multiplyaten::mul.Tensor4
167Tensor<[1,512,7,7]>,
Tensor<[1,512,7,7]>,
ttnn.multiplyaten::mul.Tensor4
168Tensor<[1,3072,8]>,
Tensor<[1,3072,8]>,
ttnn.multiplyaten::gelu4
169Tensor<[1,1,1,8]>,
Tensor<[1,1,1,8]>,
ttnn.multiplyaten::mul.Tensor4
170Tensor<[1,8,768]>,
Tensor<[1,8,768]>,
ttnn.multiplyaten::mul.Tensor4
171Tensor<[1,768]>,
Tensor<[1,768]>,
ttnn.multiplyaten::mul.Tensor4
172Tensor<[1,3]>,
Tensor<[1,3]>,
ttnn.multiplyaten::mul.Tensor4
173Tensor<[3]>,
Tensor<[3]>,
ttnn.multiplyaten::mul.Tensor4
174Tensor<[1,2048,768]>,
Tensor<[1,2048,768]>,
ttnn.multiplyaten::gelu4
175Tensor<[1,1,1,2048]>,
Tensor<[1,1,1,2048]>,
ttnn.multiplyaten::mul.Tensor4
176Tensor<[2048,256]>,
Tensor<[2048,256]>,
ttnn.multiplyaten::mul.Tensor4
177Tensor<[2048,1280]>,
Tensor<[2048,1280]>,
ttnn.multiplyaten::mul.Tensor4
178Tensor<[256,768]>,
Tensor<[256,768]>,
ttnn.multiplyaten::mul.Tensor4
179Tensor<[2048,768]>,
Tensor<[2048,768]>,
ttnn.multiplyaten::mul.Tensor4
180Tensor<[1,256,56,56]>,
Tensor<[1,256,56,56]>,
ttnn.multiplyaten::mul.Tensor4
181Tensor<[1024]>,
Tensor<[1024]>,
ttnn.multiplyaten::mul.Tensor4
182Tensor<[1,1024,14,14]>,
Tensor<[1,1024,14,14]>,
ttnn.multiplyaten::mul.Tensor4
183Tensor<[1,512,14,14]>,
Tensor<[1,512,14,14]>,
ttnn.multiplyaten::mul.Tensor4
184Tensor<[1,2048,7,7]>,
Tensor<[1,2048,7,7]>,
ttnn.multiplyaten::mul.Tensor4
185Tensor<[12]>,
Tensor<[12]>,
ttnn.multiplyaten::arange4
186Tensor<[1,201,3072]>,
Tensor<[1,201,3072]>,
ttnn.multiplyaten::gelu4
187Tensor<[1,1536]>,
Tensor<[1,1536]>,
ttnn.multiplyaten::gelu4
188Tensor<[1,1,1,201]>,
Tensor<[1,1,1,201]>,
ttnn.multiplyaten::mul.Tensor4
189Tensor<[1,201,768]>,
Tensor<[1,201,768]>,
ttnn.multiplyaten::mul.Tensor4
190Tensor<[201,768]>,
Tensor<[201,768]>,
ttnn.multiplyaten::mul.Tensor4
191Tensor<[201,3072]>,
Tensor<[201,3072]>,
ttnn.multiplyaten::mul.Tensor4
192Tensor<[1536]>,
Tensor<[1536]>,
ttnn.multiplyaten::mul.Tensor4
193Tensor<[1,3129]>,
Tensor<[1,3129]>,
ttnn.multiplyaten::mul.Tensor4
194Tensor<[3129]>,
Tensor<[3129]>,
ttnn.multiplyaten::mul.Tensor4
195Tensor<[1,128]>,
Tensor<[1,128]>,
ttnn.multiplyaten::mul.Tensor4
196Tensor<[10]>,
Tensor<[10]>,
ttnn.multiplyaten::mul.Tensor4
197Tensor<[19]>,
Tensor<[19]>,
ttnn.multiplyaten::arange4
198Tensor<[1,19,4096]>,
Tensor<[1,19,4096]>,
ttnn.multiplyaten::gelu4
199Tensor<[1,19,1024]>,
Tensor<[1,19,1024]>,
ttnn.multiplyaten::mul.Tensor4
200Tensor<[19,1024]>,
Tensor<[19,1024]>,
ttnn.multiplyaten::mul.Tensor4
201Tensor<[19,4096]>,
Tensor<[19,4096]>,
ttnn.multiplyaten::mul.Tensor4
202Tensor<[4096]>,
Tensor<[4096]>,
ttnn.multiplyaten::mul.Tensor4
203Tensor<[14]>,
Tensor<[14]>,
ttnn.multiplyaten::mul.Tensor4
204Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
ttnn.multiplyaten::mul.Tensor4
205Tensor<[24]>,
Tensor<[24]>,
ttnn.multiplyaten::mul.Tensor4
206Tensor<[1,24,56,56]>,
Tensor<[1,24,56,56]>,
ttnn.multiplyaten::mul.Tensor4
207Tensor<[1,40,56,56]>,
Tensor<[1,40,56,56]>,
ttnn.multiplyaten::mul.Tensor4
208Tensor<[68]>,
Tensor<[68]>,
ttnn.multiplyaten::mul.Tensor4
209Tensor<[1,68,56,56]>,
Tensor<[1,68,56,56]>,
ttnn.multiplyaten::mul.Tensor4
210Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
ttnn.multiplyaten::mul.Tensor4
211Tensor<[28]>,
Tensor<[28]>,
ttnn.multiplyaten::mul.Tensor4
212Tensor<[1,28,28,28]>,
Tensor<[1,28,28,28]>,
ttnn.multiplyaten::mul.Tensor4
213Tensor<[46]>,
Tensor<[46]>,
ttnn.multiplyaten::mul.Tensor4
214Tensor<[1,46,28,28]>,
Tensor<[1,46,28,28]>,
ttnn.multiplyaten::mul.Tensor4
215Tensor<[78]>,
Tensor<[78]>,
ttnn.multiplyaten::mul.Tensor4
216Tensor<[1,78,28,28]>,
Tensor<[1,78,28,28]>,
ttnn.multiplyaten::mul.Tensor4
217Tensor<[134]>,
Tensor<[134]>,
ttnn.multiplyaten::mul.Tensor4
218Tensor<[1,134,28,28]>,
Tensor<[1,134,28,28]>,
ttnn.multiplyaten::mul.Tensor4
219Tensor<[20]>,
Tensor<[20]>,
ttnn.multiplyaten::mul.Tensor4
220Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
ttnn.multiplyaten::mul.Tensor4
221Tensor<[34]>,
Tensor<[34]>,
ttnn.multiplyaten::mul.Tensor4
222Tensor<[1,34,28,28]>,
Tensor<[1,34,28,28]>,
ttnn.multiplyaten::mul.Tensor4
223Tensor<[58]>,
Tensor<[58]>,
ttnn.multiplyaten::mul.Tensor4
224Tensor<[1,58,28,28]>,
Tensor<[1,58,28,28]>,
ttnn.multiplyaten::mul.Tensor4
225Tensor<[98]>,
Tensor<[98]>,
ttnn.multiplyaten::mul.Tensor4
226Tensor<[1,98,28,28]>,
Tensor<[1,98,28,28]>,
ttnn.multiplyaten::mul.Tensor4
227Tensor<[168]>,
Tensor<[168]>,
ttnn.multiplyaten::mul.Tensor4
228Tensor<[1,168,28,28]>,
Tensor<[1,168,28,28]>,
ttnn.multiplyaten::mul.Tensor4
229Tensor<[1,320,28,28]>,
Tensor<[1,320,28,28]>,
ttnn.multiplyaten::mul.Tensor4
230Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
ttnn.multiplyaten::mul.Tensor4
231Tensor<[1,68,14,14]>,
Tensor<[1,68,14,14]>,
ttnn.multiplyaten::mul.Tensor4
232Tensor<[116]>,
Tensor<[116]>,
ttnn.multiplyaten::mul.Tensor4
233Tensor<[1,116,14,14]>,
Tensor<[1,116,14,14]>,
ttnn.multiplyaten::mul.Tensor4
234Tensor<[196]>,
Tensor<[196]>,
ttnn.multiplyaten::mul.Tensor4
235Tensor<[1,196,14,14]>,
Tensor<[1,196,14,14]>,
ttnn.multiplyaten::mul.Tensor4
236Tensor<[334]>,
Tensor<[334]>,
ttnn.multiplyaten::mul.Tensor4
237Tensor<[1,334,14,14]>,
Tensor<[1,334,14,14]>,
ttnn.multiplyaten::mul.Tensor4
238Tensor<[1,640,14,14]>,
Tensor<[1,640,14,14]>,
ttnn.multiplyaten::mul.Tensor4
239Tensor<[1,160,7,7]>,
Tensor<[1,160,7,7]>,
ttnn.multiplyaten::mul.Tensor4
240Tensor<[272]>,
Tensor<[272]>,
ttnn.multiplyaten::mul.Tensor4
241Tensor<[1,272,7,7]>,
Tensor<[1,272,7,7]>,
ttnn.multiplyaten::mul.Tensor4
242Tensor<[462]>,
Tensor<[462]>,
ttnn.multiplyaten::mul.Tensor4
243Tensor<[1,462,7,7]>,
Tensor<[1,462,7,7]>,
ttnn.multiplyaten::mul.Tensor4
244Tensor<[1,1024,7,7]>,
Tensor<[1,1024,7,7]>,
ttnn.multiplyaten::mul.Tensor4
245Tensor<[1,32,512,512]>,
Tensor<[1,32,512,512]>,
ttnn.multiplyaten::leaky_relu4
246Tensor<[1,64,256,256]>,
Tensor<[1,64,256,256]>,
ttnn.multiplyaten::leaky_relu4
247Tensor<[1,32,256,256]>,
Tensor<[1,32,256,256]>,
ttnn.multiplyaten::leaky_relu4
248Tensor<[1,128,128,128]>,
Tensor<[1,128,128,128]>,
ttnn.multiplyaten::leaky_relu4
249Tensor<[1,64,128,128]>,
Tensor<[1,64,128,128]>,
ttnn.multiplyaten::leaky_relu4
250Tensor<[1,256,64,64]>,
Tensor<[1,256,64,64]>,
ttnn.multiplyaten::leaky_relu4
251Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64]>,
ttnn.multiplyaten::leaky_relu4
252Tensor<[1,512,32,32]>,
Tensor<[1,512,32,32]>,
ttnn.multiplyaten::leaky_relu4
253Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32]>,
ttnn.multiplyaten::leaky_relu4
254Tensor<[1,1024,16,16]>,
Tensor<[1,1024,16,16]>,
ttnn.multiplyaten::leaky_relu4
255Tensor<[1,512,16,16]>,
Tensor<[1,512,16,16]>,
ttnn.multiplyaten::leaky_relu4
256Tensor<[1,256,16,16]>,
Tensor<[1,256,16,16]>,
ttnn.multiplyaten::leaky_relu4
257Tensor<[1,128,32,32]>,
Tensor<[1,128,32,32]>,
ttnn.multiplyaten::leaky_relu4
258Tensor<[16,32,32]>,
Tensor<[16,32,32]>,
ttnn.multiplyaten::baddbmm4
259Tensor<[1,32,1536]>,
Tensor<[1,32,1536]>,
ttnn.multiplyaten::mul.Tensor4
260Tensor<[1,32]>,
Tensor<[1,32]>,
ttnn.multiplyaten::mul.Tensor4
261Tensor<[1,16,32]>,
Tensor<[1,16,32]>,
ttnn.multiplyaten::mul.Tensor4
262Tensor<[32,4608]>,
Tensor<[32,4608]>,
ttnn.multiplyaten::mul.Tensor4
263Tensor<[4608]>,
Tensor<[4608]>,
ttnn.multiplyaten::mul.Tensor4
264Tensor<[32,1536]>,
Tensor<[32,1536]>,
ttnn.multiplyaten::mul.Tensor4
265Tensor<[32,6144]>,
Tensor<[32,6144]>,
ttnn.multiplyaten::mul.Tensor4
266Tensor<[6144]>,
Tensor<[6144]>,
ttnn.multiplyaten::mul.Tensor4
267Tensor<[1,32,6144]>,
Tensor<[1,32,6144]>,
ttnn.multiplyaten::mul.Tensor4
268Tensor<[1,16,3072]>,
Tensor<[1,16,3072]>,
ttnn.multiplyaten::gelu4
269Tensor<[1,12,16,64]>,
Tensor<[1,12,16,64]>,
ttnn.multiplyaten::mul.Scalar4
270Tensor<[1,12,64,16]>,
Tensor<[1,12,64,16]>,
ttnn.multiplyaten::mul.Scalar4
271Tensor<[1,16,768]>,
Tensor<[1,16,768]>,
ttnn.multiplyaten::mul.Tensor4
272Tensor<[16,768]>,
Tensor<[16,768]>,
ttnn.multiplyaten::mul.Tensor4
273Tensor<[16,3072]>,
Tensor<[16,3072]>,
ttnn.multiplyaten::mul.Tensor4
274Tensor<[1,64,224,224]>,
Tensor<[1,64,224,224]>,
ttnn.multiplyaten::mul.Tensor4
275Tensor<[1,128,112,112]>,
Tensor<[1,128,112,112]>,
ttnn.multiplyaten::mul.Tensor4
276Tensor<[30]>,
Tensor<[30]>,
ttnn.multiplyaten::arange4
277Tensor<[60]>,
Tensor<[60]>,
ttnn.multiplyaten::arange4
278Tensor<[80]>,
Tensor<[80]>,
ttnn.multiplyaten::arange4
279Tensor<[120]>,
Tensor<[120]>,
ttnn.multiplyaten::arange4
280Tensor<[240]>,
Tensor<[240]>,
ttnn.multiplyaten::arange4
281Tensor<[480]>,
Tensor<[480]>,
ttnn.multiplyaten::arange4
282Tensor<[1,19200,256]>,
Tensor<[1,19200,256]>,
ttnn.multiplyaten::gelu4
283Tensor<[1,4800,512]>,
Tensor<[1,4800,512]>,
ttnn.multiplyaten::gelu4
284Tensor<[1,1200,1280]>,
Tensor<[1,1200,1280]>,
ttnn.multiplyaten::gelu4
285Tensor<[1,300,2048]>,
Tensor<[1,300,2048]>,
ttnn.multiplyaten::gelu4
286Tensor<[1,19200,64]>,
Tensor<[1,19200,64]>,
ttnn.multiplyaten::mul.Tensor4
287Tensor<[19200,64]>,
Tensor<[19200,64]>,
ttnn.multiplyaten::mul.Tensor4
288Tensor<[1,300,64]>,
Tensor<[1,300,64]>,
ttnn.multiplyaten::mul.Tensor4
289Tensor<[300,64]>,
Tensor<[300,64]>,
ttnn.multiplyaten::mul.Tensor4
290Tensor<[19200,256]>,
Tensor<[19200,256]>,
ttnn.multiplyaten::mul.Tensor4
291Tensor<[1,4800,128]>,
Tensor<[1,4800,128]>,
ttnn.multiplyaten::mul.Tensor4
292Tensor<[4800,128]>,
Tensor<[4800,128]>,
ttnn.multiplyaten::mul.Tensor4
293Tensor<[1,300,128]>,
Tensor<[1,300,128]>,
ttnn.multiplyaten::mul.Tensor4
294Tensor<[300,128]>,
Tensor<[300,128]>,
ttnn.multiplyaten::mul.Tensor4
295Tensor<[4800,512]>,
Tensor<[4800,512]>,
ttnn.multiplyaten::mul.Tensor4
296Tensor<[1,1200,320]>,
Tensor<[1,1200,320]>,
ttnn.multiplyaten::mul.Tensor4
297Tensor<[1200,320]>,
Tensor<[1200,320]>,
ttnn.multiplyaten::mul.Tensor4
298Tensor<[1,300,320]>,
Tensor<[1,300,320]>,
ttnn.multiplyaten::mul.Tensor4
299Tensor<[300,320]>,
Tensor<[300,320]>,
ttnn.multiplyaten::mul.Tensor4
300Tensor<[1200,1280]>,
Tensor<[1200,1280]>,
ttnn.multiplyaten::mul.Tensor4
301Tensor<[1,300,512]>,
Tensor<[1,300,512]>,
ttnn.multiplyaten::mul.Tensor4
302Tensor<[300,512]>,
Tensor<[300,512]>,
ttnn.multiplyaten::mul.Tensor4
303Tensor<[300,2048]>,
Tensor<[300,2048]>,
ttnn.multiplyaten::mul.Tensor4
304Tensor<[1,64,30,40]>,
Tensor<[1,64,30,40]>,
ttnn.multiplyaten::mul.Tensor4
305Tensor<[1,32,30,40]>,
Tensor<[1,32,30,40]>,
ttnn.multiplyaten::mul.Tensor4
306Tensor<[1,64,60,80]>,
Tensor<[1,64,60,80]>,
ttnn.multiplyaten::mul.Tensor4
307Tensor<[1,32,60,80]>,
Tensor<[1,32,60,80]>,
ttnn.multiplyaten::mul.Tensor4
308Tensor<[1,64,120,160]>,
Tensor<[1,64,120,160]>,
ttnn.multiplyaten::mul.Tensor4
309Tensor<[1,32,120,160]>,
Tensor<[1,32,120,160]>,
ttnn.multiplyaten::mul.Tensor4
310Tensor<[1,64,240,320]>,
Tensor<[1,64,240,320]>,
ttnn.multiplyaten::mul.Tensor4
311Tensor<[1,64,480,640]>,
Tensor<[1,64,480,640]>,
ttnn.multiplyaten::mul.Tensor4
312Tensor<[1,1,480,640]>,
Tensor<[1,1,480,640]>,
ttnn.multiplyaten::mul.Tensor4
313Tensor<[1,197,3072]>,
Tensor<[1,197,3072]>,
ttnn.multiplyaten::gelu4
314Tensor<[1,12,197,64]>,
Tensor<[1,12,197,64]>,
ttnn.multiplyaten::mul.Scalar4
315Tensor<[1,12,64,197]>,
Tensor<[1,12,64,197]>,
ttnn.multiplyaten::mul.Scalar4
316Tensor<[1,197,768]>,
Tensor<[1,197,768]>,
ttnn.multiplyaten::mul.Tensor4
317Tensor<[197,768]>,
Tensor<[197,768]>,
ttnn.multiplyaten::mul.Tensor4
318Tensor<[197,3072]>,
Tensor<[197,3072]>,
ttnn.multiplyaten::mul.Tensor4
319Tensor<[1,16384,128]>,
Tensor<[1,16384,128]>,
ttnn.multiplyaten::gelu4
320Tensor<[1,4096,256]>,
Tensor<[1,4096,256]>,
ttnn.multiplyaten::gelu4
321Tensor<[1,256,1024]>,
Tensor<[1,256,1024]>,
ttnn.multiplyaten::gelu4
322Tensor<[1,16384,32]>,
Tensor<[1,16384,32]>,
ttnn.multiplyaten::mul.Tensor4
323Tensor<[16384,32]>,
Tensor<[16384,32]>,
ttnn.multiplyaten::mul.Tensor4
324Tensor<[1,256,32]>,
Tensor<[1,256,32]>,
ttnn.multiplyaten::mul.Tensor4
325Tensor<[256,32]>,
Tensor<[256,32]>,
ttnn.multiplyaten::mul.Tensor4
326Tensor<[16384,128]>,
Tensor<[16384,128]>,
ttnn.multiplyaten::mul.Tensor4
327Tensor<[1,4096,64]>,
Tensor<[1,4096,64]>,
ttnn.multiplyaten::mul.Tensor4
328Tensor<[4096,64]>,
Tensor<[4096,64]>,
ttnn.multiplyaten::mul.Tensor4
329Tensor<[1,256,64]>,
Tensor<[1,256,64]>,
ttnn.multiplyaten::mul.Tensor4
330Tensor<[256,64]>,
Tensor<[256,64]>,
ttnn.multiplyaten::mul.Tensor4
331Tensor<[4096,256]>,
Tensor<[4096,256]>,
ttnn.multiplyaten::mul.Tensor4
332Tensor<[1,1024,160]>,
Tensor<[1,1024,160]>,
ttnn.multiplyaten::mul.Tensor4
333Tensor<[1024,160]>,
Tensor<[1024,160]>,
ttnn.multiplyaten::mul.Tensor4
334Tensor<[1,256,160]>,
Tensor<[1,256,160]>,
ttnn.multiplyaten::mul.Tensor4
335Tensor<[256,160]>,
Tensor<[256,160]>,
ttnn.multiplyaten::mul.Tensor4
336Tensor<[256,1024]>,
Tensor<[256,1024]>,
ttnn.multiplyaten::mul.Tensor4
337Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
ttnn.multiplyaten::mul.Tensor4
338Tensor<[1,7,18176]>,
Tensor<[1,7,18176]>,
ttnn.multiplyaten::gelu4
339Tensor<[1,71,7,64]>,
Tensor<[1,71,7,64]>,
ttnn.multiplyaten::mul.Scalar4
340Tensor<[1,1,64,7]>,
Tensor<[1,1,64,7]>,
ttnn.multiplyaten::mul.Scalar4
341Tensor<[7,7]>,
Tensor<[7,7]>,
ttnn.multiplyaten::mul.Tensor5
342Tensor<[1,7,64]>,
Tensor<[1,7,64]>,
ttnn.multiplyaten::mul.Tensor4
343Tensor<[1,7,4544]>,
Tensor<[1,7,4544]>,
ttnn.multiplyaten::mul.Tensor4
344Tensor<[1,1,7,64]>,
Tensor<[1,1,7,64]>,
ttnn.multiplyaten::mul.Tensor5
345Tensor<[1,16,112,112]>,
Tensor<[1,16,112,112]>,
ttnn.multiplyaten::mul.Tensor4
346Tensor<[96]>,
Tensor<[96]>,
ttnn.multiplyaten::mul.Tensor4
347Tensor<[1,96,112,112]>,
Tensor<[1,96,112,112]>,
ttnn.multiplyaten::mul.Tensor4
348Tensor<[1,96,56,56]>,
Tensor<[1,96,56,56]>,
ttnn.multiplyaten::mul.Tensor4
349Tensor<[144]>,
Tensor<[144]>,
ttnn.multiplyaten::mul.Tensor4
350Tensor<[1,144,56,56]>,
Tensor<[1,144,56,56]>,
ttnn.multiplyaten::mul.Tensor4
351Tensor<[1,144,28,28]>,
Tensor<[1,144,28,28]>,
ttnn.multiplyaten::mul.Tensor4
352Tensor<[1,32,28,28]>,
Tensor<[1,32,28,28]>,
ttnn.multiplyaten::mul.Tensor4
353Tensor<[1,192,28,28]>,
Tensor<[1,192,28,28]>,
ttnn.multiplyaten::mul.Tensor4
354Tensor<[1,192,14,14]>,
Tensor<[1,192,14,14]>,
ttnn.multiplyaten::mul.Tensor4
355Tensor<[1,64,14,14]>,
Tensor<[1,64,14,14]>,
ttnn.multiplyaten::mul.Tensor4
356Tensor<[384]>,
Tensor<[384]>,
ttnn.multiplyaten::mul.Tensor4
357Tensor<[1,384,14,14]>,
Tensor<[1,384,14,14]>,
ttnn.multiplyaten::mul.Tensor4
358Tensor<[1,96,14,14]>,
Tensor<[1,96,14,14]>,
ttnn.multiplyaten::mul.Tensor4
359Tensor<[576]>,
Tensor<[576]>,
ttnn.multiplyaten::mul.Tensor4
360Tensor<[1,576,14,14]>,
Tensor<[1,576,14,14]>,
ttnn.multiplyaten::mul.Tensor4
361Tensor<[1,576,7,7]>,
Tensor<[1,576,7,7]>,
ttnn.multiplyaten::mul.Tensor4
362Tensor<[960]>,
Tensor<[960]>,
ttnn.multiplyaten::mul.Tensor4
363Tensor<[1,960,7,7]>,
Tensor<[1,960,7,7]>,
ttnn.multiplyaten::mul.Tensor4
364Tensor<[1,320,7,7]>,
Tensor<[1,320,7,7]>,
ttnn.multiplyaten::mul.Tensor4
365Tensor<[1,1280,7,7]>,
Tensor<[1,1280,7,7]>,
ttnn.multiplyaten::mul.Tensor4
366Tensor<[1,12,12,64]>,
Tensor<[1,12,12,64]>,
ttnn.multiplyaten::mul.Scalar4
367Tensor<[1,12,64,12]>,
Tensor<[1,12,64,12]>,
ttnn.multiplyaten::mul.Scalar4
368Tensor<[1,12,128]>,
Tensor<[1,12,128]>,
ttnn.multiplyaten::mul.Tensor4
369Tensor<[12,768]>,
Tensor<[12,768]>,
ttnn.multiplyaten::mul.Tensor4
370Tensor<[1,12,768]>,
Tensor<[1,12,768]>,
ttnn.multiplyaten::mul.Tensor4
371Tensor<[12,3072]>,
Tensor<[12,3072]>,
ttnn.multiplyaten::mul.Tensor4
372Tensor<[1,12,3072]>,
Tensor<[1,12,3072]>,
ttnn.multiplyaten::mul.Tensor4
373Tensor<[12,2]>,
Tensor<[12,2]>,
ttnn.multiplyaten::mul.Tensor4
374Tensor<[1,12,9,64]>,
Tensor<[1,12,9,64]>,
ttnn.multiplyaten::mul.Scalar4
375Tensor<[1,12,64,9]>,
Tensor<[1,12,64,9]>,
ttnn.multiplyaten::mul.Scalar4
376Tensor<[1,9,128]>,
Tensor<[1,9,128]>,
ttnn.multiplyaten::mul.Tensor4
377Tensor<[9,768]>,
Tensor<[9,768]>,
ttnn.multiplyaten::mul.Tensor4
378Tensor<[1,9,768]>,
Tensor<[1,9,768]>,
ttnn.multiplyaten::mul.Tensor4
379Tensor<[9,3072]>,
Tensor<[9,3072]>,
ttnn.multiplyaten::mul.Tensor4
380Tensor<[1,9,3072]>,
Tensor<[1,9,3072]>,
ttnn.multiplyaten::mul.Tensor4
381Tensor<[9,128]>,
Tensor<[9,128]>,
ttnn.multiplyaten::mul.Tensor4
382Tensor<[9,30000]>,
Tensor<[9,30000]>,
ttnn.multiplyaten::mul.Tensor4
383Tensor<[30000]>,
Tensor<[30000]>,
ttnn.multiplyaten::mul.Tensor4
384Tensor<[1,16,9,128]>,
Tensor<[1,16,9,128]>,
ttnn.multiplyaten::mul.Scalar4
385Tensor<[1,16,128,9]>,
Tensor<[1,16,128,9]>,
ttnn.multiplyaten::mul.Scalar4
386Tensor<[9,2048]>,
Tensor<[9,2048]>,
ttnn.multiplyaten::mul.Tensor4
387Tensor<[1,9,2048]>,
Tensor<[1,9,2048]>,
ttnn.multiplyaten::mul.Tensor4
388Tensor<[9,8192]>,
Tensor<[9,8192]>,
ttnn.multiplyaten::mul.Tensor4
389Tensor<[8192]>,
Tensor<[8192]>,
ttnn.multiplyaten::mul.Tensor4
390Tensor<[1,9,8192]>,
Tensor<[1,9,8192]>,
ttnn.multiplyaten::mul.Tensor4
391Tensor<[1,16,9,64]>,
Tensor<[1,16,9,64]>,
ttnn.multiplyaten::mul.Scalar4
392Tensor<[1,16,64,9]>,
Tensor<[1,16,64,9]>,
ttnn.multiplyaten::mul.Scalar4
393Tensor<[9,1024]>,
Tensor<[9,1024]>,
ttnn.multiplyaten::mul.Tensor4
394Tensor<[1,9,1024]>,
Tensor<[1,9,1024]>,
ttnn.multiplyaten::mul.Tensor4
395Tensor<[9,4096]>,
Tensor<[9,4096]>,
ttnn.multiplyaten::mul.Tensor4
396Tensor<[1,9,4096]>,
Tensor<[1,9,4096]>,
ttnn.multiplyaten::mul.Tensor4
397Tensor<[1,64,9,64]>,
Tensor<[1,64,9,64]>,
ttnn.multiplyaten::mul.Scalar4
398Tensor<[1,64,64,9]>,
Tensor<[1,64,64,9]>,
ttnn.multiplyaten::mul.Scalar4
399Tensor<[9,16384]>,
Tensor<[9,16384]>,
ttnn.multiplyaten::mul.Tensor4
400Tensor<[16384]>,
Tensor<[16384]>,
ttnn.multiplyaten::mul.Tensor4
401Tensor<[1,9,16384]>,
Tensor<[1,9,16384]>,
ttnn.multiplyaten::mul.Tensor4
402Tensor<[1,2]>,
Tensor<[1,2]>,
ttnn.multiplyaten::mul.Tensor4
403Tensor<[1,12,14,64]>,
Tensor<[1,12,14,64]>,
ttnn.multiplyaten::mul.Scalar4
404Tensor<[1,12,64,14]>,
Tensor<[1,12,64,14]>,
ttnn.multiplyaten::mul.Scalar4
405Tensor<[1,14,128]>,
Tensor<[1,14,128]>,
ttnn.multiplyaten::mul.Tensor4
406Tensor<[14,768]>,
Tensor<[14,768]>,
ttnn.multiplyaten::mul.Tensor4
407Tensor<[1,14,768]>,
Tensor<[1,14,768]>,
ttnn.multiplyaten::mul.Tensor4
408Tensor<[14,3072]>,
Tensor<[14,3072]>,
ttnn.multiplyaten::mul.Tensor4
409Tensor<[1,14,3072]>,
Tensor<[1,14,3072]>,
ttnn.multiplyaten::mul.Tensor4
410Tensor<[14,2]>,
Tensor<[14,2]>,
ttnn.multiplyaten::mul.Tensor4
411Tensor<[1,12,50,64]>,
Tensor<[1,12,50,64]>,
ttnn.multiplyaten::mul.Scalar4
412Tensor<[1,12,64,50]>,
Tensor<[1,12,64,50]>,
ttnn.multiplyaten::mul.Scalar4
413Tensor<[2,8,7,64]>,
Tensor<[2,8,7,64]>,
ttnn.multiplyaten::mul.Scalar4
414Tensor<[2,8,64,7]>,
Tensor<[2,8,64,7]>,
ttnn.multiplyaten::mul.Scalar4
415Tensor<[1,50,768]>,
Tensor<[1,50,768]>,
ttnn.multiplyaten::mul.Tensor4
416Tensor<[50,768]>,
Tensor<[50,768]>,
ttnn.multiplyaten::mul.Tensor4
417Tensor<[50,3072]>,
Tensor<[50,3072]>,
ttnn.multiplyaten::mul.Tensor4
418Tensor<[1,50,3072]>,
Tensor<[1,50,3072]>,
ttnn.multiplyaten::mul.Tensor4
419Tensor<[2,7,512]>,
Tensor<[2,7,512]>,
ttnn.multiplyaten::mul.Tensor4
420Tensor<[14,512]>,
Tensor<[14,512]>,
ttnn.multiplyaten::mul.Tensor4
421Tensor<[14,2048]>,
Tensor<[14,2048]>,
ttnn.multiplyaten::mul.Tensor4
422Tensor<[2,7,2048]>,
Tensor<[2,7,2048]>,
ttnn.multiplyaten::mul.Tensor4
423Tensor<[2,1]>,
Tensor<[2,1]>,
ttnn.multiplyaten::mul.Tensor4
424Tensor<[27]>,
Tensor<[27]>,
ttnn.multiplyaten::arange4
425Tensor<[197]>,
Tensor<[197]>,
ttnn.multiplyaten::arange4
426Tensor<[1,197,4096]>,
Tensor<[1,197,4096]>,
ttnn.multiplyaten::gelu4
427Tensor<[1,197,1024]>,
Tensor<[1,197,1024]>,
ttnn.multiplyaten::mul.Tensor4
428Tensor<[197,1024]>,
Tensor<[197,1024]>,
ttnn.multiplyaten::mul.Tensor4
429Tensor<[1,16,27,27]>,
Tensor<[1,16,27,27]>,
ttnn.multiplyaten::mul.Tensor4
430Tensor<[196,196]>,
Tensor<[196,196]>,
ttnn.multiplyaten::mul.Tensor4
431Tensor<[197,4096]>,
Tensor<[197,4096]>,
ttnn.multiplyaten::mul.Tensor4
432Tensor<[1,1024]>,
Tensor<[1,1024]>,
ttnn.multiplyaten::mul.Tensor4
433Tensor<[1,12,27,27]>,
Tensor<[1,12,27,27]>,
ttnn.multiplyaten::mul.Tensor4
434Tensor<[1,64]>,
Tensor<[1,64]>,
ttnn.multiplyaten::mul.Tensor4
435Tensor<[1,12]>,
Tensor<[1,12]>,
ttnn.multiplyaten::mul.Tensor4
436Tensor<[1,784]>,
Tensor<[1,784]>,
ttnn.multiplyaten::mul.Tensor4
437Tensor<[784]>,
Tensor<[784]>,
ttnn.multiplyaten::mul.Tensor4

stablehlo.negate::ttnn.neg

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,32,32,64]> | ttnn.neg | aten::neg | 5 |
| 1 | Tensor<[19]> | ttnn.neg | aten::neg | 5 |
| 2 | Tensor<[1,71,7,32]> | ttnn.neg | aten::neg | 5 |
| 3 | Tensor<[1,1,7,32]> | ttnn.neg | aten::neg | 5 |

stablehlo.not::ttnn.not

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,23,40]> | ttnn.not | aten::bitwise_not | 5 |

stablehlo.power::ttnn.pow

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,32,4096]>, Tensor<[1,32,4096]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 1 | Tensor<[1,7,3072]>, Tensor<[1,7,3072]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 2 | Tensor<[128]>, Tensor<[128]> | ttnn.pow | aten::pow.Scalar | 4 |
| 3 | Tensor<[16]>, Tensor<[16]> | ttnn.pow | aten::pow.Tensor_Tensor | 4 |
| 4 | Tensor<[1,12,3072]>, Tensor<[1,12,3072]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 5 | Tensor<[1,9,3072]>, Tensor<[1,9,3072]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 6 | Tensor<[1,9,128]>, Tensor<[1,9,128]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 7 | Tensor<[1,9,8192]>, Tensor<[1,9,8192]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 8 | Tensor<[1,9,4096]>, Tensor<[1,9,4096]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 9 | Tensor<[1,9,16384]>, Tensor<[1,9,16384]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 10 | Tensor<[1,14,3072]>, Tensor<[1,14,3072]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 11 | Tensor<[1,512]>, Tensor<[1,512]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 12 | Tensor<[1,1]>, Tensor<[1,1]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 13 | Tensor<[2,512]>, Tensor<[2,512]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |
| 14 | Tensor<[2,1]>, Tensor<[2,1]> | ttnn.pow | aten::pow.Tensor_Scalar | 4 |

stablehlo.reduce_stablehlo.add::ttnn.sum

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,32,32,32]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 1 | Tensor<[1,32,4096]>, Scalar, dim: [2] | ttnn.sum | aten::mean.dim | 4 |
| 2 | Tensor<[1,12,7,7]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 3 | Tensor<[1,512,256]>, Scalar, dim: [2] | ttnn.sum | aten::mean.dim | 4 |
| 4 | Tensor<[8,920,920]>, Scalar, dim: [2] | ttnn.sum | aten::_softmax | 4 |
| 5 | Tensor<[8,100,100]>, Scalar, dim: [2] | ttnn.sum | aten::_softmax | 4 |
| 6 | Tensor<[8,100,920]>, Scalar, dim: [2] | ttnn.sum | aten::_softmax | 4 |
| 7 | Tensor<[1,12,10,10]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 8 | Tensor<[1,8,4096,4096]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 9 | Tensor<[1,8,4096,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 10 | Tensor<[1,8,1024,1024]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 11 | Tensor<[1,8,1024,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 12 | Tensor<[1,8,256,256]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 13 | Tensor<[1,8,256,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 14 | Tensor<[1,8,64,64]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 15 | Tensor<[1,8,64,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 16 | Tensor<[1,12,25,25]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 17 | Tensor<[1,3,1445,1445]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 18 | Tensor<[1,512,7,7]>, Scalar, dim: [2, | ttnn.sum | aten::mean.dim | 4 |
| 19 | Tensor<[1,12,8,8]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 20 | Tensor<[1,8,256,2048]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 21 | Tensor<[1,8,2048,256]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 22 | Tensor<[1,2048,7,7]>, Scalar, dim: [2, | ttnn.sum | aten::mean.dim | 4 |
| 23 | Tensor<[1,12,201,201]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 24 | Tensor<[1,12,16]>, Scalar, dim: [1] | ttnn.sum | aten::sum.dim_IntList | 4 |
| 25 | Tensor<[1,12,16]>, Scalar, dim: [2] | ttnn.sum | aten::sum.dim_IntList | 4 |
| 26 | Tensor<[1,10]>, Scalar, dim: [1] | ttnn.sum | aten::sum.dim_IntList | 5 |
| 27 | Tensor<[16,19,19]>, Scalar, dim: [2] | ttnn.sum | aten::_softmax | 4 |
| 28 | Tensor<[19]>, Scalar, dim: [0] | ttnn.sum | aten::sum | 4 |
| 29 | Tensor<[19,256008]>, Scalar, dim: [1] | ttnn.sum | aten::sum.dim_IntList | 5 |
| 30 | Tensor<[1,1024,7,7]>, Scalar, dim: [2, | ttnn.sum | aten::mean.dim | 4 |
| 31 | Tensor<[1,16,32,32]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 32 | Tensor<[1,12,16,16]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 33 | Tensor<[1,1,19200,300]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 34 | Tensor<[1,2,4800,300]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 35 | Tensor<[1,5,1200,300]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 36 | Tensor<[1,8,300,300]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 37 | Tensor<[1,12,197,197]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 38 | Tensor<[1,1,16384,256]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 39 | Tensor<[1,2,4096,256]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 40 | Tensor<[1,5,1024,256]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 41 | Tensor<[1,71,7,7]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 42 | Tensor<[1,1280,7,7]>, Scalar, dim: [2, | ttnn.sum | aten::mean.dim | 4 |
| 43 | Tensor<[1,12,12,12]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 44 | Tensor<[1,12,9,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 45 | Tensor<[1,16,9,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 46 | Tensor<[1,64,9,9]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 47 | Tensor<[1,12,14,14]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 48 | Tensor<[1,12,50,50]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 49 | Tensor<[2,8,7,7]>, Scalar, dim: [3] | ttnn.sum | aten::_safe_softmax | 4 |
| 50 | Tensor<[1,512]>, Scalar, dim: [1] | ttnn.sum | aten::sum.dim_IntList | 4 |
| 51 | Tensor<[2,512]>, Scalar, dim: [1] | ttnn.sum | aten::sum.dim_IntList | 4 |
| 52 | Tensor<[1,16,197,197]>, Scalar, dim: [3] | ttnn.sum | aten::_softmax | 4 |
| 53 | Tensor<[1,196,1024]>, Scalar, dim: [1] | ttnn.sum | aten::mean.dim | 4 |
| 54 | Tensor<[196,196,2]>, Scalar, dim: [2] | ttnn.sum | aten::sum.dim_IntList | 4 |
| 55 | Tensor<[1,196,768]>, Scalar, dim: [1] | ttnn.sum | aten::mean.dim | 4 |

stablehlo.reduce_stablehlo.and::ttnn.?

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,32,32,32]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 1 | Tensor<[1,12,7,7]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 2 | Tensor<[1,12,10,10]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 3 | Tensor<[1,8,4096,4096]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 4 | Tensor<[1,8,4096,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 5 | Tensor<[1,8,1024,1024]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 6 | Tensor<[1,8,1024,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 7 | Tensor<[1,8,256,256]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 8 | Tensor<[1,8,256,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 9 | Tensor<[1,8,64,64]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 10 | Tensor<[1,8,64,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 11 | Tensor<[1,12,25,25]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 12 | Tensor<[1,3,1445,1445]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 13 | Tensor<[1,12,16,16]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 14 | Tensor<[1,12,197,197]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 15 | Tensor<[1,71,7,7]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 16 | Tensor<[1,12,12,12]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 17 | Tensor<[1,12,9,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 18 | Tensor<[1,16,9,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 19 | Tensor<[1,64,9,9]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 20 | Tensor<[1,12,14,14]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 21 | Tensor<[1,12,50,50]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |
| 22 | Tensor<[2,8,7,7]>, Scalar, dim: [3] | ttnn.? | aten::_safe_softmax | 4 |

stablehlo.reduce_stablehlo.maximum::ttnn.max

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,32,32,32]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 1 | Tensor<[1,12,7,7]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 2 | Tensor<[8,920,920]>, Scalar, dim: [2] | ttnn.max | aten::_softmax | 4 |
| 3 | Tensor<[8,100,100]>, Scalar, dim: [2] | ttnn.max | aten::_softmax | 4 |
| 4 | Tensor<[8,100,920]>, Scalar, dim: [2] | ttnn.max | aten::_softmax | 4 |
| 5 | Tensor<[1,12,10,10]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 6 | Tensor<[1,8,4096,4096]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 7 | Tensor<[1,8,4096,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 8 | Tensor<[1,8,1024,1024]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 9 | Tensor<[1,8,1024,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 10 | Tensor<[1,8,256,256]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 11 | Tensor<[1,8,256,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 12 | Tensor<[1,8,64,64]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 13 | Tensor<[1,8,64,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 14 | Tensor<[1,12,25,25]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 15 | Tensor<[1,3,1445,1445]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 16 | Tensor<[1,12,8,8]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 17 | Tensor<[1,8,256,2048]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 18 | Tensor<[1,8,2048,256]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 19 | Tensor<[1,12,201,201]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 20 | Tensor<[1]>, Scalar, dim: [0] | ttnn.max | aten::max | 4 |
| 21 | Tensor<[1,10]>, Scalar, dim: [1] | ttnn.max | aten::amax | 5 |
| 22 | Tensor<[16,19,19]>, Scalar, dim: [2] | ttnn.max | aten::_softmax | 4 |
| 23 | Tensor<[19,256008]>, Scalar, dim: [1] | ttnn.max | aten::amax | 5 |
| 24 | Tensor<[1,16,32,32]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 25 | Tensor<[1,12,16,16]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 26 | Tensor<[1,1,19200,300]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 27 | Tensor<[1,2,4800,300]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 28 | Tensor<[1,5,1200,300]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 29 | Tensor<[1,8,300,300]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 30 | Tensor<[1,12,197,197]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 31 | Tensor<[1,1,16384,256]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 32 | Tensor<[1,2,4096,256]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 33 | Tensor<[1,5,1024,256]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |
| 34 | Tensor<[1,71,7,7]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 35 | Tensor<[1,12,12,12]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 36 | Tensor<[1,12,9,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 37 | Tensor<[1,16,9,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 38 | Tensor<[1,64,9,9]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 39 | Tensor<[1,12,14,14]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 40 | Tensor<[1,12,50,50]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 41 | Tensor<[2,8,7,7]>, Scalar, dim: [3] | ttnn.max | aten::_safe_softmax | 4 |
| 42 | Tensor<[1,16,197,197]>, Scalar, dim: [3] | ttnn.max | aten::_softmax | 4 |

stablehlo.reduce_window_stablehlo.add::ttnn.avg_pool2d

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1,23,40]>, Scalar | ttnn.avg_pool2d | aten::cumsum | 4 |
| 1 | Tensor<[1,10]>, Scalar | ttnn.avg_pool2d | aten::cumsum | 4 |
| 2 | Tensor<[1,32]>, Scalar | ttnn.avg_pool2d | aten::cumsum | 4 |

stablehlo.remainder::ttnn.remainder

| | STABLE HLO Input Variations | ttnn op | Torch Name | Status |
|---|---|---|---|---|
| 0 | Tensor<[1]>, Tensor<[1]> | ttnn.remainder | aten::remainder.Scalar | 4 |

stablehlo.reshape::ttnn.reshape

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,32]>,
Tensor<[1,32,32,1]>,
ttnn.reshapeaten::_safe_softmax4
1Tensor<[1]>,
Scalar,
ttnn.reshapeaten::_safe_softmax4
2Tensor<[1,64,32]>,
Tensor<[1,64,32]>,
ttnn.reshapeaten::_unsafe_view5
3Tensor<[32,4096]>,
Tensor<[1,32,4096]>,
ttnn.reshapeaten::_unsafe_view5
4Tensor<[32,11008]>,
Tensor<[1,32,11008]>,
ttnn.reshapeaten::_unsafe_view5
5Tensor<[32,32000]>,
Tensor<[1,32,32000]>,
ttnn.reshapeaten::_unsafe_view5
6Scalar,
Tensor<[1]>,
ttnn.reshapeaten::arange4
7Tensor<[1,32]>,
Tensor<[1,32,1]>,
ttnn.reshapeaten::mean.dim4
8Tensor<[32]>,
Tensor<[32,1]>,
ttnn.reshapeaten::triu4
9Tensor<[32]>,
Tensor<[1,32]>,
ttnn.reshapeaten::triu4
10Tensor<[32,32]>,
Tensor<[1,32,32]>,
ttnn.reshapeaten::unsqueeze5
11Tensor<[1,32,32]>,
Tensor<[1,1,32,32]>,
ttnn.reshapeaten::unsqueeze5
12Tensor<[1,32]>,
Tensor<[1,1,32]>,
ttnn.reshapeaten::unsqueeze4
13Tensor<[1,1,32]>,
Tensor<[1,1,1,32]>,
ttnn.reshapeaten::unsqueeze4
14Tensor<[64]>,
Tensor<[1,64]>,
ttnn.reshapeaten::unsqueeze5
15Tensor<[1,64]>,
Tensor<[1,64,1]>,
ttnn.reshapeaten::unsqueeze5
16Tensor<[1,32,128]>,
Tensor<[1,1,32,128]>,
ttnn.reshapeaten::unsqueeze5
17Tensor<[1,64,1]>,
Tensor<[1,64,1]>,
ttnn.reshapeaten::view5
18Tensor<[1,1,32]>,
Tensor<[1,1,32]>,
ttnn.reshapeaten::view5
19Tensor<[1,32,4096]>,
Tensor<[32,4096]>,
ttnn.reshapeaten::view5
20Tensor<[1,32,4096]>,
Tensor<[1,32,32,128]>,
ttnn.reshapeaten::view5
21Tensor<[1,32,32,128]>,
Tensor<[32,32,128]>,
ttnn.reshapeaten::view5
22Tensor<[1,32,128,32]>,
Tensor<[32,128,32]>,
ttnn.reshapeaten::view5
23Tensor<[32,32,32]>,
Tensor<[1,32,32,32]>,
ttnn.reshapeaten::view5
24Tensor<[1,32,32,32]>,
Tensor<[32,32,32]>,
ttnn.reshapeaten::view5
25Tensor<[32,32,128]>,
Tensor<[1,32,32,128]>,
ttnn.reshapeaten::view5
26Tensor<[1,32,32,128]>,
Tensor<[1,32,4096]>,
ttnn.reshapeaten::view5
27Tensor<[1,32,11008]>,
Tensor<[32,11008]>,
ttnn.reshapeaten::view5
28Tensor<[1,12,7]>,
Tensor<[1,12,7,1]>,
ttnn.reshapeaten::_safe_softmax4
29Tensor<[7,2]>,
Tensor<[1,7,2]>,
ttnn.reshapeaten::_unsafe_view5
30Tensor<[1]>,
Tensor<[1,1]>,
ttnn.reshapeaten::index.Tensor4
31Tensor<[7]>,
Tensor<[1,7]>,
ttnn.reshapeaten::unsqueeze4
32Tensor<[7,7]>,
Tensor<[1,7,7]>,
ttnn.reshapeaten::unsqueeze5
33Tensor<[1,7,7]>,
Tensor<[1,1,7,7]>,
ttnn.reshapeaten::unsqueeze5
34Tensor<[1,7]>,
Tensor<[1,1,7]>,
ttnn.reshapeaten::unsqueeze4
35Tensor<[1,1,7]>,
Tensor<[1,1,1,7]>,
ttnn.reshapeaten::unsqueeze4
36Tensor<[1,7]>,
Tensor<[1,7]>,
ttnn.reshapeaten::view4
37Tensor<[7]>,
Tensor<[7,1]>,
ttnn.reshapeaten::view4
38Tensor<[1,7,768]>,
Tensor<[7,768]>,
ttnn.reshapeaten::view5
39Tensor<[7,2304]>,
Tensor<[1,7,2304]>,
ttnn.reshapeaten::view5
40Tensor<[1,7,768]>,
Tensor<[1,7,12,64]>,
ttnn.reshapeaten::view5
41Tensor<[1,12,7,64]>,
Tensor<[12,7,64]>,
ttnn.reshapeaten::view5
42Tensor<[1,12,64,7]>,
Tensor<[12,64,7]>,
ttnn.reshapeaten::view5
43Tensor<[12,7,7]>,
Tensor<[1,12,7,7]>,
ttnn.reshapeaten::view5
44Tensor<[1,12,7,7]>,
Tensor<[12,7,7]>,
ttnn.reshapeaten::view5
45Tensor<[12,7,64]>,
Tensor<[1,12,7,64]>,
ttnn.reshapeaten::view5
46Tensor<[1,7,12,64]>,
Tensor<[1,7,768]>,
ttnn.reshapeaten::view5
47Tensor<[7,768]>,
Tensor<[1,7,768]>,
ttnn.reshapeaten::view5
48Tensor<[7,3072]>,
Tensor<[1,7,3072]>,
ttnn.reshapeaten::view5
49Tensor<[1,7,3072]>,
Tensor<[7,3072]>,
ttnn.reshapeaten::view5
50Tensor<[1,7,768]>,
Tensor<[1,7,768]>,
ttnn.reshapeaten::view5
51Tensor<[128]>,
Tensor<[128,1,1]>,
ttnn.reshapeaten::convolution4
52Tensor<[512]>,
Tensor<[512,1,1]>,
ttnn.reshapeaten::convolution4
53Tensor<[19]>,
Tensor<[19,1,1]>,
ttnn.reshapeaten::convolution4
54Tensor<[38]>,
Tensor<[38,1,1]>,
ttnn.reshapeaten::convolution4
55Tensor<[32,1]>,
Tensor<[32,1,1]>,
ttnn.reshapeaten::unsqueeze5
56Tensor<[64]>,
Tensor<[64,1]>,
ttnn.reshapeaten::unsqueeze5
57Tensor<[64,1]>,
Tensor<[64,1,1]>,
ttnn.reshapeaten::unsqueeze5
58Tensor<[128]>,
Tensor<[128,1]>,
ttnn.reshapeaten::unsqueeze5
59Tensor<[128,1]>,
Tensor<[128,1,1]>,
ttnn.reshapeaten::unsqueeze5
60Tensor<[256]>,
Tensor<[256,1]>,
ttnn.reshapeaten::unsqueeze5
61Tensor<[256,1]>,
Tensor<[256,1,1]>,
ttnn.reshapeaten::unsqueeze5
62Tensor<[512]>,
Tensor<[512,1]>,
ttnn.reshapeaten::unsqueeze5
63Tensor<[512,1]>,
Tensor<[512,1,1]>,
ttnn.reshapeaten::unsqueeze5
64Tensor<[1,16,16,16,16,3]>,
Tensor<[1,256,768]>,
ttnn.reshapeaten::_unsafe_view5
65Tensor<[1024]>,
Tensor<[1024,1]>,
ttnn.reshapeaten::convolution4
66Tensor<[1,3,256,256]>,
Tensor<[1,3,16,16,16,16]>,
ttnn.reshapeaten::view4
67Tensor<[1,256,768]>,
Tensor<[256,768]>,
ttnn.reshapeaten::view5
68Tensor<[256,512]>,
Tensor<[1,256,512]>,
ttnn.reshapeaten::view5
69Tensor<[1,256,512]>,
Tensor<[256,512]>,
ttnn.reshapeaten::view5
70Tensor<[256,256]>,
Tensor<[1,256,256]>,
ttnn.reshapeaten::view5
71Tensor<[1,256,256]>,
Tensor<[256,256]>,
ttnn.reshapeaten::view5
72Tensor<[8,920]>,
Tensor<[8,920,1]>,
ttnn.reshapeaten::_softmax4
73Tensor<[8,100]>,
Tensor<[8,100,1]>,
ttnn.reshapeaten::_softmax4
74Tensor<[920,1,256]>,
Tensor<[920,1,256]>,
ttnn.reshapeaten::_unsafe_view5
75Tensor<[6,100,92]>,
Tensor<[6,1,100,92]>,
ttnn.reshapeaten::_unsafe_view5
76Tensor<[6,1,100,92]>,
Tensor<[6,100,92]>,
ttnn.reshapeaten::_unsafe_view5
77Tensor<[6,100,256]>,
Tensor<[6,1,100,256]>,
ttnn.reshapeaten::_unsafe_view5
78Tensor<[6,1,100,256]>,
Tensor<[6,100,256]>,
ttnn.reshapeaten::_unsafe_view5
79Tensor<[600,256]>,
Tensor<[6,1,100,256]>,
ttnn.reshapeaten::_unsafe_view5
80Tensor<[6,1,100,256]>,
Tensor<[600,256]>,
ttnn.reshapeaten::_unsafe_view5
81Tensor<[600,4]>,
Tensor<[6,1,100,4]>,
ttnn.reshapeaten::_unsafe_view5
82Tensor<[6,1,100,4]>,
Tensor<[600,4]>,
ttnn.reshapeaten::_unsafe_view5
83Tensor<[256]>,
Tensor<[256,1,1]>,
ttnn.reshapeaten::convolution4
84Tensor<[1,1]>,
Tensor<[1,1,1]>,
ttnn.reshapeaten::index.Tensor4
85Tensor<[1,1,1]>,
Tensor<[1,1,1,1]>,
ttnn.reshapeaten::index.Tensor4
86Tensor<[1,1,23,40]>,
Tensor<[1,1,23,40,1]>,
ttnn.reshapeaten::index.Tensor4
87Tensor<[100,1,256]>,
Tensor<[1,100,1,256]>,
ttnn.reshapeaten::repeat4
88Tensor<[1,100,1,256]>,
Tensor<[1,100,1,1,256]>,
ttnn.reshapeaten::repeat4
89Tensor<[1,100,1,1,256]>,
Tensor<[1,100,1,1,1,256]>,
ttnn.reshapeaten::repeat4
90Tensor<[1,100,1,1,1,256]>,
Tensor<[100,1,1,1,256]>,
ttnn.reshapeaten::repeat4
91Tensor<[100,1,1,1,256]>,
Tensor<[100,1,1,256]>,
ttnn.reshapeaten::repeat4
92Tensor<[100,1,1,256]>,
Tensor<[100,1,256]>,
ttnn.reshapeaten::repeat4
93Tensor<[1,3,720,1280]>,
Tensor<[3,720,1280]>,
ttnn.reshapeaten::select.int5
94Tensor<[1,720,1280]>,
Tensor<[720,1280]>,
ttnn.reshapeaten::select.int4
95Tensor<[1,1,23,40]>,
Tensor<[1,23,40]>,
ttnn.reshapeaten::select.int4
96Tensor<[1,1,100,4]>,
Tensor<[1,100,4]>,
ttnn.reshapeaten::select.int4
97Tensor<[1,1,100,92]>,
Tensor<[1,100,92]>,
ttnn.reshapeaten::select.int4
98Tensor<[3,720,1280]>,
Tensor<[1,3,720,1280]>,
ttnn.reshapeaten::select_scatter4
99Tensor<[720,1280]>,
Tensor<[1,720,1280]>,
ttnn.reshapeaten::select_scatter4
100Tensor<[1,23,40,64]>,
Tensor<[1,23,40,64,1]>,
ttnn.reshapeaten::stack5
101Tensor<[1,720,1280]>,
Tensor<[1,1,720,1280]>,
ttnn.reshapeaten::unsqueeze4
102Tensor<[23]>,
Tensor<[23,1]>,
ttnn.reshapeaten::unsqueeze4
103Tensor<[1,23,40]>,
Tensor<[1,23,40,1]>,
ttnn.reshapeaten::unsqueeze5
104Tensor<[100,256]>,
Tensor<[100,1,256]>,
ttnn.reshapeaten::unsqueeze5
105Tensor<[64]>,
Tensor<[1,64,1,1]>,
ttnn.reshapeaten::view5
106Tensor<[256]>,
Tensor<[1,256,1,1]>,
ttnn.reshapeaten::view5
107Tensor<[128]>,
Tensor<[1,128,1,1]>,
ttnn.reshapeaten::view5
108Tensor<[512]>,
Tensor<[1,512,1,1]>,
ttnn.reshapeaten::view5
109Tensor<[1024]>,
Tensor<[1,1024,1,1]>,
ttnn.reshapeaten::view5
110Tensor<[2048]>,
Tensor<[1,2048,1,1]>,
ttnn.reshapeaten::view5
111Tensor<[1,23,40,64,2]>,
Tensor<[1,23,40,128]>,
ttnn.reshapeaten::view5
112Tensor<[1,256,23,40]>,
Tensor<[1,256,920]>,
ttnn.reshapeaten::view5
113Tensor<[1,23,40]>,
Tensor<[1,920]>,
ttnn.reshapeaten::view4
114Tensor<[920,256,256]>,
Tensor<[920,256,256]>,
ttnn.reshapeaten::view5
115Tensor<[920,1,256]>,
Tensor<[920,8,32]>,
ttnn.reshapeaten::view5
116Tensor<[1,920]>,
Tensor<[1,1,1,920]>,
ttnn.reshapeaten::view5
117Tensor<[1,8,1,920]>,
Tensor<[8,1,920]>,
ttnn.reshapeaten::view5
118Tensor<[920,8,32]>,
Tensor<[920,256]>,
ttnn.reshapeaten::view5
119Tensor<[920,256]>,
Tensor<[920,1,256]>,
ttnn.reshapeaten::view5
120Tensor<[920,1,256]>,
Tensor<[920,256]>,
ttnn.reshapeaten::view5
121Tensor<[920,2048]>,
Tensor<[920,1,2048]>,
ttnn.reshapeaten::view5
122Tensor<[920,1,2048]>,
Tensor<[920,2048]>,
ttnn.reshapeaten::view5
123Tensor<[100,1,256]>,
Tensor<[100,256]>,
ttnn.reshapeaten::view5
124Tensor<[100,1,256]>,
Tensor<[100,8,32]>,
ttnn.reshapeaten::view5
125Tensor<[100,8,32]>,
Tensor<[100,256]>,
ttnn.reshapeaten::view5
126Tensor<[100,2048]>,
Tensor<[100,1,2048]>,
ttnn.reshapeaten::view5
127Tensor<[100,1,2048]>,
Tensor<[100,2048]>,
ttnn.reshapeaten::view5
128Tensor<[6,1,256,92]>,
Tensor<[6,256,92]>,
ttnn.reshapeaten::view5
129Tensor<[6,1,256,256]>,
Tensor<[6,256,256]>,
ttnn.reshapeaten::view5
130Tensor<[1,12,10]>,
Tensor<[1,12,10,1]>,
ttnn.reshapeaten::_safe_softmax4
131Tensor<[1,10]>,
Tensor<[1,1,10]>,
ttnn.reshapeaten::unsqueeze4
132Tensor<[1,1,10]>,
Tensor<[1,1,1,10]>,
ttnn.reshapeaten::unsqueeze4
133Tensor<[1,10,768]>,
Tensor<[10,768]>,
ttnn.reshapeaten::view5
134Tensor<[10,768]>,
Tensor<[1,10,768]>,
ttnn.reshapeaten::view5
135Tensor<[1,10,768]>,
Tensor<[1,10,12,64]>,
ttnn.reshapeaten::view5
136Tensor<[1,12,10,64]>,
Tensor<[12,10,64]>,
ttnn.reshapeaten::view5
137Tensor<[1,12,64,10]>,
Tensor<[12,64,10]>,
ttnn.reshapeaten::view5
138Tensor<[12,10,10]>,
Tensor<[1,12,10,10]>,
ttnn.reshapeaten::view5
139Tensor<[1,12,10,10]>,
Tensor<[12,10,10]>,
ttnn.reshapeaten::view5
140Tensor<[12,10,64]>,
Tensor<[1,12,10,64]>,
ttnn.reshapeaten::view5
141Tensor<[1,10,12,64]>,
Tensor<[1,10,768]>,
ttnn.reshapeaten::view5
142Tensor<[10,3072]>,
Tensor<[1,10,3072]>,
ttnn.reshapeaten::view5
143Tensor<[1,10,3072]>,
Tensor<[10,3072]>,
ttnn.reshapeaten::view5
144Tensor<[10,250002]>,
Tensor<[1,10,250002]>,
ttnn.reshapeaten::view5
145Tensor<[1,8,4096]>,
Tensor<[1,8,4096,1]>,
ttnn.reshapeaten::_safe_softmax4
146Tensor<[1,8,1024]>,
Tensor<[1,8,1024,1]>,
ttnn.reshapeaten::_safe_softmax4
147Tensor<[1,8,256]>,
Tensor<[1,8,256,1]>,
ttnn.reshapeaten::_safe_softmax4
148Tensor<[1,8,64]>,
Tensor<[1,8,64,1]>,
ttnn.reshapeaten::_safe_softmax4
149Tensor<[4096,320]>,
Tensor<[1,4096,320]>,
ttnn.reshapeaten::_unsafe_view5
150Tensor<[9,320]>,
Tensor<[1,9,320]>,
ttnn.reshapeaten::_unsafe_view5
151Tensor<[1024,640]>,
Tensor<[1,1024,640]>,
ttnn.reshapeaten::_unsafe_view5
152Tensor<[9,640]>,
Tensor<[1,9,640]>,
ttnn.reshapeaten::_unsafe_view5
153Tensor<[256,1280]>,
Tensor<[1,256,1280]>,
ttnn.reshapeaten::_unsafe_view5
154Tensor<[9,1280]>,
Tensor<[1,9,1280]>,
ttnn.reshapeaten::_unsafe_view5
155Tensor<[64,1280]>,
Tensor<[1,64,1280]>,
ttnn.reshapeaten::_unsafe_view5
156Tensor<[320]>,
Tensor<[320,1,1]>,
ttnn.reshapeaten::convolution4
157Tensor<[640]>,
Tensor<[640,1,1]>,
ttnn.reshapeaten::convolution4
158Tensor<[1280]>,
Tensor<[1280,1,1]>,
ttnn.reshapeaten::convolution4
159Tensor<[4]>,
Tensor<[4,1,1]>,
ttnn.reshapeaten::convolution4
160Tensor<[1280]>,
Tensor<[1280,1]>,
ttnn.reshapeaten::index.Tensor4
161Tensor<[1280,1]>,
Tensor<[1280,1,1]>,
ttnn.reshapeaten::index.Tensor4
162Tensor<[1,1280,16,16]>,
Tensor<[1,1280,16,16,1]>,
ttnn.reshapeaten::index.Tensor4
163Tensor<[1,1280,32,32]>,
Tensor<[1,1280,32,32,1]>,
ttnn.reshapeaten::index.Tensor4
164Tensor<[640]>,
Tensor<[640,1]>,
ttnn.reshapeaten::index.Tensor4
165Tensor<[640,1]>,
Tensor<[640,1,1]>,
ttnn.reshapeaten::index.Tensor4
166Tensor<[1,640,64,64]>,
Tensor<[1,640,64,64,1]>,
ttnn.reshapeaten::index.Tensor4
167Tensor<[160]>,
Tensor<[1,160]>,
ttnn.reshapeaten::unsqueeze5
168Tensor<[320]>,
Tensor<[1,320]>,
ttnn.reshapeaten::unsqueeze5
169Tensor<[1,320]>,
Tensor<[1,320,1]>,
ttnn.reshapeaten::unsqueeze5
170Tensor<[1,320,1]>,
Tensor<[1,320,1,1]>,
ttnn.reshapeaten::unsqueeze5
171Tensor<[1,640]>,
Tensor<[1,640,1]>,
ttnn.reshapeaten::unsqueeze5
172Tensor<[1,640,1]>,
Tensor<[1,640,1,1]>,
ttnn.reshapeaten::unsqueeze5
173Tensor<[640]>,
Tensor<[1,640]>,
ttnn.reshapeaten::unsqueeze5
174Tensor<[1,1280]>,
Tensor<[1,1280,1]>,
ttnn.reshapeaten::unsqueeze5
175Tensor<[1,1280,1]>,
Tensor<[1,1280,1,1]>,
ttnn.reshapeaten::unsqueeze5
176Tensor<[1280]>,
Tensor<[1,1280]>,
ttnn.reshapeaten::unsqueeze5
177Tensor<[2560]>,
Tensor<[1,2560]>,
ttnn.reshapeaten::unsqueeze5
178Tensor<[1,2560]>,
Tensor<[1,2560,1]>,
ttnn.reshapeaten::unsqueeze5
179Tensor<[1,2560,1]>,
Tensor<[1,2560,1,1]>,
ttnn.reshapeaten::unsqueeze5
180Tensor<[16]>,
Tensor<[16,1]>,
ttnn.reshapeaten::unsqueeze4
181Tensor<[1920]>,
Tensor<[1,1920]>,
ttnn.reshapeaten::unsqueeze5
182Tensor<[1,1920]>,
Tensor<[1,1920,1]>,
ttnn.reshapeaten::unsqueeze5
183Tensor<[1,1920,1]>,
Tensor<[1,1920,1,1]>,
ttnn.reshapeaten::unsqueeze5
184Tensor<[960]>,
Tensor<[1,960]>,
ttnn.reshapeaten::unsqueeze5
185Tensor<[1,960]>,
Tensor<[1,960,1]>,
ttnn.reshapeaten::unsqueeze5
186Tensor<[1,960,1]>,
Tensor<[1,960,1,1]>,
ttnn.reshapeaten::unsqueeze5
187Tensor<[1,320,64,64]>,
Tensor<[1,32,10,4096]>,
ttnn.reshapeaten::view5
188Tensor<[1,32,10,4096]>,
Tensor<[1,320,64,64]>,
ttnn.reshapeaten::view5
189Tensor<[1,64,64,320]>,
Tensor<[1,4096,320]>,
ttnn.reshapeaten::view5
190Tensor<[1,4096,320]>,
Tensor<[4096,320]>,
ttnn.reshapeaten::view5
191Tensor<[1,4096,320]>,
Tensor<[1,4096,8,40]>,
ttnn.reshapeaten::view5
192Tensor<[1,8,4096,40]>,
Tensor<[8,4096,40]>,
ttnn.reshapeaten::view5
193Tensor<[1,8,40,4096]>,
Tensor<[8,40,4096]>,
ttnn.reshapeaten::view5
194Tensor<[8,4096,4096]>,
Tensor<[1,8,4096,4096]>,
ttnn.reshapeaten::view5
195Tensor<[1,8,4096,4096]>,
Tensor<[8,4096,4096]>,
ttnn.reshapeaten::view5
196Tensor<[8,4096,40]>,
Tensor<[1,8,4096,40]>,
ttnn.reshapeaten::view5
197Tensor<[1,4096,8,40]>,
Tensor<[1,4096,320]>,
ttnn.reshapeaten::view5
198Tensor<[1,9,768]>,
Tensor<[9,768]>,
ttnn.reshapeaten::view5
199Tensor<[1,9,320]>,
Tensor<[1,9,8,40]>,
ttnn.reshapeaten::view5
200Tensor<[1,8,40,9]>,
Tensor<[8,40,9]>,
ttnn.reshapeaten::view5
201Tensor<[8,4096,9]>,
Tensor<[1,8,4096,9]>,
ttnn.reshapeaten::view5
202Tensor<[1,8,4096,9]>,
Tensor<[8,4096,9]>,
ttnn.reshapeaten::view5
203Tensor<[1,8,9,40]>,
Tensor<[8,9,40]>,
ttnn.reshapeaten::view5
204Tensor<[4096,2560]>,
Tensor<[1,4096,2560]>,
ttnn.reshapeaten::view5
205Tensor<[1,4096,1280]>,
Tensor<[4096,1280]>,
ttnn.reshapeaten::view5
206Tensor<[1,4096,320]>,
Tensor<[1,64,64,320]>,
ttnn.reshapeaten::view5
207Tensor<[1,320,32,32]>,
Tensor<[1,32,10,1024]>,
ttnn.reshapeaten::view5
208Tensor<[1,32,10,1024]>,
Tensor<[1,320,32,32]>,
ttnn.reshapeaten::view5
209Tensor<[1,640,32,32]>,
Tensor<[1,32,20,1024]>,
ttnn.reshapeaten::view5
210Tensor<[1,32,20,1024]>,
Tensor<[1,640,32,32]>,
ttnn.reshapeaten::view5
211Tensor<[1,32,32,640]>,
Tensor<[1,1024,640]>,
ttnn.reshapeaten::view5
212Tensor<[1,1024,640]>,
Tensor<[1024,640]>,
ttnn.reshapeaten::view5
213Tensor<[1,1024,640]>,
Tensor<[1,1024,8,80]>,
ttnn.reshapeaten::view5
214Tensor<[1,8,1024,80]>,
Tensor<[8,1024,80]>,
ttnn.reshapeaten::view5
215Tensor<[1,8,80,1024]>,
Tensor<[8,80,1024]>,
ttnn.reshapeaten::view5
216Tensor<[8,1024,1024]>,
Tensor<[1,8,1024,1024]>,
ttnn.reshapeaten::view5
217Tensor<[1,8,1024,1024]>,
Tensor<[8,1024,1024]>,
ttnn.reshapeaten::view5
218Tensor<[8,1024,80]>,
Tensor<[1,8,1024,80]>,
ttnn.reshapeaten::view5
219Tensor<[1,1024,8,80]>,
Tensor<[1,1024,640]>,
ttnn.reshapeaten::view5
220Tensor<[1,9,640]>,
Tensor<[1,9,8,80]>,
ttnn.reshapeaten::view5
221Tensor<[1,8,80,9]>,
Tensor<[8,80,9]>,
ttnn.reshapeaten::view5
222Tensor<[8,1024,9]>,
Tensor<[1,8,1024,9]>,
ttnn.reshapeaten::view5
223Tensor<[1,8,1024,9]>,
Tensor<[8,1024,9]>,
ttnn.reshapeaten::view5
224Tensor<[1,8,9,80]>,
Tensor<[8,9,80]>,
ttnn.reshapeaten::view5
225Tensor<[1024,5120]>,
Tensor<[1,1024,5120]>,
ttnn.reshapeaten::view5
226Tensor<[1,1024,2560]>,
Tensor<[1024,2560]>,
ttnn.reshapeaten::view5
227Tensor<[1,1024,640]>,
Tensor<[1,32,32,640]>,
ttnn.reshapeaten::view5
228Tensor<[1,640,16,16]>,
Tensor<[1,32,20,256]>,
ttnn.reshapeaten::view5
229Tensor<[1,32,20,256]>,
Tensor<[1,640,16,16]>,
ttnn.reshapeaten::view5
230Tensor<[1,1280,16,16]>,
Tensor<[1,32,40,256]>,
ttnn.reshapeaten::view5
231Tensor<[1,32,40,256]>,
Tensor<[1,1280,16,16]>,
ttnn.reshapeaten::view5
232Tensor<[1,16,16,1280]>,
Tensor<[1,256,1280]>,
ttnn.reshapeaten::view5
233Tensor<[1,256,1280]>,
Tensor<[256,1280]>,
ttnn.reshapeaten::view5
234Tensor<[1,256,1280]>,
Tensor<[1,256,8,160]>,
ttnn.reshapeaten::view5
235Tensor<[1,8,256,160]>,
Tensor<[8,256,160]>,
ttnn.reshapeaten::view5
236Tensor<[1,8,160,256]>,
Tensor<[8,160,256]>,
ttnn.reshapeaten::view5
237Tensor<[8,256,256]>,
Tensor<[1,8,256,256]>,
ttnn.reshapeaten::view5
238Tensor<[1,8,256,256]>,
Tensor<[8,256,256]>,
ttnn.reshapeaten::view5
239Tensor<[8,256,160]>,
Tensor<[1,8,256,160]>,
ttnn.reshapeaten::view5
240Tensor<[1,256,8,160]>,
Tensor<[1,256,1280]>,
ttnn.reshapeaten::view5
241Tensor<[1,9,1280]>,
Tensor<[1,9,8,160]>,
ttnn.reshapeaten::view5
242Tensor<[1,8,160,9]>,
Tensor<[8,160,9]>,
ttnn.reshapeaten::view5
243Tensor<[8,256,9]>,
Tensor<[1,8,256,9]>,
ttnn.reshapeaten::view5
244Tensor<[1,8,256,9]>,
Tensor<[8,256,9]>,
ttnn.reshapeaten::view5
245Tensor<[1,8,9,160]>,
Tensor<[8,9,160]>,
ttnn.reshapeaten::view5
246Tensor<[256,10240]>,
Tensor<[1,256,10240]>,
ttnn.reshapeaten::view5
247Tensor<[1,256,5120]>,
Tensor<[256,5120]>,
ttnn.reshapeaten::view5
248Tensor<[1,256,1280]>,
Tensor<[1,16,16,1280]>,
ttnn.reshapeaten::view5
249Tensor<[1,1280,8,8]>,
Tensor<[1,32,40,64]>,
ttnn.reshapeaten::view5
250Tensor<[1,32,40,64]>,
Tensor<[1,1280,8,8]>,
ttnn.reshapeaten::view5
251Tensor<[1,8,8,1280]>,
Tensor<[1,64,1280]>,
ttnn.reshapeaten::view5
252Tensor<[1,64,1280]>,
Tensor<[64,1280]>,
ttnn.reshapeaten::view5
253Tensor<[1,64,1280]>,
Tensor<[1,64,8,160]>,
ttnn.reshapeaten::view5
254Tensor<[1,8,64,160]>,
Tensor<[8,64,160]>,
ttnn.reshapeaten::view5
255Tensor<[1,8,160,64]>,
Tensor<[8,160,64]>,
ttnn.reshapeaten::view5
256Tensor<[8,64,64]>,
Tensor<[1,8,64,64]>,
ttnn.reshapeaten::view5
257Tensor<[1,8,64,64]>,
Tensor<[8,64,64]>,
ttnn.reshapeaten::view5
258Tensor<[8,64,160]>,
Tensor<[1,8,64,160]>,
ttnn.reshapeaten::view5
259Tensor<[1,64,8,160]>,
Tensor<[1,64,1280]>,
ttnn.reshapeaten::view5
260Tensor<[8,64,9]>,
Tensor<[1,8,64,9]>,
ttnn.reshapeaten::view5
261Tensor<[1,8,64,9]>,
Tensor<[8,64,9]>,
ttnn.reshapeaten::view5
262Tensor<[64,10240]>,
Tensor<[1,64,10240]>,
ttnn.reshapeaten::view5
263Tensor<[1,64,5120]>,
Tensor<[64,5120]>,
ttnn.reshapeaten::view5
264Tensor<[1,64,1280]>,
Tensor<[1,8,8,1280]>,
ttnn.reshapeaten::view5
265Tensor<[1,2560,8,8]>,
Tensor<[1,32,80,64]>,
ttnn.reshapeaten::view5
266Tensor<[1,32,80,64]>,
Tensor<[1,2560,8,8]>,
ttnn.reshapeaten::view5
267Tensor<[1,2560,16,16]>,
Tensor<[1,32,80,256]>,
ttnn.reshapeaten::view5
268Tensor<[1,32,80,256]>,
Tensor<[1,2560,16,16]>,
ttnn.reshapeaten::view5
269Tensor<[1,1920,16,16]>,
Tensor<[1,32,60,256]>,
ttnn.reshapeaten::view5
270Tensor<[1,32,60,256]>,
Tensor<[1,1920,16,16]>,
ttnn.reshapeaten::view5
271Tensor<[1,1920,32,32]>,
Tensor<[1,32,60,1024]>,
ttnn.reshapeaten::view5
272Tensor<[1,32,60,1024]>,
Tensor<[1,1920,32,32]>,
ttnn.reshapeaten::view5
273Tensor<[1,1280,32,32]>,
Tensor<[1,32,40,1024]>,
ttnn.reshapeaten::view5
274Tensor<[1,32,40,1024]>,
Tensor<[1,1280,32,32]>,
ttnn.reshapeaten::view5
275Tensor<[1,960,32,32]>,
Tensor<[1,32,30,1024]>,
ttnn.reshapeaten::view5
276Tensor<[1,32,30,1024]>,
Tensor<[1,960,32,32]>,
ttnn.reshapeaten::view5
277Tensor<[1,960,64,64]>,
Tensor<[1,32,30,4096]>,
ttnn.reshapeaten::view5
278Tensor<[1,32,30,4096]>,
Tensor<[1,960,64,64]>,
ttnn.reshapeaten::view5
279Tensor<[1,640,64,64]>,
Tensor<[1,32,20,4096]>,
ttnn.reshapeaten::view5
280Tensor<[1,32,20,4096]>,
Tensor<[1,640,64,64]>,
ttnn.reshapeaten::view5
281Tensor<[1,12,25]>,
Tensor<[1,12,25,1]>,
ttnn.reshapeaten::_safe_softmax4
282Tensor<[1,1,768]>,
Tensor<[1,768]>,
ttnn.reshapeaten::select.int4
283Tensor<[1,25,1]>,
Tensor<[1,25]>,
ttnn.reshapeaten::squeeze.dim5
284Tensor<[1,25]>,
Tensor<[1,1,25]>,
ttnn.reshapeaten::unsqueeze4
285Tensor<[1,1,25]>,
Tensor<[1,1,1,25]>,
ttnn.reshapeaten::unsqueeze4
286Tensor<[1,25,768]>,
Tensor<[25,768]>,
ttnn.reshapeaten::view5
287Tensor<[25,768]>,
Tensor<[1,25,768]>,
ttnn.reshapeaten::view5
288Tensor<[1,25,768]>,
Tensor<[1,25,12,64]>,
ttnn.reshapeaten::view5
289Tensor<[1,12,25,64]>,
Tensor<[12,25,64]>,
ttnn.reshapeaten::view5
290Tensor<[1,12,64,25]>,
Tensor<[12,64,25]>,
ttnn.reshapeaten::view5
291Tensor<[12,25,25]>,
Tensor<[1,12,25,25]>,
ttnn.reshapeaten::view5
292Tensor<[1,12,25,25]>,
Tensor<[12,25,25]>,
ttnn.reshapeaten::view5
293Tensor<[12,25,64]>,
Tensor<[1,12,25,64]>,
ttnn.reshapeaten::view5
294Tensor<[1,25,12,64]>,
Tensor<[1,25,768]>,
ttnn.reshapeaten::view5
295Tensor<[25,3072]>,
Tensor<[1,25,3072]>,
ttnn.reshapeaten::view5
296Tensor<[1,25,3072]>,
Tensor<[25,3072]>,
ttnn.reshapeaten::view5
297Tensor<[25,2]>,
Tensor<[1,25,2]>,
ttnn.reshapeaten::view5
298Tensor<[1,25]>,
Tensor<[1,25]>,
ttnn.reshapeaten::view5
299Tensor<[1,1]>,
Tensor<[1]>,
ttnn.reshapeaten::view5
300Tensor<[1,3,1445]>,
Tensor<[1,3,1445,1]>,
ttnn.reshapeaten::_safe_softmax4
301Tensor<[192]>,
Tensor<[192,1,1]>,
ttnn.reshapeaten::convolution4
302Tensor<[1,1,192]>,
Tensor<[1,192]>,
ttnn.reshapeaten::select.int4
303Tensor<[1,192]>,
Tensor<[1,1,192]>,
ttnn.reshapeaten::unsqueeze5
304Tensor<[1,192,32,42]>,
Tensor<[1,192,1344]>,
ttnn.reshapeaten::view5
305Tensor<[1,192,4150]>,
Tensor<[1,192,50,83]>,
ttnn.reshapeaten::view5
306Tensor<[1,1445,192]>,
Tensor<[1445,192]>,
ttnn.reshapeaten::view5
307Tensor<[1445,192]>,
Tensor<[1,1445,192]>,
ttnn.reshapeaten::view5
308Tensor<[1,1445,192]>,
Tensor<[1,1445,3,64]>,
ttnn.reshapeaten::view5
309Tensor<[1,3,1445,64]>,
Tensor<[3,1445,64]>,
ttnn.reshapeaten::view5
310Tensor<[1,3,64,1445]>,
Tensor<[3,64,1445]>,
ttnn.reshapeaten::view5
311Tensor<[3,1445,1445]>,
Tensor<[1,3,1445,1445]>,
ttnn.reshapeaten::view5
312Tensor<[1,3,1445,1445]>,
Tensor<[3,1445,1445]>,
ttnn.reshapeaten::view5
313Tensor<[3,1445,64]>,
Tensor<[1,3,1445,64]>,
ttnn.reshapeaten::view5
314Tensor<[1,1445,3,64]>,
Tensor<[1,1445,192]>,
ttnn.reshapeaten::view5
315Tensor<[1445,768]>,
Tensor<[1,1445,768]>,
ttnn.reshapeaten::view5
316Tensor<[1,1445,768]>,
Tensor<[1445,768]>,
ttnn.reshapeaten::view5
317Tensor<[1,100,192]>,
Tensor<[100,192]>,
ttnn.reshapeaten::view5
318Tensor<[100,192]>,
Tensor<[1,100,192]>,
ttnn.reshapeaten::view5
319Tensor<[100,92]>,
Tensor<[1,100,92]>,
ttnn.reshapeaten::view5
320Tensor<[100,4]>,
Tensor<[1,100,4]>,
ttnn.reshapeaten::view5
321Tensor<[1,512]>,
Tensor<[1,512,1,1]>,
ttnn.reshapeaten::mean.dim4
322Tensor<[1,512,1,1]>,
Tensor<[1,512]>,
ttnn.reshapeaten::view5
323Tensor<[1,12,8]>,
Tensor<[1,12,8,1]>,
ttnn.reshapeaten::_softmax4
324Tensor<[12,8,8]>,
Tensor<[1,12,8,8]>,
ttnn.reshapeaten::_unsafe_view5
325Tensor<[12,8,64]>,
Tensor<[1,12,8,64]>,
ttnn.reshapeaten::_unsafe_view5
326Tensor<[768]>,
Tensor<[768,1]>,
ttnn.reshapeaten::convolution4
327Tensor<[3072]>,
Tensor<[3072,1]>,
ttnn.reshapeaten::convolution4
328Tensor<[1,8]>,
Tensor<[1,1,8]>,
ttnn.reshapeaten::unsqueeze4
329Tensor<[1,1,8]>,
Tensor<[1,1,1,8]>,
ttnn.reshapeaten::unsqueeze4
330Tensor<[1,768,8]>,
Tensor<[1,12,64,8]>,
ttnn.reshapeaten::view5
331Tensor<[1,12,8,64]>,
Tensor<[12,8,64]>,
ttnn.reshapeaten::view5
332Tensor<[1,12,64,8]>,
Tensor<[12,64,8]>,
ttnn.reshapeaten::view5
333Tensor<[1,12,8,8]>,
Tensor<[12,8,8]>,
ttnn.reshapeaten::view5
334Tensor<[1,12,64,8]>,
Tensor<[1,768,8]>,
ttnn.reshapeaten::view5
335Tensor<[1,8,2048]>,
Tensor<[1,8,2048,1]>,
ttnn.reshapeaten::_softmax4
336Tensor<[8,256,2048]>,
Tensor<[1,8,256,2048]>,
ttnn.reshapeaten::_unsafe_view5
337Tensor<[8,2048,256]>,
Tensor<[1,8,2048,256]>,
ttnn.reshapeaten::_unsafe_view5
338Tensor<[8,2048,96]>,
Tensor<[1,8,2048,96]>,
ttnn.reshapeaten::_unsafe_view5
339Tensor<[1,2048]>,
Tensor<[1,1,2048]>,
ttnn.reshapeaten::unsqueeze4
340Tensor<[1,1,2048]>,
Tensor<[1,1,1,2048]>,
ttnn.reshapeaten::unsqueeze4
341Tensor<[1,2048,768]>,
Tensor<[2048,768]>,
ttnn.reshapeaten::view5
342Tensor<[2048,256]>,
Tensor<[1,2048,256]>,
ttnn.reshapeaten::view5
343Tensor<[2048,1280]>,
Tensor<[1,2048,1280]>,
ttnn.reshapeaten::view5
344Tensor<[1,256,256]>,
Tensor<[1,256,8,32]>,
ttnn.reshapeaten::view5
345Tensor<[1,2048,256]>,
Tensor<[1,2048,8,32]>,
ttnn.reshapeaten::view5
346Tensor<[1,2048,1280]>,
Tensor<[1,2048,8,160]>,
ttnn.reshapeaten::view5
347Tensor<[1,8,256,32]>,
Tensor<[8,256,32]>,
ttnn.reshapeaten::view5
348Tensor<[1,8,32,2048]>,
Tensor<[8,32,2048]>,
ttnn.reshapeaten::view5
349Tensor<[1,8,256,2048]>,
Tensor<[8,256,2048]>,
ttnn.reshapeaten::view5
350Tensor<[1,8,2048,160]>,
Tensor<[8,2048,160]>,
ttnn.reshapeaten::view5
351Tensor<[1,8,32,256]>,
Tensor<[8,32,256]>,
ttnn.reshapeaten::view5
352Tensor<[256,768]>,
Tensor<[1,256,768]>,
ttnn.reshapeaten::view5
353Tensor<[1,256,768]>,
Tensor<[1,256,8,96]>,
ttnn.reshapeaten::view5
354Tensor<[1,8,2048,32]>,
Tensor<[8,2048,32]>,
ttnn.reshapeaten::view5
355Tensor<[1,8,2048,256]>,
Tensor<[8,2048,256]>,
ttnn.reshapeaten::view5
356Tensor<[1,8,256,96]>,
Tensor<[8,256,96]>,
ttnn.reshapeaten::view5
357Tensor<[1,2048,8,96]>,
Tensor<[1,2048,768]>,
ttnn.reshapeaten::view5
358Tensor<[2048,768]>,
Tensor<[1,2048,768]>,
ttnn.reshapeaten::view5
359Tensor<[2048,262]>,
Tensor<[1,2048,262]>,
ttnn.reshapeaten::view5
360Tensor<[1,2048]>,
Tensor<[1,2048,1,1]>,
ttnn.reshapeaten::mean.dim4
361Tensor<[1024,1]>,
Tensor<[1024,1,1]>,
ttnn.reshapeaten::unsqueeze5
362Tensor<[2048]>,
Tensor<[2048,1]>,
ttnn.reshapeaten::unsqueeze5
363Tensor<[2048,1]>,
Tensor<[2048,1,1]>,
ttnn.reshapeaten::unsqueeze5
364Tensor<[1,2048,1,1]>,
Tensor<[1,2048]>,
ttnn.reshapeaten::view5
365Tensor<[1,12,201]>,
Tensor<[1,12,201,1]>,
ttnn.reshapeaten::_softmax4
366Tensor<[12,201,201]>,
Tensor<[1,12,201,201]>,
ttnn.reshapeaten::_unsafe_view5
367Tensor<[12,201,64]>,
Tensor<[1,12,201,64]>,
ttnn.reshapeaten::_unsafe_view5
368Tensor<[768]>,
Tensor<[768,1,1]>,
ttnn.reshapeaten::convolution4
369Tensor<[1,1,12,16]>,
Tensor<[1,1,12,16,1]>,
ttnn.reshapeaten::index.Tensor4
370Tensor<[1,1,12,16]>,
Tensor<[1,12,16]>,
ttnn.reshapeaten::select.int4
371Tensor<[192,1]>,
Tensor<[192]>,
ttnn.reshapeaten::select.int4
372Tensor<[12,16]>,
Tensor<[12,16,1]>,
ttnn.reshapeaten::stack4
373Tensor<[1,384,512]>,
Tensor<[1,1,384,512]>,
ttnn.reshapeaten::unsqueeze4
374Tensor<[12]>,
Tensor<[12,1]>,
ttnn.reshapeaten::unsqueeze4
375Tensor<[12,16,2]>,
Tensor<[1,12,16,2]>,
ttnn.reshapeaten::unsqueeze4
376Tensor<[1,12,16,2]>,
Tensor<[1,1,12,16,2]>,
ttnn.reshapeaten::unsqueeze4
377Tensor<[1,201]>,
Tensor<[1,1,201]>,
ttnn.reshapeaten::unsqueeze4
378Tensor<[1,1,201]>,
Tensor<[1,1,1,201]>,
ttnn.reshapeaten::unsqueeze4
379Tensor<[1,768,144]>,
Tensor<[1,768,12,12]>,
ttnn.reshapeaten::view5
380Tensor<[1,768,12,16]>,
Tensor<[1,768,192]>,
ttnn.reshapeaten::view5
381Tensor<[16]>,
Tensor<[1,16]>,
ttnn.reshapeaten::view4
382Tensor<[1,1,12,16,2]>,
Tensor<[1,192,2]>,
ttnn.reshapeaten::view4
383Tensor<[1,1,12,16]>,
Tensor<[1,192]>,
ttnn.reshapeaten::view4
384Tensor<[1,201,768]>,
Tensor<[201,768]>,
ttnn.reshapeaten::view5
385Tensor<[201,768]>,
Tensor<[1,201,768]>,
ttnn.reshapeaten::view5
386Tensor<[1,201,768]>,
Tensor<[1,201,12,64]>,
ttnn.reshapeaten::view5
387Tensor<[1,12,201,64]>,
Tensor<[12,201,64]>,
ttnn.reshapeaten::view5
388Tensor<[1,12,64,201]>,
Tensor<[12,64,201]>,
ttnn.reshapeaten::view5
389Tensor<[1,12,201,201]>,
Tensor<[12,201,201]>,
ttnn.reshapeaten::view5
390Tensor<[1,201,12,64]>,
Tensor<[1,201,768]>,
ttnn.reshapeaten::view5
391Tensor<[201,3072]>,
Tensor<[1,201,3072]>,
ttnn.reshapeaten::view5
392Tensor<[1,201,3072]>,
Tensor<[201,3072]>,
ttnn.reshapeaten::view5
393Tensor<[32]>,
Tensor<[32,1,1]>,
ttnn.reshapeaten::convolution4
394Tensor<[64]>,
Tensor<[64,1,1]>,
ttnn.reshapeaten::convolution4
395Tensor<[1,64,12,12]>,
Tensor<[1,9216]>,
ttnn.reshapeaten::view5
396Tensor<[16,19]>,
Tensor<[16,19,1]>,
ttnn.reshapeaten::_softmax4
397Tensor<[1,19,16,64]>,
Tensor<[1,19,1024]>,
ttnn.reshapeaten::_unsafe_view5
398Tensor<[19,256008]>,
Tensor<[1,19,256008]>,
ttnn.reshapeaten::_unsafe_view5
399Tensor<[19]>,
Tensor<[19,1]>,
ttnn.reshapeaten::amax5
400Tensor<[19,1]>,
Tensor<[19,1,1]>,
ttnn.reshapeaten::gather4
401Tensor<[1,19]>,
Tensor<[19]>,
ttnn.reshapeaten::squeeze.dim4
402Tensor<[19,1]>,
Tensor<[19]>,
ttnn.reshapeaten::squeeze.dim5
403Tensor<[19]>,
Tensor<[1,19]>,
ttnn.reshapeaten::unsqueeze4
404Tensor<[19,19]>,
Tensor<[1,19,19]>,
ttnn.reshapeaten::unsqueeze5
405Tensor<[1,19,19]>,
Tensor<[1,1,19,19]>,
ttnn.reshapeaten::unsqueeze5
406Tensor<[1,19]>,
Tensor<[1,1,19]>,
ttnn.reshapeaten::unsqueeze4
407Tensor<[1,1,19]>,
Tensor<[1,1,1,19]>,
ttnn.reshapeaten::unsqueeze4
408Tensor<[1,19]>,
Tensor<[1,19]>,
ttnn.reshapeaten::view4
409Tensor<[19,1024]>,
Tensor<[1,19,1024]>,
ttnn.reshapeaten::view5
410Tensor<[1,19,1024]>,
Tensor<[19,1024]>,
ttnn.reshapeaten::view5
411Tensor<[1,19,1024]>,
Tensor<[1,19,16,64]>,
ttnn.reshapeaten::view5
412Tensor<[1,16,19,64]>,
Tensor<[16,19,64]>,
ttnn.reshapeaten::view5
413Tensor<[16,19,19]>,
Tensor<[1,16,19,19]>,
ttnn.reshapeaten::view5
414Tensor<[1,16,19,19]>,
Tensor<[16,19,19]>,
ttnn.reshapeaten::view5
415Tensor<[16,19,64]>,
Tensor<[1,16,19,64]>,
ttnn.reshapeaten::view5
416Tensor<[19,4096]>,
Tensor<[1,19,4096]>,
ttnn.reshapeaten::view5
417Tensor<[1,19,4096]>,
Tensor<[19,4096]>,
ttnn.reshapeaten::view5
418Tensor<[1,19,256008]>,
Tensor<[19,256008]>,
ttnn.reshapeaten::view5
419Tensor<[1,1024]>,
Tensor<[1,1024,1,1]>,
ttnn.reshapeaten::mean.dim4
420Tensor<[14]>,
Tensor<[14,1]>,
ttnn.reshapeaten::unsqueeze5
421Tensor<[14,1]>,
Tensor<[14,1,1]>,
ttnn.reshapeaten::unsqueeze5
422Tensor<[24]>,
Tensor<[24,1]>,
ttnn.reshapeaten::unsqueeze5
423Tensor<[24,1]>,
Tensor<[24,1,1]>,
ttnn.reshapeaten::unsqueeze5
424Tensor<[40]>,
Tensor<[40,1]>,
ttnn.reshapeaten::unsqueeze5
425Tensor<[40,1]>,
Tensor<[40,1,1]>,
ttnn.reshapeaten::unsqueeze5
426Tensor<[68]>,
Tensor<[68,1]>,
ttnn.reshapeaten::unsqueeze5
427Tensor<[68,1]>,
Tensor<[68,1,1]>,
ttnn.reshapeaten::unsqueeze5
428Tensor<[16,1]>,
Tensor<[16,1,1]>,
ttnn.reshapeaten::unsqueeze5
429Tensor<[28]>,
Tensor<[28,1]>,
ttnn.reshapeaten::unsqueeze5
430Tensor<[28,1]>,
Tensor<[28,1,1]>,
ttnn.reshapeaten::unsqueeze5
431Tensor<[46]>,
Tensor<[46,1]>,
ttnn.reshapeaten::unsqueeze5
432Tensor<[46,1]>,
Tensor<[46,1,1]>,
ttnn.reshapeaten::unsqueeze5
433Tensor<[78]>,
Tensor<[78,1]>,
ttnn.reshapeaten::unsqueeze5
434Tensor<[78,1]>,
Tensor<[78,1,1]>,
ttnn.reshapeaten::unsqueeze5
435Tensor<[134]>,
Tensor<[134,1]>,
ttnn.reshapeaten::unsqueeze5
436Tensor<[134,1]>,
Tensor<[134,1,1]>,
ttnn.reshapeaten::unsqueeze5
437Tensor<[20]>,
Tensor<[20,1]>,
ttnn.reshapeaten::unsqueeze5
438Tensor<[20,1]>,
Tensor<[20,1,1]>,
ttnn.reshapeaten::unsqueeze5
439Tensor<[34]>,
Tensor<[34,1]>,
ttnn.reshapeaten::unsqueeze5
440Tensor<[34,1]>,
Tensor<[34,1,1]>,
ttnn.reshapeaten::unsqueeze5
441Tensor<[58]>,
Tensor<[58,1]>,
ttnn.reshapeaten::unsqueeze5
442Tensor<[58,1]>,
Tensor<[58,1,1]>,
ttnn.reshapeaten::unsqueeze5
443Tensor<[98]>,
Tensor<[98,1]>,
ttnn.reshapeaten::unsqueeze5
444Tensor<[98,1]>,
Tensor<[98,1,1]>,
ttnn.reshapeaten::unsqueeze5
445Tensor<[168]>,
Tensor<[168,1]>,
ttnn.reshapeaten::unsqueeze5
446Tensor<[168,1]>,
Tensor<[168,1,1]>,
ttnn.reshapeaten::unsqueeze5
447Tensor<[320]>,
Tensor<[320,1]>,
ttnn.reshapeaten::unsqueeze5
448Tensor<[320,1]>,
Tensor<[320,1,1]>,
ttnn.reshapeaten::unsqueeze5
449Tensor<[116]>,
Tensor<[116,1]>,
ttnn.reshapeaten::unsqueeze5
450Tensor<[116,1]>,
Tensor<[116,1,1]>,
ttnn.reshapeaten::unsqueeze5
451Tensor<[196]>,
Tensor<[196,1]>,
ttnn.reshapeaten::unsqueeze5
452Tensor<[196,1]>,
Tensor<[196,1,1]>,
ttnn.reshapeaten::unsqueeze5
453Tensor<[334]>,
Tensor<[334,1]>,
ttnn.reshapeaten::unsqueeze5
454Tensor<[334,1]>,
Tensor<[334,1,1]>,
ttnn.reshapeaten::unsqueeze5
455Tensor<[160]>,
Tensor<[160,1]>,
ttnn.reshapeaten::unsqueeze5
456Tensor<[160,1]>,
Tensor<[160,1,1]>,
ttnn.reshapeaten::unsqueeze5
457Tensor<[272]>,
Tensor<[272,1]>,
ttnn.reshapeaten::unsqueeze5
458Tensor<[272,1]>,
Tensor<[272,1,1]>,
ttnn.reshapeaten::unsqueeze5
459Tensor<[462]>,
Tensor<[462,1]>,
ttnn.reshapeaten::unsqueeze5
460Tensor<[462,1]>,
Tensor<[462,1,1]>,
ttnn.reshapeaten::unsqueeze5
461Tensor<[1,1024,1,1]>,
Tensor<[1,1024]>,
ttnn.reshapeaten::view5
462Tensor<[255]>,
Tensor<[255,1,1]>,
ttnn.reshapeaten::convolution4
463Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32,1]>,
ttnn.reshapeaten::index.Tensor4
464Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64,1]>,
ttnn.reshapeaten::index.Tensor4
465Tensor<[1]>,
Tensor<[1,1,1]>,
ttnn.reshapeaten::convolution4
466Tensor<[16]>,
Tensor<[16,1,1]>,
ttnn.reshapeaten::convolution4
467Tensor<[1,16,32]>,
Tensor<[1,16,32,1]>,
ttnn.reshapeaten::_softmax4
468Tensor<[1,32,16,96]>,
Tensor<[1,32,1536]>,
ttnn.reshapeaten::_unsafe_view5
469Tensor<[32,250880]>,
Tensor<[1,32,250880]>,
ttnn.reshapeaten::_unsafe_view5
470Tensor<[1,32,16,1,96]>,
Tensor<[1,32,16,96]>,
ttnn.reshapeaten::select.int4
471Tensor<[1,16,32]>,
Tensor<[16,1,32]>,
ttnn.reshapeaten::view5
472Tensor<[1,32,1536]>,
Tensor<[32,1536]>,
ttnn.reshapeaten::view5
473Tensor<[32,4608]>,
Tensor<[1,32,4608]>,
ttnn.reshapeaten::view5
474Tensor<[1,32,4608]>,
Tensor<[1,32,16,3,96]>,
ttnn.reshapeaten::view5
475Tensor<[1,16,32,96]>,
Tensor<[16,32,96]>,
ttnn.reshapeaten::view5
476Tensor<[16,32,32]>,
Tensor<[1,16,32,32]>,
ttnn.reshapeaten::view5
477Tensor<[1,16,32,32]>,
Tensor<[16,32,32]>,
ttnn.reshapeaten::view5
478Tensor<[16,32,96]>,
Tensor<[1,16,32,96]>,
ttnn.reshapeaten::view5
479Tensor<[32,1536]>,
Tensor<[1,32,1536]>,
ttnn.reshapeaten::view5
480Tensor<[32,6144]>,
Tensor<[1,32,6144]>,
ttnn.reshapeaten::view5
481Tensor<[1,32,6144]>,
Tensor<[32,6144]>,
ttnn.reshapeaten::view5
482Tensor<[1,12,16]>,
Tensor<[1,12,16,1]>,
ttnn.reshapeaten::_safe_softmax4
483Tensor<[1,16]>,
Tensor<[1,1,16]>,
ttnn.reshapeaten::unsqueeze4
484Tensor<[1,1,16]>,
Tensor<[1,1,1,16]>,
ttnn.reshapeaten::unsqueeze4
485Tensor<[1,16,768]>,
Tensor<[16,768]>,
ttnn.reshapeaten::view5
486Tensor<[16,768]>,
Tensor<[1,16,768]>,
ttnn.reshapeaten::view5
487Tensor<[1,16,768]>,
Tensor<[1,16,12,64]>,
ttnn.reshapeaten::view5
488Tensor<[1,12,16,64]>,
Tensor<[12,16,64]>,
ttnn.reshapeaten::view5
489Tensor<[1,12,64,16]>,
Tensor<[12,64,16]>,
ttnn.reshapeaten::view5
490Tensor<[12,16,16]>,
Tensor<[1,12,16,16]>,
ttnn.reshapeaten::view5
491Tensor<[1,12,16,16]>,
Tensor<[12,16,16]>,
ttnn.reshapeaten::view5
492Tensor<[12,16,64]>,
Tensor<[1,12,16,64]>,
ttnn.reshapeaten::view5
493Tensor<[1,16,12,64]>,
Tensor<[1,16,768]>,
ttnn.reshapeaten::view5
494Tensor<[16,3072]>,
Tensor<[1,16,3072]>,
ttnn.reshapeaten::view5
495Tensor<[1,16,3072]>,
Tensor<[16,3072]>,
ttnn.reshapeaten::view5
496Tensor<[1,1,19200]>,
Tensor<[1,1,19200,1]>,
ttnn.reshapeaten::_softmax4
497Tensor<[1,2,4800]>,
Tensor<[1,2,4800,1]>,
ttnn.reshapeaten::_softmax4
498Tensor<[1,5,1200]>,
Tensor<[1,5,1200,1]>,
ttnn.reshapeaten::_softmax4
499Tensor<[1,8,300]>,
Tensor<[1,8,300,1]>,
ttnn.reshapeaten::_softmax4
500Tensor<[1,19200,300]>,
Tensor<[1,1,19200,300]>,
ttnn.reshapeaten::_unsafe_view5
501Tensor<[1,19200,64]>,
Tensor<[1,1,19200,64]>,
ttnn.reshapeaten::_unsafe_view5
502Tensor<[1,19200,64]>,
Tensor<[1,19200,64]>,
ttnn.reshapeaten::_unsafe_view5
503Tensor<[2,4800,300]>,
Tensor<[1,2,4800,300]>,
ttnn.reshapeaten::_unsafe_view5
504Tensor<[2,4800,64]>,
Tensor<[1,2,4800,64]>,
ttnn.reshapeaten::_unsafe_view5
505Tensor<[1,4800,128]>,
Tensor<[1,4800,128]>,
ttnn.reshapeaten::_unsafe_view5
506Tensor<[5,1200,300]>,
Tensor<[1,5,1200,300]>,
ttnn.reshapeaten::_unsafe_view5
507Tensor<[5,1200,64]>,
Tensor<[1,5,1200,64]>,
ttnn.reshapeaten::_unsafe_view5
508Tensor<[1,1200,320]>,
Tensor<[1,1200,320]>,
ttnn.reshapeaten::_unsafe_view5
509Tensor<[8,300,300]>,
Tensor<[1,8,300,300]>,
ttnn.reshapeaten::_unsafe_view5
510Tensor<[8,300,64]>,
Tensor<[1,8,300,64]>,
ttnn.reshapeaten::_unsafe_view5
511Tensor<[1,300,512]>,
Tensor<[1,300,512]>,
ttnn.reshapeaten::_unsafe_view5
512Tensor<[2048]>,
Tensor<[2048,1,1]>,
ttnn.reshapeaten::convolution4
513Tensor<[2]>,
Tensor<[2,1,1]>,
ttnn.reshapeaten::convolution4
514Tensor<[1,64,30,40]>,
Tensor<[1,64,30,40,1]>,
ttnn.reshapeaten::index.Tensor4
515Tensor<[1,64,60,80]>,
Tensor<[1,64,60,80,1]>,
ttnn.reshapeaten::index.Tensor4
516Tensor<[1,64,120,160]>,
Tensor<[1,64,120,160,1]>,
ttnn.reshapeaten::index.Tensor4
517Tensor<[1,64,240,320]>,
Tensor<[1,64,240,320,1]>,
ttnn.reshapeaten::index.Tensor4
518Tensor<[1,64,480,640]>,
Tensor<[1,64,480,640,1]>,
ttnn.reshapeaten::index.Tensor4
519Tensor<[1,1,30,40]>,
Tensor<[1,30,40]>,
ttnn.reshapeaten::select.int4
520Tensor<[1,1,60,80]>,
Tensor<[1,60,80]>,
ttnn.reshapeaten::select.int4
521Tensor<[1,1,120,160]>,
Tensor<[1,120,160]>,
ttnn.reshapeaten::select.int4
522Tensor<[1,1,480,640]>,
Tensor<[1,480,640]>,
ttnn.reshapeaten::squeeze.dim5
523Tensor<[1,30,40]>,
Tensor<[1,1,30,40]>,
ttnn.reshapeaten::unsqueeze5
524Tensor<[1,60,80]>,
Tensor<[1,1,60,80]>,
ttnn.reshapeaten::unsqueeze5
525Tensor<[1,120,160]>,
Tensor<[1,1,120,160]>,
ttnn.reshapeaten::unsqueeze5
526Tensor<[1,64,120,160]>,
Tensor<[1,64,19200]>,
ttnn.reshapeaten::view5
527Tensor<[1,19200,64]>,
Tensor<[19200,64]>,
ttnn.reshapeaten::view5
528Tensor<[19200,64]>,
Tensor<[1,19200,64]>,
ttnn.reshapeaten::view5
529Tensor<[1,19200,64]>,
Tensor<[1,19200,1,64]>,
ttnn.reshapeaten::view5
530Tensor<[1,64,19200]>,
Tensor<[1,64,120,160]>,
ttnn.reshapeaten::view5
531Tensor<[1,64,15,20]>,
Tensor<[1,64,300]>,
ttnn.reshapeaten::view5
532Tensor<[1,300,64]>,
Tensor<[300,64]>,
ttnn.reshapeaten::view5
533Tensor<[300,64]>,
Tensor<[1,300,64]>,
ttnn.reshapeaten::view5
534Tensor<[1,300,64]>,
Tensor<[1,300,1,64]>,
ttnn.reshapeaten::view5
535Tensor<[1,1,19200,64]>,
Tensor<[1,19200,64]>,
ttnn.reshapeaten::view5
536Tensor<[1,1,64,300]>,
Tensor<[1,64,300]>,
ttnn.reshapeaten::view5
537Tensor<[1,1,19200,300]>,
Tensor<[1,19200,300]>,
ttnn.reshapeaten::view5
538Tensor<[1,1,300,64]>,
Tensor<[1,300,64]>,
ttnn.reshapeaten::view5
539Tensor<[1,19200,1,64]>,
Tensor<[1,19200,64]>,
ttnn.reshapeaten::view5
540Tensor<[19200,256]>,
Tensor<[1,19200,256]>,
ttnn.reshapeaten::view5
541Tensor<[1,256,19200]>,
Tensor<[1,256,120,160]>,
ttnn.reshapeaten::view5
542Tensor<[1,256,120,160]>,
Tensor<[1,256,19200]>,
ttnn.reshapeaten::view5
543Tensor<[1,19200,256]>,
Tensor<[1,19200,256]>,
ttnn.reshapeaten::view5
544Tensor<[1,256,64]>,
Tensor<[1,256,64]>,
ttnn.reshapeaten::view5
545Tensor<[1,19200,64]>,
Tensor<[1,120,160,64]>,
ttnn.reshapeaten::view5
546Tensor<[1,128,60,80]>,
Tensor<[1,128,4800]>,
ttnn.reshapeaten::view5
547Tensor<[1,4800,128]>,
Tensor<[4800,128]>,
ttnn.reshapeaten::view5
548Tensor<[4800,128]>,
Tensor<[1,4800,128]>,
ttnn.reshapeaten::view5
549Tensor<[1,4800,128]>,
Tensor<[1,4800,2,64]>,
ttnn.reshapeaten::view5
550Tensor<[1,128,4800]>,
Tensor<[1,128,60,80]>,
ttnn.reshapeaten::view5
551Tensor<[1,128,15,20]>,
Tensor<[1,128,300]>,
ttnn.reshapeaten::view5
552Tensor<[1,300,128]>,
Tensor<[300,128]>,
ttnn.reshapeaten::view5
553Tensor<[300,128]>,
Tensor<[1,300,128]>,
ttnn.reshapeaten::view5
554Tensor<[1,300,128]>,
Tensor<[1,300,2,64]>,
ttnn.reshapeaten::view5
555Tensor<[1,2,4800,64]>,
Tensor<[2,4800,64]>,
ttnn.reshapeaten::view5
556Tensor<[1,2,64,300]>,
Tensor<[2,64,300]>,
ttnn.reshapeaten::view5
557Tensor<[1,2,4800,300]>,
Tensor<[2,4800,300]>,
ttnn.reshapeaten::view5
558Tensor<[1,2,300,64]>,
Tensor<[2,300,64]>,
ttnn.reshapeaten::view5
559Tensor<[1,4800,2,64]>,
Tensor<[1,4800,128]>,
ttnn.reshapeaten::view5
560Tensor<[4800,512]>,
Tensor<[1,4800,512]>,
ttnn.reshapeaten::view5
561Tensor<[1,512,4800]>,
Tensor<[1,512,60,80]>,
ttnn.reshapeaten::view5
562Tensor<[1,512,60,80]>,
Tensor<[1,512,4800]>,
ttnn.reshapeaten::view5
563Tensor<[1,4800,512]>,
Tensor<[1,4800,512]>,
ttnn.reshapeaten::view5
564Tensor<[1,512,128]>,
Tensor<[1,512,128]>,
ttnn.reshapeaten::view5
565Tensor<[1,4800,128]>,
Tensor<[1,60,80,128]>,
ttnn.reshapeaten::view5
566Tensor<[1,320,30,40]>,
Tensor<[1,320,1200]>,
ttnn.reshapeaten::view5
567Tensor<[1,1200,320]>,
Tensor<[1200,320]>,
ttnn.reshapeaten::view5
568Tensor<[1200,320]>,
Tensor<[1,1200,320]>,
ttnn.reshapeaten::view5
569Tensor<[1,1200,320]>,
Tensor<[1,1200,5,64]>,
ttnn.reshapeaten::view5
570Tensor<[1,320,1200]>,
Tensor<[1,320,30,40]>,
ttnn.reshapeaten::view5
571Tensor<[1,320,15,20]>,
Tensor<[1,320,300]>,
ttnn.reshapeaten::view5
572Tensor<[1,300,320]>,
Tensor<[300,320]>,
ttnn.reshapeaten::view5
573Tensor<[300,320]>,
Tensor<[1,300,320]>,
ttnn.reshapeaten::view5
574Tensor<[1,300,320]>,
Tensor<[1,300,5,64]>,
ttnn.reshapeaten::view5
575Tensor<[1,5,1200,64]>,
Tensor<[5,1200,64]>,
ttnn.reshapeaten::view5
576Tensor<[1,5,64,300]>,
Tensor<[5,64,300]>,
ttnn.reshapeaten::view5
577Tensor<[1,5,1200,300]>,
Tensor<[5,1200,300]>,
ttnn.reshapeaten::view5
578Tensor<[1,5,300,64]>,
Tensor<[5,300,64]>,
ttnn.reshapeaten::view5
579Tensor<[1,1200,5,64]>,
Tensor<[1,1200,320]>,
ttnn.reshapeaten::view5
580Tensor<[1200,1280]>,
Tensor<[1,1200,1280]>,
ttnn.reshapeaten::view5
581Tensor<[1,1280,1200]>,
Tensor<[1,1280,30,40]>,
ttnn.reshapeaten::view5
582Tensor<[1,1280,30,40]>,
Tensor<[1,1280,1200]>,
ttnn.reshapeaten::view5
583Tensor<[1,1200,1280]>,
Tensor<[1,1200,1280]>,
ttnn.reshapeaten::view5
584Tensor<[1,1280,320]>,
Tensor<[1,1280,320]>,
ttnn.reshapeaten::view5
585Tensor<[1,1200,320]>,
Tensor<[1,30,40,320]>,
ttnn.reshapeaten::view5
586Tensor<[1,512,15,20]>,
Tensor<[1,512,300]>,
ttnn.reshapeaten::view5
587Tensor<[1,300,512]>,
Tensor<[300,512]>,
ttnn.reshapeaten::view5
588Tensor<[300,512]>,
Tensor<[1,300,512]>,
ttnn.reshapeaten::view5
589Tensor<[1,300,512]>,
Tensor<[1,300,8,64]>,
ttnn.reshapeaten::view5
590Tensor<[1,8,300,64]>,
Tensor<[8,300,64]>,
ttnn.reshapeaten::view5
591Tensor<[1,8,64,300]>,
Tensor<[8,64,300]>,
ttnn.reshapeaten::view5
592Tensor<[1,8,300,300]>,
Tensor<[8,300,300]>,
ttnn.reshapeaten::view5
593Tensor<[1,300,8,64]>,
Tensor<[1,300,512]>,
ttnn.reshapeaten::view5
594Tensor<[300,2048]>,
Tensor<[1,300,2048]>,
ttnn.reshapeaten::view5
595Tensor<[1,2048,300]>,
Tensor<[1,2048,15,20]>,
ttnn.reshapeaten::view5
596Tensor<[1,2048,15,20]>,
Tensor<[1,2048,300]>,
ttnn.reshapeaten::view5
597Tensor<[1,300,2048]>,
Tensor<[1,300,2048]>,
ttnn.reshapeaten::view5
598Tensor<[1,2048,512]>,
Tensor<[1,2048,512]>,
ttnn.reshapeaten::view5
599Tensor<[1,300,512]>,
Tensor<[1,15,20,512]>,
ttnn.reshapeaten::view5
600Tensor<[30]>,
Tensor<[30,1]>,
ttnn.reshapeaten::view5
601Tensor<[60]>,
Tensor<[60,1]>,
ttnn.reshapeaten::view5
602Tensor<[120]>,
Tensor<[120,1]>,
ttnn.reshapeaten::view5
603Tensor<[240]>,
Tensor<[240,1]>,
ttnn.reshapeaten::view5
604Tensor<[480]>,
Tensor<[480,1]>,
ttnn.reshapeaten::view5
605Tensor<[1,12,197]>,
Tensor<[1,12,197,1]>,
ttnn.reshapeaten::_safe_softmax4
606Tensor<[1,768,14,14]>,
Tensor<[1,768,196]>,
ttnn.reshapeaten::view5
607Tensor<[1,197,768]>,
Tensor<[197,768]>,
ttnn.reshapeaten::view5
608Tensor<[197,768]>,
Tensor<[1,197,768]>,
ttnn.reshapeaten::view5
609Tensor<[1,197,768]>,
Tensor<[1,197,12,64]>,
ttnn.reshapeaten::view5
610Tensor<[1,12,197,64]>,
Tensor<[12,197,64]>,
ttnn.reshapeaten::view5
611Tensor<[1,12,64,197]>,
Tensor<[12,64,197]>,
ttnn.reshapeaten::view5
612Tensor<[12,197,197]>,
Tensor<[1,12,197,197]>,
ttnn.reshapeaten::view5
613Tensor<[1,12,197,197]>,
Tensor<[12,197,197]>,
ttnn.reshapeaten::view5
614Tensor<[12,197,64]>,
Tensor<[1,12,197,64]>,
ttnn.reshapeaten::view5
615Tensor<[1,197,12,64]>,
Tensor<[1,197,768]>,
ttnn.reshapeaten::view5
616Tensor<[197,3072]>,
Tensor<[1,197,3072]>,
ttnn.reshapeaten::view5
617Tensor<[1,197,3072]>,
Tensor<[197,3072]>,
ttnn.reshapeaten::view5
618Tensor<[1,1,16384]>,
Tensor<[1,1,16384,1]>,
ttnn.reshapeaten::_softmax4
619Tensor<[1,2,4096]>,
Tensor<[1,2,4096,1]>,
ttnn.reshapeaten::_softmax4
620Tensor<[1,5,1024]>,
Tensor<[1,5,1024,1]>,
ttnn.reshapeaten::_softmax4
621Tensor<[1,16384,256]>,
Tensor<[1,1,16384,256]>,
ttnn.reshapeaten::_unsafe_view5
622Tensor<[1,16384,32]>,
Tensor<[1,1,16384,32]>,
ttnn.reshapeaten::_unsafe_view5
623Tensor<[1,16384,32]>,
Tensor<[1,16384,32]>,
ttnn.reshapeaten::_unsafe_view5
624Tensor<[2,4096,256]>,
Tensor<[1,2,4096,256]>,
ttnn.reshapeaten::_unsafe_view5
625Tensor<[2,4096,32]>,
Tensor<[1,2,4096,32]>,
ttnn.reshapeaten::_unsafe_view5
626Tensor<[1,4096,64]>,
Tensor<[1,4096,64]>,
ttnn.reshapeaten::_unsafe_view5
627Tensor<[5,1024,256]>,
Tensor<[1,5,1024,256]>,
ttnn.reshapeaten::_unsafe_view5
628Tensor<[5,1024,32]>,
Tensor<[1,5,1024,32]>,
ttnn.reshapeaten::_unsafe_view5
629Tensor<[1,1024,160]>,
Tensor<[1,1024,160]>,
ttnn.reshapeaten::_unsafe_view5
630Tensor<[8,256,32]>,
Tensor<[1,8,256,32]>,
ttnn.reshapeaten::_unsafe_view5
631Tensor<[1,256,256]>,
Tensor<[1,256,256]>,
ttnn.reshapeaten::_unsafe_view5
632Tensor<[1,16384,256]>,
Tensor<[1,16384,256]>,
ttnn.reshapeaten::_unsafe_view5
633Tensor<[1,4096,256]>,
Tensor<[1,4096,256]>,
ttnn.reshapeaten::_unsafe_view5
634Tensor<[1,1024,256]>,
Tensor<[1,1024,256]>,
ttnn.reshapeaten::_unsafe_view5
635Tensor<[160]>,
Tensor<[160,1,1]>,
ttnn.reshapeaten::convolution4
636Tensor<[1024]>,
Tensor<[1024,1,1]>,
ttnn.reshapeaten::convolution4
637Tensor<[150]>,
Tensor<[150,1,1]>,
ttnn.reshapeaten::convolution4
638Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128,1]>,
ttnn.reshapeaten::index.Tensor4
639Tensor<[1,32,128,128]>,
Tensor<[1,32,16384]>,
ttnn.reshapeaten::view5
640Tensor<[1,16384,32]>,
Tensor<[16384,32]>,
ttnn.reshapeaten::view5
641Tensor<[16384,32]>,
Tensor<[1,16384,32]>,
ttnn.reshapeaten::view5
642Tensor<[1,16384,32]>,
Tensor<[1,16384,1,32]>,
ttnn.reshapeaten::view5
643Tensor<[1,32,16384]>,
Tensor<[1,32,128,128]>,
ttnn.reshapeaten::view5
644Tensor<[1,32,16,16]>,
Tensor<[1,32,256]>,
ttnn.reshapeaten::view5
645Tensor<[1,256,32]>,
Tensor<[256,32]>,
ttnn.reshapeaten::view5
646Tensor<[256,32]>,
Tensor<[1,256,32]>,
ttnn.reshapeaten::view5
647Tensor<[1,256,32]>,
Tensor<[1,256,1,32]>,
ttnn.reshapeaten::view5
648Tensor<[1,1,16384,32]>,
Tensor<[1,16384,32]>,
ttnn.reshapeaten::view5
649Tensor<[1,1,32,256]>,
Tensor<[1,32,256]>,
ttnn.reshapeaten::view5
650Tensor<[1,1,16384,256]>,
Tensor<[1,16384,256]>,
ttnn.reshapeaten::view5
651Tensor<[1,1,256,32]>,
Tensor<[1,256,32]>,
ttnn.reshapeaten::view5
652Tensor<[1,16384,1,32]>,
Tensor<[1,16384,32]>,
ttnn.reshapeaten::view5
653Tensor<[16384,128]>,
Tensor<[1,16384,128]>,
ttnn.reshapeaten::view5
654Tensor<[1,128,16384]>,
Tensor<[1,128,128,128]>,
ttnn.reshapeaten::view5
655Tensor<[1,128,128,128]>,
Tensor<[1,128,16384]>,
ttnn.reshapeaten::view5
656Tensor<[1,16384,128]>,
Tensor<[1,16384,128]>,
ttnn.reshapeaten::view5
657Tensor<[1,128,32]>,
Tensor<[1,128,32]>,
ttnn.reshapeaten::view5
658Tensor<[1,16384,32]>,
Tensor<[1,128,128,32]>,
ttnn.reshapeaten::view5
659Tensor<[1,64,64,64]>,
Tensor<[1,64,4096]>,
ttnn.reshapeaten::view5
660Tensor<[1,4096,64]>,
Tensor<[4096,64]>,
ttnn.reshapeaten::view5
661Tensor<[4096,64]>,
Tensor<[1,4096,64]>,
ttnn.reshapeaten::view5
662Tensor<[1,4096,64]>,
Tensor<[1,4096,2,32]>,
ttnn.reshapeaten::view5
663Tensor<[1,64,4096]>,
Tensor<[1,64,64,64]>,
ttnn.reshapeaten::view5
664Tensor<[1,64,16,16]>,
Tensor<[1,64,256]>,
ttnn.reshapeaten::view5
665Tensor<[1,256,64]>,
Tensor<[256,64]>,
ttnn.reshapeaten::view5
666Tensor<[256,64]>,
Tensor<[1,256,64]>,
ttnn.reshapeaten::view5
667Tensor<[1,256,64]>,
Tensor<[1,256,2,32]>,
ttnn.reshapeaten::view5
668Tensor<[1,2,4096,32]>,
Tensor<[2,4096,32]>,
ttnn.reshapeaten::view5
669Tensor<[1,2,32,256]>,
Tensor<[2,32,256]>,
ttnn.reshapeaten::view5
670Tensor<[1,2,4096,256]>,
Tensor<[2,4096,256]>,
ttnn.reshapeaten::view5
671Tensor<[1,2,256,32]>,
Tensor<[2,256,32]>,
ttnn.reshapeaten::view5
672Tensor<[1,4096,2,32]>,
Tensor<[1,4096,64]>,
ttnn.reshapeaten::view5
673Tensor<[4096,256]>,
Tensor<[1,4096,256]>,
ttnn.reshapeaten::view5
674Tensor<[1,256,4096]>,
Tensor<[1,256,64,64]>,
ttnn.reshapeaten::view5
675Tensor<[1,256,64,64]>,
Tensor<[1,256,4096]>,
ttnn.reshapeaten::view5
676Tensor<[1,4096,64]>,
Tensor<[1,64,64,64]>,
ttnn.reshapeaten::view5
677Tensor<[1,160,32,32]>,
Tensor<[1,160,1024]>,
ttnn.reshapeaten::view5
678Tensor<[1,1024,160]>,
Tensor<[1024,160]>,
ttnn.reshapeaten::view5
679Tensor<[1024,160]>,
Tensor<[1,1024,160]>,
ttnn.reshapeaten::view5
680Tensor<[1,1024,160]>,
Tensor<[1,1024,5,32]>,
ttnn.reshapeaten::view5
681Tensor<[1,160,1024]>,
Tensor<[1,160,32,32]>,
ttnn.reshapeaten::view5
682Tensor<[1,160,16,16]>,
Tensor<[1,160,256]>,
ttnn.reshapeaten::view5
683Tensor<[1,256,160]>,
Tensor<[256,160]>,
ttnn.reshapeaten::view5
684Tensor<[256,160]>,
Tensor<[1,256,160]>,
ttnn.reshapeaten::view5
685Tensor<[1,256,160]>,
Tensor<[1,256,5,32]>,
ttnn.reshapeaten::view5
686Tensor<[1,5,1024,32]>,
Tensor<[5,1024,32]>,
ttnn.reshapeaten::view5
687Tensor<[1,5,32,256]>,
Tensor<[5,32,256]>,
ttnn.reshapeaten::view5
688Tensor<[1,5,1024,256]>,
Tensor<[5,1024,256]>,
ttnn.reshapeaten::view5
689Tensor<[1,5,256,32]>,
Tensor<[5,256,32]>,
ttnn.reshapeaten::view5
690Tensor<[1,1024,5,32]>,
Tensor<[1,1024,160]>,
ttnn.reshapeaten::view5
691Tensor<[1,640,1024]>,
Tensor<[1,640,32,32]>,
ttnn.reshapeaten::view5
692Tensor<[1,640,32,32]>,
Tensor<[1,640,1024]>,
ttnn.reshapeaten::view5
693Tensor<[1,1024,640]>,
Tensor<[1,1024,640]>,
ttnn.reshapeaten::view5
694Tensor<[1,640,160]>,
Tensor<[1,640,160]>,
ttnn.reshapeaten::view5
695Tensor<[1,1024,160]>,
Tensor<[1,32,32,160]>,
ttnn.reshapeaten::view5
696Tensor<[1,256,16,16]>,
Tensor<[1,256,256]>,
ttnn.reshapeaten::view5
697Tensor<[1,256,8,32]>,
Tensor<[1,256,256]>,
ttnn.reshapeaten::view5
698Tensor<[256,1024]>,
Tensor<[1,256,1024]>,
ttnn.reshapeaten::view5
699Tensor<[1,1024,256]>,
Tensor<[1,1024,16,16]>,
ttnn.reshapeaten::view5
700Tensor<[1,1024,16,16]>,
Tensor<[1,1024,256]>,
ttnn.reshapeaten::view5
701Tensor<[1,256,1024]>,
Tensor<[1,256,1024]>,
ttnn.reshapeaten::view5
702Tensor<[1,256,256]>,
Tensor<[1,16,16,256]>,
ttnn.reshapeaten::view5
703Tensor<[1,32,256]>,
Tensor<[1,32,256]>,
ttnn.reshapeaten::view5
704Tensor<[1,256,16384]>,
Tensor<[1,256,128,128]>,
ttnn.reshapeaten::view5
705Tensor<[1,64,256]>,
Tensor<[1,64,256]>,
ttnn.reshapeaten::view5
706Tensor<[1,160,256]>,
Tensor<[1,160,256]>,
ttnn.reshapeaten::view5
707Tensor<[1,256,1024]>,
Tensor<[1,256,32,32]>,
ttnn.reshapeaten::view5
708Tensor<[1,256,256]>,
Tensor<[1,256,16,16]>,
ttnn.reshapeaten::view5
709Tensor<[1,71,7]>,
Tensor<[1,71,7,1]>,
ttnn.reshapeaten::_safe_softmax4
710Tensor<[1,32,7]>,
Tensor<[1,32,7]>,
ttnn.reshapeaten::_unsafe_view5
711Tensor<[7,4672]>,
Tensor<[1,7,4672]>,
ttnn.reshapeaten::_unsafe_view5
712Tensor<[71,7,7]>,
Tensor<[1,71,7,7]>,
ttnn.reshapeaten::_unsafe_view5
713Tensor<[71,7,64]>,
Tensor<[1,71,7,64]>,
ttnn.reshapeaten::_unsafe_view5
714Tensor<[1,7,71,64]>,
Tensor<[1,7,4544]>,
ttnn.reshapeaten::_unsafe_view5
715Tensor<[7,4544]>,
Tensor<[1,7,4544]>,
ttnn.reshapeaten::_unsafe_view5
716Tensor<[7,18176]>,
Tensor<[1,7,18176]>,
ttnn.reshapeaten::_unsafe_view5
717Tensor<[1,7,4544]>,
Tensor<[7,4544]>,
ttnn.reshapeaten::_unsafe_view5
718Tensor<[7,65024]>,
Tensor<[1,7,65024]>,
ttnn.reshapeaten::_unsafe_view5
719Tensor<[7,1]>,
Tensor<[7,1,1]>,
ttnn.reshapeaten::index.Tensor4
720Tensor<[1,7,1,64]>,
Tensor<[1,7,1,64,1]>,
ttnn.reshapeaten::index.Tensor4
721Tensor<[1,7,64]>,
Tensor<[1,1,7,64]>,
ttnn.reshapeaten::unsqueeze5
722Tensor<[1,32,1]>,
Tensor<[1,32,1]>,
ttnn.reshapeaten::view5
723Tensor<[1,1,7]>,
Tensor<[1,1,7]>,
ttnn.reshapeaten::view5
724Tensor<[1,7,4672]>,
Tensor<[1,7,73,64]>,
ttnn.reshapeaten::view5
725Tensor<[1,71,7,64]>,
Tensor<[1,71,7,64]>,
ttnn.reshapeaten::view5
726Tensor<[1,1,7,64]>,
Tensor<[1,1,7,64]>,
ttnn.reshapeaten::view5
727Tensor<[1,71,7,64]>,
Tensor<[71,7,64]>,
ttnn.reshapeaten::view5
728Tensor<[1,71,64,7]>,
Tensor<[71,64,7]>,
ttnn.reshapeaten::view5
729Tensor<[1,71,7,7]>,
Tensor<[71,7,7]>,
ttnn.reshapeaten::view5
730Tensor<[1,7,18176]>,
Tensor<[7,18176]>,
ttnn.reshapeaten::view5
731Tensor<[1,1280]>,
Tensor<[1,1280,1,1]>,
ttnn.reshapeaten::mean.dim4
732Tensor<[96]>,
Tensor<[96,1]>,
ttnn.reshapeaten::unsqueeze5
733Tensor<[96,1]>,
Tensor<[96,1,1]>,
ttnn.reshapeaten::unsqueeze5
734Tensor<[144]>,
Tensor<[144,1]>,
ttnn.reshapeaten::unsqueeze5
735Tensor<[144,1]>,
Tensor<[144,1,1]>,
ttnn.reshapeaten::unsqueeze5
736Tensor<[192]>,
Tensor<[192,1]>,
ttnn.reshapeaten::unsqueeze5
737Tensor<[192,1]>,
Tensor<[192,1,1]>,
ttnn.reshapeaten::unsqueeze5
738Tensor<[384]>,
Tensor<[384,1]>,
ttnn.reshapeaten::unsqueeze5
739Tensor<[384,1]>,
Tensor<[384,1,1]>,
ttnn.reshapeaten::unsqueeze5
740Tensor<[576]>,
Tensor<[576,1]>,
ttnn.reshapeaten::unsqueeze5
741Tensor<[576,1]>,
Tensor<[576,1,1]>,
ttnn.reshapeaten::unsqueeze5
742Tensor<[960]>,
Tensor<[960,1]>,
ttnn.reshapeaten::unsqueeze5
743Tensor<[960,1]>,
Tensor<[960,1,1]>,
ttnn.reshapeaten::unsqueeze5
744Tensor<[1,1280,1,1]>,
Tensor<[1,1280]>,
ttnn.reshapeaten::view5
745Tensor<[1,12,12]>,
Tensor<[1,12,12,1]>,
ttnn.reshapeaten::_safe_softmax4
746Tensor<[1,12]>,
Tensor<[1,1,12]>,
ttnn.reshapeaten::unsqueeze4
747Tensor<[1,1,12]>,
Tensor<[1,1,1,12]>,
ttnn.reshapeaten::unsqueeze4
748Tensor<[1,12,128]>,
Tensor<[12,128]>,
ttnn.reshapeaten::view5
749Tensor<[12,768]>,
Tensor<[1,12,768]>,
ttnn.reshapeaten::view5
750Tensor<[1,12,768]>,
Tensor<[12,768]>,
ttnn.reshapeaten::view5
751Tensor<[1,12,768]>,
Tensor<[1,12,12,64]>,
ttnn.reshapeaten::view5
752Tensor<[1,12,12,64]>,
Tensor<[12,12,64]>,
ttnn.reshapeaten::view5
753Tensor<[1,12,64,12]>,
Tensor<[12,64,12]>,
ttnn.reshapeaten::view5
754Tensor<[12,12,12]>,
Tensor<[1,12,12,12]>,
ttnn.reshapeaten::view5
755Tensor<[1,12,12,12]>,
Tensor<[12,12,12]>,
ttnn.reshapeaten::view5
756Tensor<[12,12,64]>,
Tensor<[1,12,12,64]>,
ttnn.reshapeaten::view5
757Tensor<[1,12,12,64]>,
Tensor<[1,12,768]>,
ttnn.reshapeaten::view5
758Tensor<[12,3072]>,
Tensor<[1,12,3072]>,
ttnn.reshapeaten::view5
759Tensor<[1,12,3072]>,
Tensor<[12,3072]>,
ttnn.reshapeaten::view5
760Tensor<[12,2]>,
Tensor<[1,12,2]>,
ttnn.reshapeaten::view5
761Tensor<[1,12,9]>,
Tensor<[1,12,9,1]>,
ttnn.reshapeaten::_safe_softmax4
762Tensor<[1,9]>,
Tensor<[1,1,9]>,
ttnn.reshapeaten::unsqueeze4
763Tensor<[1,1,9]>,
Tensor<[1,1,1,9]>,
ttnn.reshapeaten::unsqueeze4
764Tensor<[1,9,128]>,
Tensor<[9,128]>,
ttnn.reshapeaten::view5
765Tensor<[9,768]>,
Tensor<[1,9,768]>,
ttnn.reshapeaten::view5
766Tensor<[1,9,768]>,
Tensor<[1,9,12,64]>,
ttnn.reshapeaten::view5
767Tensor<[1,12,9,64]>,
Tensor<[12,9,64]>,
ttnn.reshapeaten::view5
768Tensor<[1,12,64,9]>,
Tensor<[12,64,9]>,
ttnn.reshapeaten::view5
769Tensor<[12,9,9]>,
Tensor<[1,12,9,9]>,
ttnn.reshapeaten::view5
770Tensor<[1,12,9,9]>,
Tensor<[12,9,9]>,
ttnn.reshapeaten::view5
771Tensor<[12,9,64]>,
Tensor<[1,12,9,64]>,
ttnn.reshapeaten::view5
772Tensor<[1,9,12,64]>,
Tensor<[1,9,768]>,
ttnn.reshapeaten::view5
773Tensor<[9,3072]>,
Tensor<[1,9,3072]>,
ttnn.reshapeaten::view5
774Tensor<[1,9,3072]>,
Tensor<[9,3072]>,
ttnn.reshapeaten::view5
775Tensor<[9,128]>,
Tensor<[1,9,128]>,
ttnn.reshapeaten::view5
776Tensor<[9,30000]>,
Tensor<[1,9,30000]>,
ttnn.reshapeaten::view5
777Tensor<[1,16,9]>,
Tensor<[1,16,9,1]>,
ttnn.reshapeaten::_safe_softmax4
778Tensor<[9,2048]>,
Tensor<[1,9,2048]>,
ttnn.reshapeaten::view5
779Tensor<[1,9,2048]>,
Tensor<[9,2048]>,
ttnn.reshapeaten::view5
780Tensor<[1,9,2048]>,
Tensor<[1,9,16,128]>,
ttnn.reshapeaten::view5
781Tensor<[1,16,9,128]>,
Tensor<[16,9,128]>,
ttnn.reshapeaten::view5
782Tensor<[1,16,128,9]>,
Tensor<[16,128,9]>,
ttnn.reshapeaten::view5
783Tensor<[16,9,9]>,
Tensor<[1,16,9,9]>,
ttnn.reshapeaten::view5
784Tensor<[1,16,9,9]>,
Tensor<[16,9,9]>,
ttnn.reshapeaten::view5
785Tensor<[16,9,128]>,
Tensor<[1,16,9,128]>,
ttnn.reshapeaten::view5
786Tensor<[1,9,16,128]>,
Tensor<[1,9,2048]>,
ttnn.reshapeaten::view5
787Tensor<[9,8192]>,
Tensor<[1,9,8192]>,
ttnn.reshapeaten::view5
788Tensor<[1,9,8192]>,
Tensor<[9,8192]>,
ttnn.reshapeaten::view5
789Tensor<[9,1024]>,
Tensor<[1,9,1024]>,
ttnn.reshapeaten::view5
790Tensor<[1,9,1024]>,
Tensor<[9,1024]>,
ttnn.reshapeaten::view5
791Tensor<[1,9,1024]>,
Tensor<[1,9,16,64]>,
ttnn.reshapeaten::view5
792Tensor<[1,16,9,64]>,
Tensor<[16,9,64]>,
ttnn.reshapeaten::view5
793Tensor<[1,16,64,9]>,
Tensor<[16,64,9]>,
ttnn.reshapeaten::view5
794Tensor<[16,9,64]>,
Tensor<[1,16,9,64]>,
ttnn.reshapeaten::view5
795Tensor<[1,9,16,64]>,
Tensor<[1,9,1024]>,
ttnn.reshapeaten::view5
796Tensor<[9,4096]>,
Tensor<[1,9,4096]>,
ttnn.reshapeaten::view5
797Tensor<[1,9,4096]>,
Tensor<[9,4096]>,
ttnn.reshapeaten::view5
798Tensor<[1,64,9]>,
Tensor<[1,64,9,1]>,
ttnn.reshapeaten::_safe_softmax4
799Tensor<[1,9,4096]>,
Tensor<[1,9,64,64]>,
ttnn.reshapeaten::view5
800Tensor<[1,64,9,64]>,
Tensor<[64,9,64]>,
ttnn.reshapeaten::view5
801Tensor<[1,64,64,9]>,
Tensor<[64,64,9]>,
ttnn.reshapeaten::view5
802Tensor<[64,9,9]>,
Tensor<[1,64,9,9]>,
ttnn.reshapeaten::view5
803Tensor<[1,64,9,9]>,
Tensor<[64,9,9]>,
ttnn.reshapeaten::view5
804Tensor<[64,9,64]>,
Tensor<[1,64,9,64]>,
ttnn.reshapeaten::view5
805Tensor<[1,9,64,64]>,
Tensor<[1,9,4096]>,
ttnn.reshapeaten::view5
806Tensor<[9,16384]>,
Tensor<[1,9,16384]>,
ttnn.reshapeaten::view5
807Tensor<[1,9,16384]>,
Tensor<[9,16384]>,
ttnn.reshapeaten::view5
808Tensor<[1,12,14]>,
Tensor<[1,12,14,1]>,
ttnn.reshapeaten::_safe_softmax4
809Tensor<[1,14,1]>,
Tensor<[1,14]>,
ttnn.reshapeaten::squeeze.dim5
810Tensor<[1,14]>,
Tensor<[1,1,14]>,
ttnn.reshapeaten::unsqueeze4
811Tensor<[1,1,14]>,
Tensor<[1,1,1,14]>,
ttnn.reshapeaten::unsqueeze4
812Tensor<[1,14,128]>,
Tensor<[14,128]>,
ttnn.reshapeaten::view5
813Tensor<[14,768]>,
Tensor<[1,14,768]>,
ttnn.reshapeaten::view5
814Tensor<[1,14,768]>,
Tensor<[14,768]>,
ttnn.reshapeaten::view5
815Tensor<[1,14,768]>,
Tensor<[1,14,12,64]>,
ttnn.reshapeaten::view5
816Tensor<[1,12,14,64]>,
Tensor<[12,14,64]>,
ttnn.reshapeaten::view5
817Tensor<[1,12,64,14]>,
Tensor<[12,64,14]>,
ttnn.reshapeaten::view5
818Tensor<[12,14,14]>,
Tensor<[1,12,14,14]>,
ttnn.reshapeaten::view5
819Tensor<[1,12,14,14]>,
Tensor<[12,14,14]>,
ttnn.reshapeaten::view5
820Tensor<[12,14,64]>,
Tensor<[1,12,14,64]>,
ttnn.reshapeaten::view5
821Tensor<[1,14,12,64]>,
Tensor<[1,14,768]>,
ttnn.reshapeaten::view5
822Tensor<[14,3072]>,
Tensor<[1,14,3072]>,
ttnn.reshapeaten::view5
823Tensor<[1,14,3072]>,
Tensor<[14,3072]>,
ttnn.reshapeaten::view5
824Tensor<[14,2]>,
Tensor<[1,14,2]>,
ttnn.reshapeaten::view5
825Tensor<[1,12,50]>,
Tensor<[1,12,50,1]>,
ttnn.reshapeaten::_safe_softmax4
826Tensor<[2,8,7]>,
Tensor<[2,8,7,1]>,
ttnn.reshapeaten::_safe_softmax4
827Tensor<[2,8,7,64]>,
Tensor<[16,7,64]>,
ttnn.reshapeaten::_unsafe_view5
828Tensor<[2,8,64,7]>,
Tensor<[16,64,7]>,
ttnn.reshapeaten::_unsafe_view5
829Tensor<[2]>,
Tensor<[2,1]>,
ttnn.reshapeaten::index.Tensor4
830Tensor<[2,7]>,
Tensor<[2,1,7]>,
ttnn.reshapeaten::unsqueeze4
831Tensor<[2,1,7]>,
Tensor<[2,1,1,7]>,
ttnn.reshapeaten::unsqueeze4
832Tensor<[1,768,7,7]>,
Tensor<[1,768,49]>,
ttnn.reshapeaten::view5
833Tensor<[1,50,768]>,
Tensor<[50,768]>,
ttnn.reshapeaten::view5
834Tensor<[50,768]>,
Tensor<[1,50,768]>,
ttnn.reshapeaten::view5
835Tensor<[1,50,768]>,
Tensor<[1,50,12,64]>,
ttnn.reshapeaten::view5
836Tensor<[1,12,50,64]>,
Tensor<[12,50,64]>,
ttnn.reshapeaten::view5
837Tensor<[1,12,64,50]>,
Tensor<[12,64,50]>,
ttnn.reshapeaten::view5
838Tensor<[12,50,50]>,
Tensor<[1,12,50,50]>,
ttnn.reshapeaten::view5
839Tensor<[1,12,50,50]>,
Tensor<[12,50,50]>,
ttnn.reshapeaten::view5
840Tensor<[12,50,64]>,
Tensor<[1,12,50,64]>,
ttnn.reshapeaten::view5
841Tensor<[1,50,12,64]>,
Tensor<[1,50,768]>,
ttnn.reshapeaten::view5
842Tensor<[50,3072]>,
Tensor<[1,50,3072]>,
ttnn.reshapeaten::view5
843Tensor<[1,50,3072]>,
Tensor<[50,3072]>,
ttnn.reshapeaten::view5
844Tensor<[2,7]>,
Tensor<[2,7]>,
ttnn.reshapeaten::view4
845Tensor<[2,7,512]>,
Tensor<[14,512]>,
ttnn.reshapeaten::view5
846Tensor<[14,512]>,
Tensor<[2,7,512]>,
ttnn.reshapeaten::view5
847Tensor<[2,7,512]>,
Tensor<[2,7,8,64]>,
ttnn.reshapeaten::view5
848Tensor<[16,7,7]>,
Tensor<[2,8,7,7]>,
ttnn.reshapeaten::view5
849Tensor<[2,8,7,7]>,
Tensor<[16,7,7]>,
ttnn.reshapeaten::view5
850Tensor<[16,7,64]>,
Tensor<[2,8,7,64]>,
ttnn.reshapeaten::view5
851Tensor<[2,7,8,64]>,
Tensor<[2,7,512]>,
ttnn.reshapeaten::view5
852Tensor<[14,2048]>,
Tensor<[2,7,2048]>,
ttnn.reshapeaten::view5
853Tensor<[2,7,2048]>,
Tensor<[14,2048]>,
ttnn.reshapeaten::view5
854Tensor<[1,16,197]>,
Tensor<[1,16,197,1]>,
ttnn.reshapeaten::_softmax4
855Tensor<[197,1024]>,
Tensor<[1,197,1024]>,
ttnn.reshapeaten::_unsafe_view5
856Tensor<[16,197,197]>,
Tensor<[1,16,197,197]>,
ttnn.reshapeaten::_unsafe_view5
857Tensor<[16,197,64]>,
Tensor<[1,16,197,64]>,
ttnn.reshapeaten::_unsafe_view5
858Tensor<[1,16,27,27]>,
Tensor<[1,16,27,27,1]>,
ttnn.reshapeaten::index.Tensor4
859Tensor<[38809]>,
Tensor<[38809,1]>,
ttnn.reshapeaten::index.Tensor4
860Tensor<[196,196,1]>,
Tensor<[196,196]>,
ttnn.reshapeaten::select.int4
861Tensor<[1,197]>,
Tensor<[197]>,
ttnn.reshapeaten::select.int4
862Tensor<[197,1]>,
Tensor<[197]>,
ttnn.reshapeaten::select.int4
863Tensor<[196,196]>,
Tensor<[196,196,1]>,
ttnn.reshapeaten::select_scatter4
864Tensor<[197]>,
Tensor<[1,197]>,
ttnn.reshapeaten::select_scatter4
865Tensor<[197]>,
Tensor<[197,1]>,
ttnn.reshapeaten::select_scatter4
866Tensor<[14,14]>,
Tensor<[1,14,14]>,
ttnn.reshapeaten::stack4
867Tensor<[2,196]>,
Tensor<[2,196,1]>,
ttnn.reshapeaten::unsqueeze4
868Tensor<[2,196]>,
Tensor<[2,1,196]>,
ttnn.reshapeaten::unsqueeze4
869Tensor<[1,1024,14,14]>,
Tensor<[1,1024,196]>,
ttnn.reshapeaten::view5
870Tensor<[1,197,1024]>,
Tensor<[197,1024]>,
ttnn.reshapeaten::view5
871Tensor<[1,197,1024]>,
Tensor<[1,197,16,64]>,
ttnn.reshapeaten::view5
872Tensor<[1,16,197,64]>,
Tensor<[16,197,64]>,
ttnn.reshapeaten::view5
873Tensor<[1,16,64,197]>,
Tensor<[16,64,197]>,
ttnn.reshapeaten::view5
874Tensor<[729,16]>,
Tensor<[1,27,27,16]>,
ttnn.reshapeaten::view5
875Tensor<[27]>,
Tensor<[27,1]>,
ttnn.reshapeaten::view5
876Tensor<[1,27,27,16]>,
Tensor<[729,16]>,
ttnn.reshapeaten::view5
877Tensor<[14]>,
Tensor<[1,14]>,
ttnn.reshapeaten::view4
878Tensor<[2,14,14]>,
Tensor<[2,196]>,
ttnn.reshapeaten::view4
879Tensor<[197,197]>,
Tensor<[38809]>,
ttnn.reshapeaten::view4
880Tensor<[38809,16]>,
Tensor<[197,197,16]>,
ttnn.reshapeaten::view5
881Tensor<[1,16,197,197]>,
Tensor<[16,197,197]>,
ttnn.reshapeaten::view5
882Tensor<[1,197,16,64]>,
Tensor<[1,197,1024]>,
ttnn.reshapeaten::view5
883Tensor<[197,4096]>,
Tensor<[1,197,4096]>,
ttnn.reshapeaten::view5
884Tensor<[1,197,4096]>,
Tensor<[197,4096]>,
ttnn.reshapeaten::view5
885Tensor<[12,1]>,
Tensor<[12,1,1]>,
ttnn.reshapeaten::index.Tensor4
886Tensor<[1,12,27,27]>,
Tensor<[1,12,27,27,1]>,
ttnn.reshapeaten::index.Tensor4
887Tensor<[729,12]>,
Tensor<[1,27,27,12]>,
ttnn.reshapeaten::view5
888Tensor<[1,27,27,12]>,
Tensor<[729,12]>,
ttnn.reshapeaten::view5
889Tensor<[38809,12]>,
Tensor<[197,197,12]>,
ttnn.reshapeaten::view5
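
The aten::view, aten::unsqueeze, and aten::squeeze.dim rows above all lower to ttnn.reshape. As a rough illustration (shapes borrowed from the 1x25x768 attention entries; the function below is hypothetical and not part of tt-torch), a plain attention-style reshape sequence is enough to generate several of these variations once traced by torch.compile:

```python
import torch

def split_heads(x):
    # [1, 25, 768] -> [1, 25, 12, 64] -> [12, 25, 64]; each step appears in
    # the table above as an aten::view lowered to ttnn.reshape.
    return x.view(1, 25, 12, 64).permute(0, 2, 1, 3).reshape(12, 25, 64)

# torch.compile traces this into an fx graph whose reshape nodes are the kind
# of variations reported above (the real flow uses the tt-torch backend).
compiled = torch.compile(split_heads)
print(compiled(torch.randn(1, 25, 768)).shape)  # torch.Size([12, 25, 64])
```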

stablehlo.reverse::ttnn.?

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[2,2,256,512]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
1 | Tensor<[2,2,128,256]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
2 | Tensor<[2,2,64,128]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
3 | Tensor<[2,2,32,64]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
4 | Tensor<[2,2,16,4]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
5 | Tensor<[2,2,1,16]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
6 | Tensor<[2,2,512,1024]>, dims: [0, 1] | ttnn.? | aten::convolution | 4
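
The stablehlo.reverse variations all originate from aten::convolution, most likely transposed convolutions whose kernels are flipped along the spatial dimensions during lowering. A hedged sketch (the function and shapes are illustrative, chosen to match the 2x2 kernels with 256/512 channels above):

```python
import torch
import torch.nn.functional as F

def upsample(x, weight):
    # conv_transpose2d is recorded as aten::convolution; its StableHLO
    # lowering typically reverses the kernel's spatial dims, which is the
    # likely source of the stablehlo.reverse(dims=[0, 1]) entries above.
    return F.conv_transpose2d(x, weight, stride=2)

out = upsample(torch.randn(1, 512, 16, 16), torch.randn(512, 256, 2, 2))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```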

stablehlo.rng

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Scalar, Scalar, Tensor<[3]>, distribution: UNIFORM | | aten::rand | 4

stablehlo.rsqrt::ttnn.rsqrt

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1,32,1]> | ttnn.rsqrt | aten::rsqrt | 5
1 | Tensor<[1,7,1]> | ttnn.rsqrt | aten::rsqrt | 5
2 | Tensor<[1,1024,512]> | ttnn.rsqrt | aten::gelu | 4
3 | Tensor<[1,256,256]> | ttnn.rsqrt | aten::gelu | 4
4 | Tensor<[1,256,1]> | ttnn.rsqrt | aten::rsqrt | 5
5 | Tensor<[1,64,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
6 | Tensor<[1,256,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
7 | Tensor<[1,128,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
8 | Tensor<[1,512,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
9 | Tensor<[1,1024,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
10 | Tensor<[1,2048,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
11 | Tensor<[920,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
12 | Tensor<[100,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
13 | Tensor<[1,10,3072]> | ttnn.rsqrt | aten::gelu | 4
14 | Tensor<[1,10,768]> | ttnn.rsqrt | aten::gelu | 4
15 | Tensor<[1,10,1]> | ttnn.rsqrt | aten::rsqrt | 5
16 | Tensor<[1,4096,1280]> | ttnn.rsqrt | aten::gelu | 4
17 | Tensor<[1,1024,2560]> | ttnn.rsqrt | aten::gelu | 4
18 | Tensor<[1,256,5120]> | ttnn.rsqrt | aten::gelu | 4
19 | Tensor<[1,64,5120]> | ttnn.rsqrt | aten::gelu | 4
20 | Tensor<[1,32,1,1]> | ttnn.rsqrt | aten::rsqrt | 5
21 | Tensor<[1,4096,1]> | ttnn.rsqrt | aten::rsqrt | 5
22 | Tensor<[1,1024,1]> | ttnn.rsqrt | aten::rsqrt | 5
23 | Tensor<[1,64,1]> | ttnn.rsqrt | aten::rsqrt | 5
24 | Tensor<[1,25,3072]> | ttnn.rsqrt | aten::gelu | 4
25 | Tensor<[1,25,1]> | ttnn.rsqrt | aten::rsqrt | 5
26 | Tensor<[1,1445,768]> | ttnn.rsqrt | aten::gelu | 4
27 | Tensor<[1,1445,1]> | ttnn.rsqrt | aten::rsqrt | 5
28 | Tensor<[1,3072,8]> | ttnn.rsqrt | aten::gelu | 4
29 | Tensor<[1,8,1]> | ttnn.rsqrt | aten::rsqrt | 5
30 | Tensor<[1,256,1280]> | ttnn.rsqrt | aten::gelu | 4
31 | Tensor<[1,2048,768]> | ttnn.rsqrt | aten::gelu | 4
32 | Tensor<[1,2048,1]> | ttnn.rsqrt | aten::rsqrt | 5
33 | Tensor<[1,201,3072]> | ttnn.rsqrt | aten::gelu | 4
34 | Tensor<[1,1536]> | ttnn.rsqrt | aten::gelu | 4
35 | Tensor<[1,201,1]> | ttnn.rsqrt | aten::rsqrt | 5
36 | Tensor<[1,1]> | ttnn.rsqrt | aten::rsqrt | 5
37 | Tensor<[1,19,4096]> | ttnn.rsqrt | aten::gelu | 4
38 | Tensor<[1,19,1]> | ttnn.rsqrt | aten::rsqrt | 5
39 | Tensor<[1,16,3072]> | ttnn.rsqrt | aten::gelu | 4
40 | Tensor<[1,16,1]> | ttnn.rsqrt | aten::rsqrt | 5
41 | Tensor<[1,19200,256]> | ttnn.rsqrt | aten::gelu | 4
42 | Tensor<[1,4800,512]> | ttnn.rsqrt | aten::gelu | 4
43 | Tensor<[1,1200,1280]> | ttnn.rsqrt | aten::gelu | 4
44 | Tensor<[1,300,2048]> | ttnn.rsqrt | aten::gelu | 4
45 | Tensor<[1,19200,1]> | ttnn.rsqrt | aten::rsqrt | 5
46 | Tensor<[1,300,1]> | ttnn.rsqrt | aten::rsqrt | 5
47 | Tensor<[1,4800,1]> | ttnn.rsqrt | aten::rsqrt | 5
48 | Tensor<[1,1200,1]> | ttnn.rsqrt | aten::rsqrt | 5
49 | Tensor<[1,197,3072]> | ttnn.rsqrt | aten::gelu | 4
50 | Tensor<[1,197,1]> | ttnn.rsqrt | aten::rsqrt | 5
51 | Tensor<[1,16384,128]> | ttnn.rsqrt | aten::gelu | 4
52 | Tensor<[1,4096,256]> | ttnn.rsqrt | aten::gelu | 4
53 | Tensor<[1,1024,640]> | ttnn.rsqrt | aten::gelu | 4
54 | Tensor<[1,256,1024]> | ttnn.rsqrt | aten::gelu | 4
55 | Tensor<[1,16384,1]> | ttnn.rsqrt | aten::rsqrt | 5
56 | Tensor<[1,7,18176]> | ttnn.rsqrt | aten::gelu | 4
57 | Tensor<[1,12,1]> | ttnn.rsqrt | aten::rsqrt | 5
58 | Tensor<[1,9,1]> | ttnn.rsqrt | aten::rsqrt | 5
59 | Tensor<[1,14,1]> | ttnn.rsqrt | aten::rsqrt | 5
60 | Tensor<[1,50,1]> | ttnn.rsqrt | aten::rsqrt | 5
61 | Tensor<[2,7,1]> | ttnn.rsqrt | aten::rsqrt | 5
62 | Tensor<[1,197,4096]> | ttnn.rsqrt | aten::gelu | 4
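
Most of the small `[..., 1]`-shaped ttnn.rsqrt rows are consistent with normalization layers that compute `x * rsqrt(var + eps)`; the rows whose Torch Name is aten::gelu appear to come from the gelu decomposition used during tracing. A minimal RMSNorm-style sketch, assuming nothing beyond stock PyTorch (shapes are illustrative):

```python
import torch

def rms_norm(x, eps=1e-6):
    # The aten::rsqrt over a keepdim-reduced [1, 7, 1] tensor matches the
    # Tensor<[1,7,1]> ttnn.rsqrt row above; the hidden size is illustrative.
    variance = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(variance + eps)

print(rms_norm(torch.randn(1, 7, 4544)).shape)  # torch.Size([1, 7, 4544])
```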

stablehlo.scatter::ttnn.scatter

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1,3,720,1280]>, Tensor<[1,1]>, Tensor<[1,3,720,1280]>, update_window_dims: [1, 2, 3], inserted_window_dims: [0], scatter_dims_to_operand_dims: [0], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
1 | Tensor<[1,720,1280]>, Tensor<[1,1]>, Tensor<[1,720,1280]>, update_window_dims: [1, 2], inserted_window_dims: [0], scatter_dims_to_operand_dims: [0], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
2 | Tensor<[196,196,2]>, Tensor<[1,1]>, Tensor<[196,196,1]>, update_window_dims: [0, 1], inserted_window_dims: [2], scatter_dims_to_operand_dims: [2], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
3 | Tensor<[197,197]>, Tensor<[1,1]>, Tensor<[1,197]>, update_window_dims: [1], inserted_window_dims: [0], scatter_dims_to_operand_dims: [0], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
4 | Tensor<[197,197]>, Tensor<[1,1]>, Tensor<[197,1]>, update_window_dims: [0], inserted_window_dims: [1], scatter_dims_to_operand_dims: [1], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
5 | Tensor<[197]>, Tensor<[1,1]>, Tensor<[1]>, inserted_window_dims: [0], scatter_dims_to_operand_dims: [0], index_vector_dim: 1 | ttnn.scatter | aten::select_scatter | 4
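
Every scatter variation above is produced by aten::select_scatter, i.e. writing a slice back into a larger tensor. A small sketch matching row 3 (a [197] row written into a [197, 197] table); the function name is made up for illustration:

```python
import torch

def write_first_row(table, row):
    # aten::select_scatter along dim 0 becomes stablehlo.scatter with
    # update_window_dims=[1], inserted_window_dims=[0] (row 3 above).
    return torch.select_scatter(table, row, dim=0, index=0)

out = write_first_row(torch.zeros(197, 197), torch.ones(197))
print(out[0].sum())  # tensor(197.)
```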

stablehlo.select::ttnn.where

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1,32,32,32]>, Tensor<[1,32,32,32]> | ttnn.where | aten::_safe_softmax | 4
1 | Tensor<[32,32]>, Tensor<[32,32]> | ttnn.where | aten::triu | 4
2 | Tensor<[1,1,32,32]>, Tensor<[1,1,32,32]> | ttnn.where | aten::where.self | 4
3 | Tensor<[1,12,7,7]>, Tensor<[1,12,7,7]> | ttnn.where | aten::_safe_softmax | 4
4 | Tensor<[7,7]>, Tensor<[7,7]> | ttnn.where | aten::where.self | 4
5 | Tensor<[1,1,7,7]>, Tensor<[1,1,7,7]> | ttnn.where | aten::where.self | 4
6 | Tensor<[1,920]>, Tensor<[1,920]> | ttnn.where | aten::where.self | 4
7 | Tensor<[1,12,10,10]>, Tensor<[1,12,10,10]> | ttnn.where | aten::_safe_softmax | 4
8 | Tensor<[1,1,10,10]>, Tensor<[1,1,10,10]> | ttnn.where | aten::where.self | 4
9 | Tensor<[1,8,4096,4096]>, Tensor<[1,8,4096,4096]> | ttnn.where | aten::_safe_softmax | 4
10 | Tensor<[1,8,4096,9]>, Tensor<[1,8,4096,9]> | ttnn.where | aten::_safe_softmax | 4
11 | Tensor<[1,8,1024,1024]>, Tensor<[1,8,1024,1024]> | ttnn.where | aten::_safe_softmax | 4
12 | Tensor<[1,8,1024,9]>, Tensor<[1,8,1024,9]> | ttnn.where | aten::_safe_softmax | 4
13 | Tensor<[1,8,256,256]>, Tensor<[1,8,256,256]> | ttnn.where | aten::_safe_softmax | 4
14 | Tensor<[1,8,256,9]>, Tensor<[1,8,256,9]> | ttnn.where | aten::_safe_softmax | 4
15 | Tensor<[1,8,64,64]>, Tensor<[1,8,64,64]> | ttnn.where | aten::_safe_softmax | 4
16 | Tensor<[1,8,64,9]>, Tensor<[1,8,64,9]> | ttnn.where | aten::_safe_softmax | 4
17 | Tensor<[1,12,25,25]>, Tensor<[1,12,25,25]> | ttnn.where | aten::_safe_softmax | 4
18 | Tensor<[1,1,25,25]>, Tensor<[1,1,25,25]> | ttnn.where | aten::where.self | 4
19 | Tensor<[1,3,1445,1445]>, Tensor<[1,3,1445,1445]> | ttnn.where | aten::_safe_softmax | 4
20 | Tensor<[19,19]>, Tensor<[19,19]> | ttnn.where | aten::where.self | 4
21 | Tensor<[1,1,19,19]>, Tensor<[1,1,19,19]> | ttnn.where | aten::where.self | 4
22 | Tensor<[1,19]>, Tensor<[1,19]> | ttnn.where | aten::where.self | 4
23 | Tensor<[19]>, Tensor<[19]> | ttnn.where | aten::where.self | 4
24 | Tensor<[1,12,16,16]>, Tensor<[1,12,16,16]> | ttnn.where | aten::_safe_softmax | 4
25 | Tensor<[1,1,16,16]>, Tensor<[1,1,16,16]> | ttnn.where | aten::where.self | 4
26 | Tensor<[1,12,197,197]>, Tensor<[1,12,197,197]> | ttnn.where | aten::_safe_softmax | 4
27 | Tensor<[1,71,7,7]>, Tensor<[1,71,7,7]> | ttnn.where | aten::_safe_softmax | 4
28 | Tensor<[1,12,12,12]>, Tensor<[1,12,12,12]> | ttnn.where | aten::_safe_softmax | 4
29 | Tensor<[1,1,12,12]>, Tensor<[1,1,12,12]> | ttnn.where | aten::where.self | 4
30 | Tensor<[1,12,9,9]>, Tensor<[1,12,9,9]> | ttnn.where | aten::_safe_softmax | 4
31 | Tensor<[1,1,9,9]>, Tensor<[1,1,9,9]> | ttnn.where | aten::where.self | 4
32 | Tensor<[1,16,9,9]>, Tensor<[1,16,9,9]> | ttnn.where | aten::_safe_softmax | 4
33 | Tensor<[1,64,9,9]>, Tensor<[1,64,9,9]> | ttnn.where | aten::_safe_softmax | 4
34 | Tensor<[1,12,14,14]>, Tensor<[1,12,14,14]> | ttnn.where | aten::_safe_softmax | 4
35 | Tensor<[1,1,14,14]>, Tensor<[1,1,14,14]> | ttnn.where | aten::where.self | 4
36 | Tensor<[1,12,50,50]>, Tensor<[1,12,50,50]> | ttnn.where | aten::_safe_softmax | 4
37 | Tensor<[2,8,7,7]>, Tensor<[2,8,7,7]> | ttnn.where | aten::_safe_softmax | 4
38 | Tensor<[2,1,7,7]>, Tensor<[2,1,7,7]> | ttnn.where | aten::where.self | 4
39 | Tensor<[196,197]>, Tensor<[196,197]> | ttnn.where | aten::where.self | 4
40 | Tensor<[197,197]>, Tensor<[197,197]> | ttnn.where | aten::where.self | 4
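
The ttnn.where rows split between explicit aten::where.self / aten::triu masking and the masked-fill step inside aten::_safe_softmax. A hedged example of the masking pattern, using the [32, 32] shapes from rows 1-2 (names and shapes are illustrative):

```python
import torch

def apply_causal_mask(scores):
    # aten::triu builds the [32, 32] boolean mask and aten::where.self applies
    # it, matching the Tensor<[32,32]> and Tensor<[1,1,32,32]> rows above.
    mask = torch.ones(32, 32, dtype=torch.bool).triu(diagonal=1)
    return torch.where(mask, torch.full_like(scores, float("-inf")), scores)

masked = apply_causal_mask(torch.randn(1, 1, 32, 32))
print(masked.shape)  # torch.Size([1, 1, 32, 32])
```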

stablehlo.sine::ttnn.sin

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1,32,128]> | ttnn.sin | aten::sin | 4
1 | Tensor<[1,23,40,64]> | ttnn.sin | aten::sin | 4
2 | Tensor<[1,160]> | ttnn.sin | aten::sin | 4
3 | Tensor<[1,7,64]> | ttnn.sin | aten::sin | 4
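
The sin variations have rotary/sinusoidal-embedding shapes (e.g. [1, 32, 128] and [1, 7, 64]). A rough sketch of the kind of code that emits them, assuming only stock PyTorch; the helper below is not part of tt-torch:

```python
import torch

def rotary_sin(positions, dim=128, base=10000.0):
    # Builds a [1, seq, dim] angle tensor and takes aten::sin over it,
    # the same pattern as the Tensor<[1,32,128]> row above.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = positions.float()[..., None] * inv_freq  # [1, seq, dim // 2]
    angles = torch.cat([angles, angles], dim=-1)      # [1, seq, dim]
    return torch.sin(angles)

print(rotary_sin(torch.arange(32).view(1, 32)).shape)  # torch.Size([1, 32, 128])
```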

stablehlo.slice::ttnn.slice

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[1,32,32,128]>, indices: [0:1, 0:32, 0:32, 0:64] | ttnn.reshape | aten::slice.Tensor | 4
1 | Tensor<[1,32,32,128]>, indices: [0:1, 0:32, 0:32, 64:128] | ttnn.reshape | aten::slice.Tensor | 4
2 | Tensor<[1,7,2304]>, indices: [0:1, 0:7, 0:768] | ttnn.reshape | aten::slice.Tensor | 4
3 | Tensor<[1,7,2304]>, indices: [0:1, 0:7, 768:1536] | ttnn.reshape | aten::slice.Tensor | 4
4 | Tensor<[1,7,2304]>, indices: [0:1, 0:7, 1536:2304] | ttnn.reshape | aten::slice.Tensor | 4
5 | Tensor<[1,185,28,28]>, indices: [0:1, 0:128, 0:28, 0:28] | ttnn.reshape | aten::slice.Tensor | 4
6 | Tensor<[1,185,28,28]>, indices: [0:1, 128:185, 0:28, 0:28] | ttnn.reshape | aten::slice.Tensor | 4
7 | Tensor<[6,1,100,4]>, indices: [5:6, 0:1, 0:100, 0:4] | ttnn.reshape | aten::select.int | 4
8 | Tensor<[6,1,100,92]>, indices: [5:6, 0:1, 0:100, 0:92] | ttnn.reshape | aten::select.int | 4
9 | Tensor<[1,23,40]>, indices: [0:1, 22:23, 0:40] | ttnn.reshape | aten::slice.Tensor | 4
10 | Tensor<[1,23,40]>, indices: [0:1, 0:23, 39:40] | ttnn.reshape | aten::slice.Tensor | 4
11 | Tensor<[1,23,40,128]>, indices: [0:1, 0:23, 0:40, 0:128:2] | ttnn.reshape | aten::slice.Tensor | 4
12 | Tensor<[1,23,40,128]>, indices: [0:1, 0:23, 0:40, 1:128:2] | ttnn.reshape | aten::slice.Tensor | 4
13 | Tensor<[768,256]>, indices: [0:256, 0:256] | ttnn.reshape | aten::slice.Tensor | 4
14 | Tensor<[768,256]>, indices: [256:512, 0:256] | ttnn.reshape | aten::slice.Tensor | 4
15 | Tensor<[768,256]>, indices: [512:768, 0:256] | ttnn.reshape | aten::slice.Tensor | 4
16 | Tensor<[768]>, indices: [0:256] | ttnn.reshape | aten::slice.Tensor | 4
17 | Tensor<[768]>, indices: [256:512] | ttnn.reshape | aten::slice.Tensor | 4
18 | Tensor<[768]>, indices: [512:768] | ttnn.reshape | aten::slice.Tensor | 4
19 | Tensor<[1,514]>, indices: [0:1, 0:10] | ttnn.reshape | aten::slice.Tensor | 4
20 | Tensor<[1,320]>, indices: [0:1, 160:320] | ttnn.reshape | aten::slice.Tensor | 4
21 | Tensor<[1,320]>, indices: [0:1, 0:160] | ttnn.reshape | aten::slice.Tensor | 4
22 | Tensor<[1,4096,2560]>, indices: [0:1, 0:4096, 0:1280] | ttnn.reshape | aten::slice.Tensor | 4
23 | Tensor<[1,4096,2560]>, indices: [0:1, 0:4096, 1280:2560] | ttnn.reshape | aten::slice.Tensor | 4
24 | Tensor<[1,1024,5120]>, indices: [0:1, 0:1024, 0:2560] | ttnn.reshape | aten::slice.Tensor | 4
25 | Tensor<[1,1024,5120]>, indices: [0:1, 0:1024, 2560:5120] | ttnn.reshape | aten::slice.Tensor | 4
26 | Tensor<[1,256,10240]>, indices: [0:1, 0:256, 0:5120] | ttnn.reshape | aten::slice.Tensor | 4
27 | Tensor<[1,256,10240]>, indices: [0:1, 0:256, 5120:10240] | ttnn.reshape | aten::slice.Tensor | 4
28 | Tensor<[1,64,10240]>, indices: [0:1, 0:64, 0:5120] | ttnn.reshape | aten::slice.Tensor | 4
29 | Tensor<[1,64,10240]>, indices: [0:1, 0:64, 5120:10240] | ttnn.reshape | aten::slice.Tensor | 4
30 | Tensor<[1,25,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
31 | Tensor<[1,512]>, indices: [0:1, 0:25] | ttnn.reshape | aten::slice.Tensor | 4
32 | Tensor<[1,25,2]>, indices: [0:1, 0:25, 0:1] | ttnn.reshape | aten::slice.Tensor | 4
33 | Tensor<[1,25,2]>, indices: [0:1, 0:25, 1:2] | ttnn.reshape | aten::slice.Tensor | 4
34 | Tensor<[1,4251,192]>, indices: [0:1, 0:1, 0:192] | ttnn.reshape | aten::select.int | 4
35 | Tensor<[1,4251,192]>, indices: [0:1, 4151:4251, 0:192] | ttnn.reshape | aten::slice.Tensor | 4
36 | Tensor<[1,4251,192]>, indices: [0:1, 1:4151, 0:192] | ttnn.reshape | aten::slice.Tensor | 4
37 | Tensor<[1,1445,192]>, indices: [0:1, 1345:1445, 0:192] | ttnn.reshape | aten::slice.Tensor | 4
38 | Tensor<[1,8,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
39 | Tensor<[1,512]>, indices: [0:1, 0:8] | ttnn.reshape | aten::slice.Tensor | 4
40 | Tensor<[1,16]>, indices: [0:1, 0:1] | ttnn.reshape | aten::select.int | 4
41 | Tensor<[1,12]>, indices: [0:1, 0:1] | ttnn.reshape | aten::select.int | 4
42 | Tensor<[192,2]>, indices: [0:192, 0:1] | ttnn.reshape | aten::select.int | 4
43 | Tensor<[1,201,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
44 | Tensor<[1,40]>, indices: [0:1, 0:8] | ttnn.reshape | aten::slice.Tensor | 4
45 | Tensor<[1,145,768]>, indices: [0:1, 1:145, 0:768] | ttnn.reshape | aten::slice.Tensor | 4
46 | Tensor<[1,19]>, indices: [0:1, 18:19] | ttnn.reshape | aten::select.int | 4
47 | Tensor<[1,19]>, indices: [0:1, 1:19] | ttnn.reshape | aten::slice.Tensor | 4
48 | Tensor<[1,19]>, indices: [0:1, 0:18] | ttnn.reshape | aten::slice.Tensor | 4
49 | Tensor<[1,32,16,3,96]>, indices: [0:1, 0:32, 0:16, 0:1, 0:96] | ttnn.reshape | aten::select.int | 4
50 | Tensor<[1,32,16,3,96]>, indices: [0:1, 0:32, 0:16, 1:2, 0:96] | ttnn.reshape | aten::select.int | 4
51 | Tensor<[1,32,16,3,96]>, indices: [0:1, 0:32, 0:16, 2:3, 0:96] | ttnn.reshape | aten::select.int | 4
52 | Tensor<[1,512]>, indices: [0:1, 0:16] | ttnn.reshape | aten::slice.Tensor | 4
53 | Tensor<[1,2,30,40]>, indices: [0:1, 0:1, 0:30, 0:40] | ttnn.reshape | aten::select.int | 4
54 | Tensor<[1,2,30,40]>, indices: [0:1, 1:2, 0:30, 0:40] | ttnn.reshape | aten::select.int | 4
55 | Tensor<[1,2,60,80]>, indices: [0:1, 0:1, 0:60, 0:80] | ttnn.reshape | aten::select.int | 4
56 | Tensor<[1,2,60,80]>, indices: [0:1, 1:2, 0:60, 0:80] | ttnn.reshape | aten::select.int | 4
57 | Tensor<[1,2,120,160]>, indices: [0:1, 0:1, 0:120, 0:160] | ttnn.reshape | aten::select.int | 4
58 | Tensor<[1,2,120,160]>, indices: [0:1, 1:2, 0:120, 0:160] | ttnn.reshape | aten::select.int | 4
59 | Tensor<[1,197,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
60 | Tensor<[1,7,73,64]>, indices: [0:1, 0:7, 0:71, 0:64] | ttnn.reshape | aten::slice.Tensor | 4
61 | Tensor<[1,71,7,64]>, indices: [0:1, 0:71, 0:7, 0:32] | ttnn.reshape | aten::slice.Tensor | 4
62 | Tensor<[1,71,7,64]>, indices: [0:1, 0:71, 0:7, 32:64] | ttnn.reshape | aten::slice.Tensor | 4
63 | Tensor<[1,1,7,64]>, indices: [0:1, 0:1, 0:7, 0:32] | ttnn.reshape | aten::slice.Tensor | 4
64 | Tensor<[1,1,7,64]>, indices: [0:1, 0:1, 0:7, 32:64] | ttnn.reshape | aten::slice.Tensor | 4
65 | Tensor<[1,512]>, indices: [0:1, 0:12] | ttnn.reshape | aten::slice.Tensor | 4
66 | Tensor<[1,512]>, indices: [0:1, 0:9] | ttnn.reshape | aten::slice.Tensor | 4
67 | Tensor<[1,9,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
68 | Tensor<[1,512]>, indices: [0:1, 0:14] | ttnn.reshape | aten::slice.Tensor | 4
69 | Tensor<[1,14,2]>, indices: [0:1, 0:14, 0:1] | ttnn.reshape | aten::slice.Tensor | 4
70 | Tensor<[1,14,2]>, indices: [0:1, 0:14, 1:2] | ttnn.reshape | aten::slice.Tensor | 4
71 | Tensor<[1,50,768]>, indices: [0:1, 0:1, 0:768] | ttnn.reshape | aten::select.int | 4
72 | Tensor<[1,77]>, indices: [0:1, 0:7] | ttnn.reshape | aten::slice.Tensor | 4
73 | Tensor<[196,196,2]>, indices: [0:196, 0:196, 0:1] | ttnn.reshape | aten::select.int | 4
74 | Tensor<[196,196,2]>, indices: [0:196, 0:196, 1:2] | ttnn.reshape | aten::select.int | 4
75 | Tensor<[197,197]>, indices: [0:1, 0:197] | ttnn.reshape | aten::select.int | 4
76 | Tensor<[197,197]>, indices: [0:197, 0:1] | ttnn.reshape | aten::select.int | 4
77 | Tensor<[197]>, indices: [0:1] | ttnn.reshape | aten::select.int | 4
78 | Tensor<[732,16]>, indices: [0:729, 0:16] | ttnn.reshape | aten::slice.Tensor | 4
79 | Tensor<[732,16]>, indices: [729:732, 0:16] | ttnn.reshape | aten::slice.Tensor | 4
80 | Tensor<[197,197]>, indices: [1:197, 0:197] | ttnn.reshape | aten::slice.Tensor | 4
81 | Tensor<[196,197]>, indices: [0:196, 1:197] | ttnn.reshape | aten::slice.Tensor | 4
82 | Tensor<[1,197,1024]>, indices: [0:1, 1:197, 0:1024] | ttnn.reshape | aten::slice.Tensor | 4
83 | Tensor<[732,12]>, indices: [0:729, 0:12] | ttnn.reshape | aten::slice.Tensor | 4
84 | Tensor<[732,12]>, indices: [729:732, 0:12] | ttnn.reshape | aten::slice.Tensor | 4
85 | Tensor<[1,197,768]>, indices: [0:1, 1:197, 0:768] | ttnn.reshape | aten::slice.Tensor | 4
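
Rows 2-4 are the familiar fused-QKV split: one [1, 7, 2304] projection sliced into three [1, 7, 768] pieces along the last dimension. A minimal reproduction (the function name is illustrative):

```python
import torch

def split_qkv(fused):
    # Three aten::slice.Tensor calls over the last dim of a [1, 7, 2304]
    # tensor, matching rows 2-4 of the table above.
    q = fused[:, :, 0:768]
    k = fused[:, :, 768:1536]
    v = fused[:, :, 1536:2304]
    return q, k, v

q, k, v = split_qkv(torch.randn(1, 7, 2304))
print(q.shape, k.shape, v.shape)  # each torch.Size([1, 7, 768])
```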

stablehlo.sqrt::ttnn.sqrt

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0 | Tensor<[32]> | ttnn.sqrt | aten::sqrt | 5
1 | Tensor<[64]> | ttnn.sqrt | aten::sqrt | 5
2 | Tensor<[128]> | ttnn.sqrt | aten::sqrt | 5
3 | Tensor<[256]> | ttnn.sqrt | aten::sqrt | 5
4 | Tensor<[512]> | ttnn.sqrt | aten::sqrt | 5
5 | Tensor<[1024]> | ttnn.sqrt | aten::sqrt | 5
6 | Tensor<[2048]> | ttnn.sqrt | aten::sqrt | 5
7 | Tensor<[14]> | ttnn.sqrt | aten::sqrt | 5
8 | Tensor<[24]> | ttnn.sqrt | aten::sqrt | 5
9 | Tensor<[40]> | ttnn.sqrt | aten::sqrt | 5
10 | Tensor<[68]> | ttnn.sqrt | aten::sqrt | 5
11 | Tensor<[16]> | ttnn.sqrt | aten::sqrt | 5
12 | Tensor<[28]> | ttnn.sqrt | aten::sqrt | 5
13 | Tensor<[46]> | ttnn.sqrt | aten::sqrt | 5
14 | Tensor<[78]> | ttnn.sqrt | aten::sqrt | 5
15 | Tensor<[134]> | ttnn.sqrt | aten::sqrt | 5
16 | Tensor<[20]> | ttnn.sqrt | aten::sqrt | 5
17 | Tensor<[34]> | ttnn.sqrt | aten::sqrt | 5
18 | Tensor<[58]> | ttnn.sqrt | aten::sqrt | 5
19 | Tensor<[98]> | ttnn.sqrt | aten::sqrt | 5
20 | Tensor<[168]> | ttnn.sqrt | aten::sqrt | 5
21 | Tensor<[320]> | ttnn.sqrt | aten::sqrt | 5
22 | Tensor<[116]> | ttnn.sqrt | aten::sqrt | 5
23 | Tensor<[196]> | ttnn.sqrt | aten::sqrt | 5
24 | Tensor<[334]> | ttnn.sqrt | aten::sqrt | 5
25 | Tensor<[640]> | ttnn.sqrt | aten::sqrt | 5
26 | Tensor<[160]> | ttnn.sqrt | aten::sqrt | 5
27 | Tensor<[272]> | ttnn.sqrt | aten::sqrt | 5
28 | Tensor<[462]> | ttnn.sqrt | aten::sqrt | 5
29 | Tensor<[96]> | ttnn.sqrt | aten::sqrt | 5
30 | Tensor<[144]> | ttnn.sqrt | aten::sqrt | 5
31 | Tensor<[192]> | ttnn.sqrt | aten::sqrt | 5
32 | Tensor<[384]> | ttnn.sqrt | aten::sqrt | 5
33 | Tensor<[576]> | ttnn.sqrt | aten::sqrt | 5
34 | Tensor<[960]> | ttnn.sqrt | aten::sqrt | 5
35 | Tensor<[1280]> | ttnn.sqrt | aten::sqrt | 5

stablehlo.subtract::ttnn.subtract

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,32,32,32]>,
Tensor<[1,32,32,32]>,
ttnn.subtractaten::_safe_softmax4
1Tensor<[1,12,7,7]>,
Tensor<[1,12,7,7]>,
ttnn.subtractaten::_safe_softmax4
2Tensor<[1,1,7,7]>,
Tensor<[1,1,7,7]>,
ttnn.subtractaten::rsub.Scalar4
3Tensor<[1,7,768]>,
Tensor<[1,7,768]>,
ttnn.subtractaten::sub.Tensor4
4Tensor<[1]>,
Tensor<[1]>,
ttnn.subtractaten::sub.Tensor4
5Tensor<[1,128,28,28]>,
Tensor<[1,128,28,28]>,
ttnn.subtractaten::elu4
6Tensor<[1,32,112,112]>,
Tensor<[1,32,112,112]>,
ttnn.subtractaten::sub.Tensor4
7Tensor<[1,64,112,112]>,
Tensor<[1,64,112,112]>,
ttnn.subtractaten::sub.Tensor4
8Tensor<[1,64,56,56]>,
Tensor<[1,64,56,56]>,
ttnn.subtractaten::sub.Tensor4
9Tensor<[1,128,56,56]>,
Tensor<[1,128,56,56]>,
ttnn.subtractaten::sub.Tensor4
10Tensor<[1,256,28,28]>,
Tensor<[1,256,28,28]>,
ttnn.subtractaten::sub.Tensor4
11Tensor<[1,512,28,28]>,
Tensor<[1,512,28,28]>,
ttnn.subtractaten::sub.Tensor4
12Tensor<[1,256,512]>,
Tensor<[1,256,512]>,
ttnn.subtractaten::sub.Tensor4
13Tensor<[8,920,920]>,
Tensor<[8,920,920]>,
ttnn.subtractaten::_softmax4
14Tensor<[8,100,100]>,
Tensor<[8,100,100]>,
ttnn.subtractaten::_softmax4
15Tensor<[8,100,920]>,
Tensor<[8,100,920]>,
ttnn.subtractaten::_softmax4
16Scalar,
Scalar,
ttnn.subtractaten::arange4
17Tensor<[1,64,1,1]>,
Tensor<[1,64,1,1]>,
ttnn.subtractaten::sub.Tensor5
18Tensor<[1,256,1,1]>,
Tensor<[1,256,1,1]>,
ttnn.subtractaten::sub.Tensor5
19Tensor<[1,128,1,1]>,
Tensor<[1,128,1,1]>,
ttnn.subtractaten::sub.Tensor5
20Tensor<[1,512,1,1]>,
Tensor<[1,512,1,1]>,
ttnn.subtractaten::sub.Tensor5
21Tensor<[1,1024,1,1]>,
Tensor<[1,1024,1,1]>,
ttnn.subtractaten::sub.Tensor5
22Tensor<[1,2048,1,1]>,
Tensor<[1,2048,1,1]>,
ttnn.subtractaten::sub.Tensor5
23Tensor<[920,1,256]>,
Tensor<[920,1,256]>,
ttnn.subtractaten::sub.Tensor4
24Tensor<[100,1,256]>,
Tensor<[100,1,256]>,
ttnn.subtractaten::sub.Tensor4
25Tensor<[1,12,10,10]>,
Tensor<[1,12,10,10]>,
ttnn.subtractaten::_safe_softmax4
26Tensor<[1,1,10,10]>,
Tensor<[1,1,10,10]>,
ttnn.subtractaten::rsub.Scalar4
27Tensor<[1,10,768]>,
Tensor<[1,10,768]>,
ttnn.subtractaten::sub.Tensor4
28Tensor<[1,8,4096,4096]>,
Tensor<[1,8,4096,4096]>,
ttnn.subtractaten::_safe_softmax4
29Tensor<[1,8,4096,9]>,
Tensor<[1,8,4096,9]>,
ttnn.subtractaten::_safe_softmax4
30Tensor<[1,8,1024,1024]>,
Tensor<[1,8,1024,1024]>,
ttnn.subtractaten::_safe_softmax4
31Tensor<[1,8,1024,9]>,
Tensor<[1,8,1024,9]>,
ttnn.subtractaten::_safe_softmax4
32Tensor<[1,8,256,256]>,
Tensor<[1,8,256,256]>,
ttnn.subtractaten::_safe_softmax4
33Tensor<[1,8,256,9]>,
Tensor<[1,8,256,9]>,
ttnn.subtractaten::_safe_softmax4
34Tensor<[1,8,64,64]>,
Tensor<[1,8,64,64]>,
ttnn.subtractaten::_safe_softmax4
35Tensor<[1,8,64,9]>,
Tensor<[1,8,64,9]>,
ttnn.subtractaten::_safe_softmax4
36Tensor<[1,32,10,4096]>,
Tensor<[1,32,10,4096]>,
ttnn.subtractaten::sub.Tensor4
37Tensor<[1,4096,320]>,
Tensor<[1,4096,320]>,
ttnn.subtractaten::sub.Tensor4
38Tensor<[1,32,10,1024]>,
Tensor<[1,32,10,1024]>,
ttnn.subtractaten::sub.Tensor4
39Tensor<[1,32,20,1024]>,
Tensor<[1,32,20,1024]>,
ttnn.subtractaten::sub.Tensor4
40Tensor<[1,1024,640]>,
Tensor<[1,1024,640]>,
ttnn.subtractaten::sub.Tensor4
41Tensor<[1,32,20,256]>,
Tensor<[1,32,20,256]>,
ttnn.subtractaten::sub.Tensor4
42Tensor<[1,32,40,256]>,
Tensor<[1,32,40,256]>,
ttnn.subtractaten::sub.Tensor4
43Tensor<[1,256,1280]>,
Tensor<[1,256,1280]>,
ttnn.subtractaten::sub.Tensor4
44Tensor<[1,32,40,64]>,
Tensor<[1,32,40,64]>,
ttnn.subtractaten::sub.Tensor4
45Tensor<[1,64,1280]>,
Tensor<[1,64,1280]>,
ttnn.subtractaten::sub.Tensor4
46Tensor<[1,32,80,64]>,
Tensor<[1,32,80,64]>,
ttnn.subtractaten::sub.Tensor4
47Tensor<[1,32,80,256]>,
Tensor<[1,32,80,256]>,
ttnn.subtractaten::sub.Tensor4
48Tensor<[1,32,60,256]>,
Tensor<[1,32,60,256]>,
ttnn.subtractaten::sub.Tensor4
49Tensor<[1,32,60,1024]>,
Tensor<[1,32,60,1024]>,
ttnn.subtractaten::sub.Tensor4
50Tensor<[1,32,40,1024]>,
Tensor<[1,32,40,1024]>,
ttnn.subtractaten::sub.Tensor4
51Tensor<[1,32,30,1024]>,
Tensor<[1,32,30,1024]>,
ttnn.subtractaten::sub.Tensor4
52Tensor<[1,32,30,4096]>,
Tensor<[1,32,30,4096]>,
ttnn.subtractaten::sub.Tensor4
53Tensor<[1,32,20,4096]>,
Tensor<[1,32,20,4096]>,
ttnn.subtractaten::sub.Tensor4
54Tensor<[1,12,25,25]>,
Tensor<[1,12,25,25]>,
ttnn.subtractaten::_safe_softmax4
55Tensor<[1,1,25,25]>,
Tensor<[1,1,25,25]>,
ttnn.subtractaten::rsub.Scalar4
56Tensor<[1,25,768]>,
Tensor<[1,25,768]>,
ttnn.subtractaten::sub.Tensor4
57Tensor<[1,3,1445,1445]>,
Tensor<[1,3,1445,1445]>,
ttnn.subtractaten::_safe_softmax4
58Tensor<[1,1445,192]>,
Tensor<[1,1445,192]>,
ttnn.subtractaten::sub.Tensor4
59Tensor<[1,256,14,14]>,
Tensor<[1,256,14,14]>,
ttnn.subtractaten::sub.Tensor4
60Tensor<[1,512,7,7]>,
Tensor<[1,512,7,7]>,
ttnn.subtractaten::sub.Tensor4
61Tensor<[1,12,8,8]>,
Tensor<[1,12,8,8]>,
ttnn.subtractaten::_softmax4
62Tensor<[1,1,1,8]>,
Tensor<[1,1,1,8]>,
ttnn.subtractaten::rsub.Scalar4
63Tensor<[1,8,768]>,
Tensor<[1,8,768]>,
ttnn.subtractaten::sub.Tensor4
64Tensor<[1,8,256,2048]>,
Tensor<[1,8,256,2048]>,
ttnn.subtractaten::_softmax4
65Tensor<[1,8,2048,256]>,
Tensor<[1,8,2048,256]>,
ttnn.subtractaten::_softmax4
66Tensor<[1,1,1,2048]>,
Tensor<[1,1,1,2048]>,
ttnn.subtractaten::rsub.Scalar4
67Tensor<[1,2048,768]>,
Tensor<[1,2048,768]>,
ttnn.subtractaten::sub.Tensor4
68Tensor<[1,256,56,56]>,
Tensor<[1,256,56,56]>,
ttnn.subtractaten::sub.Tensor4
69Tensor<[1,1024,14,14]>,
Tensor<[1,1024,14,14]>,
ttnn.subtractaten::sub.Tensor4
70Tensor<[1,512,14,14]>,
Tensor<[1,512,14,14]>,
ttnn.subtractaten::sub.Tensor4
71Tensor<[1,2048,7,7]>,
Tensor<[1,2048,7,7]>,
ttnn.subtractaten::sub.Tensor4
72Tensor<[1,12,201,201]>,
Tensor<[1,12,201,201]>,
ttnn.subtractaten::_softmax4
73Tensor<[1,192]>,
Tensor<[1,192]>,
ttnn.subtractaten::rsub.Scalar4
74Tensor<[1,1,1,201]>,
Tensor<[1,1,1,201]>,
ttnn.subtractaten::rsub.Scalar4
75Tensor<[1,201,768]>,
Tensor<[1,201,768]>,
ttnn.subtractaten::sub.Tensor4
76Tensor<[1,1536]>,
Tensor<[1,1536]>,
ttnn.subtractaten::sub.Tensor4
77Tensor<[1,10]>,
Tensor<[1,10]>,
ttnn.subtractaten::sub.Tensor4
78Tensor<[16,19,19]>,
Tensor<[16,19,19]>,
ttnn.subtractaten::_softmax4
79Tensor<[1,1,19,19]>,
Tensor<[1,1,19,19]>,
ttnn.subtractaten::rsub.Scalar4
80Tensor<[1,19,1024]>,
Tensor<[1,19,1024]>,
ttnn.subtractaten::sub.Tensor4
81Tensor<[19]>,
Tensor<[19]>,
ttnn.subtractaten::sub.Tensor4
82Tensor<[19,256008]>,
Tensor<[19,256008]>,
ttnn.subtractaten::sub.Tensor4
83Tensor<[1,14,56,56]>,
Tensor<[1,14,56,56]>,
ttnn.subtractaten::sub.Tensor4
84Tensor<[1,24,56,56]>,
Tensor<[1,24,56,56]>,
ttnn.subtractaten::sub.Tensor4
85Tensor<[1,40,56,56]>,
Tensor<[1,40,56,56]>,
ttnn.subtractaten::sub.Tensor4
86Tensor<[1,68,56,56]>,
Tensor<[1,68,56,56]>,
ttnn.subtractaten::sub.Tensor4
87Tensor<[1,16,28,28]>,
Tensor<[1,16,28,28]>,
ttnn.subtractaten::sub.Tensor4
88Tensor<[1,28,28,28]>,
Tensor<[1,28,28,28]>,
ttnn.subtractaten::sub.Tensor4
89Tensor<[1,46,28,28]>,
Tensor<[1,46,28,28]>,
ttnn.subtractaten::sub.Tensor4
90Tensor<[1,78,28,28]>,
Tensor<[1,78,28,28]>,
ttnn.subtractaten::sub.Tensor4
91Tensor<[1,134,28,28]>,
Tensor<[1,134,28,28]>,
ttnn.subtractaten::sub.Tensor4
92Tensor<[1,20,28,28]>,
Tensor<[1,20,28,28]>,
ttnn.subtractaten::sub.Tensor4
93Tensor<[1,34,28,28]>,
Tensor<[1,34,28,28]>,
ttnn.subtractaten::sub.Tensor4
94Tensor<[1,58,28,28]>,
Tensor<[1,58,28,28]>,
ttnn.subtractaten::sub.Tensor4
95Tensor<[1,98,28,28]>,
Tensor<[1,98,28,28]>,
ttnn.subtractaten::sub.Tensor4
96Tensor<[1,168,28,28]>,
Tensor<[1,168,28,28]>,
ttnn.subtractaten::sub.Tensor4
97Tensor<[1,320,28,28]>,
Tensor<[1,320,28,28]>,
ttnn.subtractaten::sub.Tensor4
98Tensor<[1,40,14,14]>,
Tensor<[1,40,14,14]>,
ttnn.subtractaten::sub.Tensor4
99Tensor<[1,68,14,14]>,
Tensor<[1,68,14,14]>,
ttnn.subtractaten::sub.Tensor4
100Tensor<[1,116,14,14]>,
Tensor<[1,116,14,14]>,
ttnn.subtractaten::sub.Tensor4
101Tensor<[1,196,14,14]>,
Tensor<[1,196,14,14]>,
ttnn.subtractaten::sub.Tensor4
102Tensor<[1,334,14,14]>,
Tensor<[1,334,14,14]>,
ttnn.subtractaten::sub.Tensor4
103Tensor<[1,640,14,14]>,
Tensor<[1,640,14,14]>,
ttnn.subtractaten::sub.Tensor4
104Tensor<[1,160,7,7]>,
Tensor<[1,160,7,7]>,
ttnn.subtractaten::sub.Tensor4
105Tensor<[1,272,7,7]>,
Tensor<[1,272,7,7]>,
ttnn.subtractaten::sub.Tensor4
106Tensor<[1,462,7,7]>,
Tensor<[1,462,7,7]>,
ttnn.subtractaten::sub.Tensor4
107Tensor<[1,1024,7,7]>,
Tensor<[1,1024,7,7]>,
ttnn.subtractaten::sub.Tensor4
108Tensor<[1,32,512,512]>,
Tensor<[1,32,512,512]>,
ttnn.subtractaten::sub.Tensor4
109Tensor<[1,64,256,256]>,
Tensor<[1,64,256,256]>,
ttnn.subtractaten::sub.Tensor4
110Tensor<[1,32,256,256]>,
Tensor<[1,32,256,256]>,
ttnn.subtractaten::sub.Tensor4
111Tensor<[1,128,128,128]>,
Tensor<[1,128,128,128]>,
ttnn.subtractaten::sub.Tensor4
112Tensor<[1,64,128,128]>,
Tensor<[1,64,128,128]>,
ttnn.subtractaten::sub.Tensor4
113Tensor<[1,256,64,64]>,
Tensor<[1,256,64,64]>,
ttnn.subtractaten::sub.Tensor4
114Tensor<[1,128,64,64]>,
Tensor<[1,128,64,64]>,
ttnn.subtractaten::sub.Tensor4
115Tensor<[1,512,32,32]>,
Tensor<[1,512,32,32]>,
ttnn.subtractaten::sub.Tensor4
116Tensor<[1,256,32,32]>,
Tensor<[1,256,32,32]>,
ttnn.subtractaten::sub.Tensor4
117Tensor<[1,1024,16,16]>,
Tensor<[1,1024,16,16]>,
ttnn.subtractaten::sub.Tensor4
118Tensor<[1,512,16,16]>,
Tensor<[1,512,16,16]>,
ttnn.subtractaten::sub.Tensor4
119Tensor<[1,256,16,16]>,
Tensor<[1,256,16,16]>,
ttnn.subtractaten::sub.Tensor4
120Tensor<[1,128,32,32]>,
Tensor<[1,128,32,32]>,
ttnn.subtractaten::sub.Tensor4
121Tensor<[1,16,32,32]>,
Tensor<[1,16,32,32]>,
ttnn.subtractaten::_softmax4
122Tensor<[1,32,1536]>,
Tensor<[1,32,1536]>,
ttnn.subtractaten::sub.Tensor4
123Tensor<[1,32]>,
Tensor<[1,32]>,
ttnn.subtractaten::sub.Tensor4
124Tensor<[1,12,16,16]>,
Tensor<[1,12,16,16]>,
ttnn.subtractaten::_safe_softmax4
125Tensor<[1,1,16,16]>,
Tensor<[1,1,16,16]>,
ttnn.subtractaten::rsub.Scalar4
126Tensor<[1,16,768]>,
Tensor<[1,16,768]>,
ttnn.subtractaten::sub.Tensor4
127Tensor<[1,64,224,224]>,
Tensor<[1,64,224,224]>,
ttnn.subtractaten::sub.Tensor4
128Tensor<[1,128,112,112]>,
Tensor<[1,128,112,112]>,
ttnn.subtractaten::sub.Tensor4
129Tensor<[1,1,19200,300]>,
Tensor<[1,1,19200,300]>,
ttnn.subtractaten::_softmax4
130Tensor<[1,2,4800,300]>,
Tensor<[1,2,4800,300]>,
ttnn.subtractaten::_softmax4
131Tensor<[1,5,1200,300]>,
Tensor<[1,5,1200,300]>,
ttnn.subtractaten::_softmax4
132Tensor<[1,8,300,300]>,
Tensor<[1,8,300,300]>,
ttnn.subtractaten::_softmax4
133Tensor<[1,19200,64]>,
Tensor<[1,19200,64]>,
ttnn.subtractaten::sub.Tensor4
134Tensor<[1,300,64]>,
Tensor<[1,300,64]>,
ttnn.subtractaten::sub.Tensor4
135Tensor<[1,4800,128]>,
Tensor<[1,4800,128]>,
ttnn.subtractaten::sub.Tensor4
136Tensor<[1,300,128]>,
Tensor<[1,300,128]>,
ttnn.subtractaten::sub.Tensor4
137Tensor<[1,1200,320]>,
Tensor<[1,1200,320]>,
ttnn.subtractaten::sub.Tensor4
138Tensor<[1,300,320]>,
Tensor<[1,300,320]>,
ttnn.subtractaten::sub.Tensor4
139Tensor<[1,300,512]>,
Tensor<[1,300,512]>,
ttnn.subtractaten::sub.Tensor4
140Tensor<[30]>,
Tensor<[30]>,
ttnn.subtractaten::sub.Tensor4
141Tensor<[40]>,
Tensor<[40]>,
ttnn.subtractaten::sub.Tensor4
142Tensor<[1,64,30,40]>,
Tensor<[1,64,30,40]>,
ttnn.subtractaten::sub.Tensor5
143Tensor<[30,1]>,
Tensor<[30,1]>,
ttnn.subtractaten::sub.Tensor4
144Tensor<[1,32,30,40]>,
Tensor<[1,32,30,40]>,
ttnn.subtractaten::sub.Tensor4
145Tensor<[60]>,
Tensor<[60]>,
ttnn.subtractaten::sub.Tensor4
146Tensor<[80]>,
Tensor<[80]>,
ttnn.subtractaten::sub.Tensor4
147Tensor<[1,64,60,80]>,
Tensor<[1,64,60,80]>,
ttnn.subtractaten::sub.Tensor5
148Tensor<[60,1]>,
Tensor<[60,1]>,
ttnn.subtractaten::sub.Tensor4
149Tensor<[1,32,60,80]>,
Tensor<[1,32,60,80]>,
ttnn.subtractaten::sub.Tensor4
150Tensor<[120]>,
Tensor<[120]>,
ttnn.subtractaten::sub.Tensor4
151Tensor<[160]>,
Tensor<[160]>,
ttnn.subtractaten::sub.Tensor4
152Tensor<[1,64,120,160]>,
Tensor<[1,64,120,160]>,
ttnn.subtractaten::sub.Tensor5
153Tensor<[120,1]>,
Tensor<[120,1]>,
ttnn.subtractaten::sub.Tensor4
154Tensor<[1,32,120,160]>,
Tensor<[1,32,120,160]>,
ttnn.subtractaten::sub.Tensor4
155Tensor<[240]>,
Tensor<[240]>,
ttnn.subtractaten::sub.Tensor4
156Tensor<[320]>,
Tensor<[320]>,
ttnn.subtractaten::sub.Tensor4
157Tensor<[1,64,240,320]>,
Tensor<[1,64,240,320]>,
ttnn.subtractaten::sub.Tensor5
158Tensor<[240,1]>,
Tensor<[240,1]>,
ttnn.subtractaten::sub.Tensor4
159Tensor<[480]>,
Tensor<[480]>,
ttnn.subtractaten::sub.Tensor4
160Tensor<[640]>,
Tensor<[640]>,
ttnn.subtractaten::sub.Tensor4
161Tensor<[1,64,480,640]>,
Tensor<[1,64,480,640]>,
ttnn.subtractaten::sub.Tensor5
162Tensor<[480,1]>,
Tensor<[480,1]>,
ttnn.subtractaten::sub.Tensor4
163Tensor<[1,12,197,197]>,
Tensor<[1,12,197,197]>,
ttnn.subtractaten::_safe_softmax4
164Tensor<[1,197,768]>,
Tensor<[1,197,768]>,
ttnn.subtractaten::sub.Tensor4
165Tensor<[1,1,16384,256]>,
Tensor<[1,1,16384,256]>,
ttnn.subtractaten::_softmax4
166Tensor<[1,2,4096,256]>,
Tensor<[1,2,4096,256]>,
ttnn.subtractaten::_softmax4
167Tensor<[1,5,1024,256]>,
Tensor<[1,5,1024,256]>,
ttnn.subtractaten::_softmax4
168Tensor<[1,16384,32]>,
Tensor<[1,16384,32]>,
ttnn.subtractaten::sub.Tensor4
169Tensor<[1,256,32]>,
Tensor<[1,256,32]>,
ttnn.subtractaten::sub.Tensor4
170Tensor<[1,4096,64]>,
Tensor<[1,4096,64]>,
ttnn.subtractaten::sub.Tensor4
171Tensor<[1,256,64]>,
Tensor<[1,256,64]>,
ttnn.subtractaten::sub.Tensor4
172Tensor<[1,1024,160]>,
Tensor<[1,1024,160]>,
ttnn.subtractaten::sub.Tensor4
173Tensor<[1,256,160]>,
Tensor<[1,256,160]>,
ttnn.subtractaten::sub.Tensor4
174Tensor<[1,256,256]>,
Tensor<[1,256,256]>,
ttnn.subtractaten::sub.Tensor4
175Tensor<[128]>,
Tensor<[128]>,
ttnn.subtractaten::sub.Tensor4
176Tensor<[1,256,128,128]>,
Tensor<[1,256,128,128]>,
ttnn.subtractaten::sub.Tensor5
177Tensor<[128,1]>,
Tensor<[128,1]>,
ttnn.subtractaten::sub.Tensor4
178Tensor<[1,71,7,7]>,
Tensor<[1,71,7,7]>,
ttnn.subtractaten::_safe_softmax4
179Tensor<[1,7,4544]>,
Tensor<[1,7,4544]>,
ttnn.subtractaten::sub.Tensor4
180Tensor<[1,16,112,112]>,
Tensor<[1,16,112,112]>,
ttnn.subtractaten::sub.Tensor4
181Tensor<[1,96,112,112]>,
Tensor<[1,96,112,112]>,
ttnn.subtractaten::sub.Tensor4
182Tensor<[1,96,56,56]>,
Tensor<[1,96,56,56]>,
ttnn.subtractaten::sub.Tensor4
183Tensor<[1,144,56,56]>,
Tensor<[1,144,56,56]>,
ttnn.subtractaten::sub.Tensor4
184Tensor<[1,144,28,28]>,
Tensor<[1,144,28,28]>,
ttnn.subtractaten::sub.Tensor4
185Tensor<[1,32,28,28]>,
Tensor<[1,32,28,28]>,
ttnn.subtractaten::sub.Tensor4
186Tensor<[1,192,28,28]>,
Tensor<[1,192,28,28]>,
ttnn.subtractaten::sub.Tensor4
187Tensor<[1,192,14,14]>,
Tensor<[1,192,14,14]>,
ttnn.subtractaten::sub.Tensor4
188Tensor<[1,64,14,14]>,
Tensor<[1,64,14,14]>,
ttnn.subtractaten::sub.Tensor4
189Tensor<[1,384,14,14]>,
Tensor<[1,384,14,14]>,
ttnn.subtractaten::sub.Tensor4
190Tensor<[1,96,14,14]>,
Tensor<[1,96,14,14]>,
ttnn.subtractaten::sub.Tensor4
191Tensor<[1,576,14,14]>,
Tensor<[1,576,14,14]>,
ttnn.subtractaten::sub.Tensor4
192Tensor<[1,576,7,7]>,
Tensor<[1,576,7,7]>,
ttnn.subtractaten::sub.Tensor4
193Tensor<[1,960,7,7]>,
Tensor<[1,960,7,7]>,
ttnn.subtractaten::sub.Tensor4
194Tensor<[1,320,7,7]>,
Tensor<[1,320,7,7]>,
ttnn.subtractaten::sub.Tensor4
195Tensor<[1,1280,7,7]>,
Tensor<[1,1280,7,7]>,
ttnn.subtractaten::sub.Tensor4
196Tensor<[1,12,12,12]>,
Tensor<[1,12,12,12]>,
ttnn.subtractaten::_safe_softmax4
197Tensor<[1,1,12,12]>,
Tensor<[1,1,12,12]>,
ttnn.subtractaten::rsub.Scalar4
198Tensor<[1,12,128]>,
Tensor<[1,12,128]>,
ttnn.subtractaten::sub.Tensor4
199Tensor<[1,12,768]>,
Tensor<[1,12,768]>,
ttnn.subtractaten::sub.Tensor4
200Tensor<[1,12,9,9]>,
Tensor<[1,12,9,9]>,
ttnn.subtractaten::_safe_softmax4
201Tensor<[1,1,9,9]>,
Tensor<[1,1,9,9]>,
ttnn.subtractaten::rsub.Scalar4
202Tensor<[1,9,128]>,
Tensor<[1,9,128]>,
ttnn.subtractaten::sub.Tensor4
203Tensor<[1,9,768]>,
Tensor<[1,9,768]>,
ttnn.subtractaten::sub.Tensor4
204Tensor<[1,16,9,9]>,
Tensor<[1,16,9,9]>,
ttnn.subtractaten::_safe_softmax4
205Tensor<[1,9,2048]>,
Tensor<[1,9,2048]>,
ttnn.subtractaten::sub.Tensor4
206Tensor<[1,9,1024]>,
Tensor<[1,9,1024]>,
ttnn.subtractaten::sub.Tensor4
207Tensor<[1,64,9,9]>,
Tensor<[1,64,9,9]>,
ttnn.subtractaten::_safe_softmax4
208Tensor<[1,9,4096]>,
Tensor<[1,9,4096]>,
ttnn.subtractaten::sub.Tensor4
209Tensor<[1,12,14,14]>,
Tensor<[1,12,14,14]>,
ttnn.subtractaten::_safe_softmax4
210Tensor<[1,1,14,14]>,
Tensor<[1,1,14,14]>,
ttnn.subtractaten::rsub.Scalar4
211Tensor<[1,14,128]>,
Tensor<[1,14,128]>,
ttnn.subtractaten::sub.Tensor4
212Tensor<[1,14,768]>,
Tensor<[1,14,768]>,
ttnn.subtractaten::sub.Tensor4
213Tensor<[1,12,50,50]>,
Tensor<[1,12,50,50]>,
ttnn.subtractaten::_safe_softmax4
214Tensor<[2,8,7,7]>,
Tensor<[2,8,7,7]>,
ttnn.subtractaten::_safe_softmax4
215Tensor<[2,1,7,7]>,
Tensor<[2,1,7,7]>,
ttnn.subtractaten::rsub.Scalar4
216Tensor<[1,50,768]>,
Tensor<[1,50,768]>,
ttnn.subtractaten::sub.Tensor4
217Tensor<[1,768]>,
Tensor<[1,768]>,
ttnn.subtractaten::sub.Tensor4
218Tensor<[2,7,512]>,
Tensor<[2,7,512]>,
ttnn.subtractaten::sub.Tensor4
219Tensor<[1,16,197,197]>,
Tensor<[1,16,197,197]>,
ttnn.subtractaten::_softmax4
220Tensor<[1,197,1024]>,
Tensor<[1,197,1024]>,
ttnn.subtractaten::sub.Tensor4
221Tensor<[27]>,
Tensor<[27]>,
ttnn.subtractaten::sub.Tensor4
222Tensor<[1,16,27,27]>,
Tensor<[1,16,27,27]>,
ttnn.subtractaten::sub.Tensor5
223Tensor<[27,1]>,
Tensor<[27,1]>,
ttnn.subtractaten::sub.Tensor4
224Tensor<[2,196,196]>,
Tensor<[2,196,196]>,
ttnn.subtractaten::sub.Tensor4
225Tensor<[197]>,
Tensor<[197]>,
ttnn.subtractaten::sub.Tensor4
226Tensor<[1,1024]>,
Tensor<[1,1024]>,
ttnn.subtractaten::sub.Tensor4
227Tensor<[1,12,27,27]>,
Tensor<[1,12,27,27]>,
ttnn.subtractaten::sub.Tensor5

stablehlo.tanh::ttnn.tanh

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,7,3072]>,
ttnn.tanhaten::tanh4
1Tensor<[1,768]>,
ttnn.tanhaten::tanh4
2Tensor<[1,32,6144]>,
ttnn.tanhaten::tanh4
3Tensor<[1,12,3072]>,
ttnn.tanhaten::tanh4
4Tensor<[1,9,3072]>,
ttnn.tanhaten::tanh4
5Tensor<[1,9,128]>,
ttnn.tanhaten::tanh4
6Tensor<[1,9,8192]>,
ttnn.tanhaten::tanh4
7Tensor<[1,9,4096]>,
ttnn.tanhaten::tanh4
8Tensor<[1,9,16384]>,
ttnn.tanhaten::tanh4
9Tensor<[1,14,3072]>,
ttnn.tanhaten::tanh4

stablehlo.transpose::ttnn.permute

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[1,64,32]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
1Tensor<[4096,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
2Tensor<[1,32,32,128]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
3Tensor<[1,32,32,128]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
4Tensor<[11008,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
5Tensor<[4096,11008]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
6Tensor<[32000,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
7Tensor<[1,7,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
8Tensor<[1,12,7,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
9Tensor<[1,12,7,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
10Tensor<[2,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
11Tensor<[1,3,16,16,16,16]>,
dims: [0, 2, 4, 3, 5, 1]
ttnn.permuteaten::permute4
12Tensor<[1,256,512]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
13Tensor<[512,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
14Tensor<[256,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
15Tensor<[512,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
16Tensor<[1000,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
17Tensor<[1,23,40,256]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
18Tensor<[1,256,920]>,
dims: [2, 0, 1]
ttnn.permuteaten::permute4
19Tensor<[256,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
20Tensor<[920,8,32]>,
dims: [1, 0, 2]
ttnn.permuteaten::transpose.int4
21Tensor<[8,920,32]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
22Tensor<[8,920,32]>,
dims: [1, 0, 2]
ttnn.permuteaten::transpose.int4
23Tensor<[2048,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
24Tensor<[256,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
25Tensor<[100,8,32]>,
dims: [1, 0, 2]
ttnn.permuteaten::transpose.int4
26Tensor<[8,100,32]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
27Tensor<[8,100,32]>,
dims: [1, 0, 2]
ttnn.permuteaten::transpose.int4
28Tensor<[6,100,1,256]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
29Tensor<[92,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
30Tensor<[4,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
31Tensor<[1,10,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
32Tensor<[768,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
33Tensor<[1,12,10,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
34Tensor<[1,12,10,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
35Tensor<[3072,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
36Tensor<[768,3072]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
37Tensor<[250002,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
38Tensor<[1,320,64,64]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
39Tensor<[1,64,64,320]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
40Tensor<[1,640,32,32]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
41Tensor<[1,32,32,640]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
42Tensor<[1,1280,16,16]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
43Tensor<[1,16,16,1280]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
44Tensor<[1,1280,8,8]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
45Tensor<[1,8,8,1280]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
46Tensor<[1280,320]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
47Tensor<[1280,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
48Tensor<[320,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
49Tensor<[320,320]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
50Tensor<[1,4096,8,40]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
51Tensor<[1,8,4096,40]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
52Tensor<[1,8,4096,40]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
53Tensor<[320,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
54Tensor<[1,9,8,40]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
55Tensor<[1,8,9,40]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
56Tensor<[2560,320]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
57Tensor<[640,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
58Tensor<[640,640]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
59Tensor<[1,1024,8,80]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
60Tensor<[1,8,1024,80]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
61Tensor<[1,8,1024,80]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
62Tensor<[640,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
63Tensor<[1,9,8,80]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
64Tensor<[1,8,9,80]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
65Tensor<[5120,640]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
66Tensor<[640,2560]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
67Tensor<[1,256,8,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
68Tensor<[1,8,256,160]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
69Tensor<[1,8,256,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
70Tensor<[1280,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
71Tensor<[1,9,8,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
72Tensor<[1,8,9,160]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
73Tensor<[10240,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
74Tensor<[1280,5120]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
75Tensor<[1,64,8,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
76Tensor<[1,8,64,160]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
77Tensor<[1,8,64,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
78Tensor<[1,25,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
79Tensor<[1,12,25,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
80Tensor<[1,12,25,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
81Tensor<[1,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
82Tensor<[1,1445,3,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
83Tensor<[1,3,1445,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
84Tensor<[1,192,1344]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
85Tensor<[1,4150,192]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
86Tensor<[192,192]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
87Tensor<[1,3,1445,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
88Tensor<[768,192]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
89Tensor<[192,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
90Tensor<[92,192]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
91Tensor<[4,192]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
92Tensor<[1,8,768]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
93Tensor<[1,12,64,8]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::permute4
94Tensor<[1,12,8,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::permute4
95Tensor<[1,768,8]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
96Tensor<[3,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
97Tensor<[1,256,8,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
98Tensor<[1,2048,8,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
99Tensor<[1,2048,8,160]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
100Tensor<[1,256,8,96]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
101Tensor<[1,8,2048,96]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
102Tensor<[256,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
103Tensor<[256,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
104Tensor<[1,8,2048,32]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
105Tensor<[1,8,256,32]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
106Tensor<[768,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
107Tensor<[262,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
108Tensor<[1000,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
109Tensor<[1,201,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
110Tensor<[1,12,201,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
111Tensor<[1,144,768]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
112Tensor<[1,768,192]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
113Tensor<[1,12,201,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
114Tensor<[1536,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
115Tensor<[3129,1536]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
116Tensor<[128,9216]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
117Tensor<[10,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
118Tensor<[1024,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
119Tensor<[1,19,16,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
120Tensor<[16,19,64]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
121Tensor<[1,16,19,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
122Tensor<[4096,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
123Tensor<[1024,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
124Tensor<[256008,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
125Tensor<[1000,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
126Tensor<[512,256,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
127Tensor<[256,128,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
128Tensor<[128,64,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
129Tensor<[64,32,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
130Tensor<[4,16,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
131Tensor<[16,1,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
132Tensor<[1,16,32,96]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
133Tensor<[4608,1536]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
134Tensor<[1,32,16,96]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
135Tensor<[16,32,96]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
136Tensor<[1536,1536]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
137Tensor<[6144,1536]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
138Tensor<[1536,6144]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
139Tensor<[250880,1536]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
140Tensor<[1,16,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
141Tensor<[1,12,16,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
142Tensor<[1,12,16,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
143Tensor<[1024,512,2,2]>,
dims: [2, 3, 1, 0]
ttnn.permuteaten::convolution4
144Tensor<[1,19200,1,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
145Tensor<[1,19200,64]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
146Tensor<[1,64,300]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
147Tensor<[1,300,1,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
148Tensor<[1,1,19200,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
149Tensor<[1,120,160,64]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
150Tensor<[1,4800,2,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
151Tensor<[1,4800,128]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
152Tensor<[1,128,300]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
153Tensor<[1,300,2,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
154Tensor<[1,2,4800,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
155Tensor<[1,60,80,128]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
156Tensor<[1,1200,5,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
157Tensor<[1,1200,320]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
158Tensor<[1,320,300]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
159Tensor<[1,300,5,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
160Tensor<[1,5,1200,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
161Tensor<[1,30,40,320]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
162Tensor<[1,300,8,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
163Tensor<[1,8,300,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
164Tensor<[1,15,20,512]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
165Tensor<[1,64,19200]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
166Tensor<[64,64]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
167Tensor<[1,1,300,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
168Tensor<[256,64]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
169Tensor<[1,19200,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
170Tensor<[1,256,19200]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
171Tensor<[64,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
172Tensor<[1,128,4800]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
173Tensor<[128,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
174Tensor<[1,2,300,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
175Tensor<[512,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
176Tensor<[1,4800,512]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
177Tensor<[1,512,4800]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
178Tensor<[128,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
179Tensor<[1,320,1200]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
180Tensor<[1,5,300,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
181Tensor<[1,1200,1280]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
182Tensor<[1,1280,1200]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
183Tensor<[1,512,300]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
184Tensor<[512,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
185Tensor<[1,8,300,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
186Tensor<[2048,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
187Tensor<[1,300,2048]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
188Tensor<[1,2048,300]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
189Tensor<[512,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
190Tensor<[1,197,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
191Tensor<[1,12,197,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
192Tensor<[1,768,196]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
193Tensor<[1,12,197,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
194Tensor<[1000,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
195Tensor<[1,16384,1,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
196Tensor<[1,16384,32]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
197Tensor<[1,32,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
198Tensor<[1,256,1,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
199Tensor<[1,1,16384,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
200Tensor<[1,128,128,32]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
201Tensor<[1,4096,2,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
202Tensor<[1,4096,64]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
203Tensor<[1,64,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
204Tensor<[1,256,2,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
205Tensor<[1,2,4096,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
206Tensor<[1,64,64,64]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
207Tensor<[1,1024,5,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
208Tensor<[1,1024,160]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
209Tensor<[1,160,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
210Tensor<[1,256,5,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
211Tensor<[1,5,1024,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
212Tensor<[1,32,32,160]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
213Tensor<[1,8,256,32]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
214Tensor<[1,16,16,256]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
215Tensor<[1,16384,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
216Tensor<[1,4096,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
217Tensor<[1,1024,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
218Tensor<[1,256,256]>,
dims: [0, 2, 1]
ttnn.permuteaten::permute4
219Tensor<[1,32,16384]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
220Tensor<[32,32]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
221Tensor<[1,1,256,32]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
222Tensor<[128,32]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
223Tensor<[1,16384,128]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
224Tensor<[1,128,16384]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
225Tensor<[32,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
226Tensor<[1,64,4096]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
227Tensor<[1,2,256,32]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
228Tensor<[1,256,4096]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
229Tensor<[1,160,1024]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
230Tensor<[160,160]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
231Tensor<[1,5,256,32]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
232Tensor<[640,160]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
233Tensor<[1,1024,640]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
234Tensor<[1,640,1024]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
235Tensor<[160,640]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
236Tensor<[1024,256]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
237Tensor<[1,256,1024]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
238Tensor<[256,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
239Tensor<[256,32]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
240Tensor<[256,160]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
241Tensor<[4672,4544]>,
dims: [1, 0]
ttnn.permuteaten::permute5
242Tensor<[1,71,7,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
243Tensor<[4544,4544]>,
dims: [1, 0]
ttnn.permuteaten::permute5
244Tensor<[18176,4544]>,
dims: [1, 0]
ttnn.permuteaten::permute5
245Tensor<[4544,18176]>,
dims: [1, 0]
ttnn.permuteaten::permute5
246Tensor<[1,32,7]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
247Tensor<[1,7,71,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
248Tensor<[1,7,1,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
249Tensor<[1,1,7,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
250Tensor<[65024,4544]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
251Tensor<[1000,1280]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
252Tensor<[1,12,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
253Tensor<[768,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
254Tensor<[1,12,12,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
255Tensor<[1,9,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
256Tensor<[1,12,9,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
257Tensor<[1,12,9,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
258Tensor<[128,768]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
259Tensor<[30000,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
260Tensor<[1,9,16,128]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
261Tensor<[2048,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
262Tensor<[2048,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
263Tensor<[1,16,9,128]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
264Tensor<[1,16,9,128]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
265Tensor<[8192,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
266Tensor<[2048,8192]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
267Tensor<[128,2048]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
268Tensor<[1,9,16,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
269Tensor<[1024,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
270Tensor<[1,16,9,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
271Tensor<[1,16,9,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
272Tensor<[128,1024]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
273Tensor<[1,9,64,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
274Tensor<[4096,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
275Tensor<[1,64,9,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
276Tensor<[1,64,9,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
277Tensor<[16384,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
278Tensor<[4096,16384]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
279Tensor<[128,4096]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
280Tensor<[1,14,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
281Tensor<[1,12,14,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
282Tensor<[1,12,14,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
283Tensor<[1,768,49]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
284Tensor<[1,50,12,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
285Tensor<[1,12,50,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
286Tensor<[1,12,50,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
287Tensor<[2,7,8,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
288Tensor<[2,8,7,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
289Tensor<[2,8,7,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::transpose.int4
290Tensor<[1,512]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
291Tensor<[2,1]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
292Tensor<[1,197,16,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
293Tensor<[1,27,27,16]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
294Tensor<[1,16,27,27]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
295Tensor<[2,196,196]>,
dims: [1, 2, 0]
ttnn.permuteaten::permute4
296Tensor<[197,197,16]>,
dims: [2, 0, 1]
ttnn.permuteaten::permute4
297Tensor<[1,16,197,64]>,
dims: [0, 2, 1, 3]
ttnn.permuteaten::permute4
298Tensor<[1,1024,196]>,
dims: [0, 2, 1]
ttnn.permuteaten::transpose.int4
299Tensor<[1,16,197,64]>,
dims: [0, 1, 3, 2]
ttnn.permuteaten::transpose.int4
300Tensor<[1,27,27,12]>,
dims: [0, 3, 1, 2]
ttnn.permuteaten::permute4
301Tensor<[1,12,27,27]>,
dims: [0, 2, 3, 1]
ttnn.permuteaten::permute4
302Tensor<[197,197,12]>,
dims: [2, 0, 1]
ttnn.permuteaten::permute4
303Tensor<[128,784]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
304Tensor<[64,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
305Tensor<[12,64]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
306Tensor<[3,12]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
307Tensor<[12,3]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
308Tensor<[64,12]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
309Tensor<[128,64]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5
310Tensor<[784,128]>,
dims: [1, 0]
ttnn.permuteaten::transpose.int5

tensor.empty

STABLE HLO Input Variations | ttnn op | Torch Name | Status
0Tensor<[32,32]>,
aten::empty_strided4
1Tensor<[7,7]>,
aten::empty_strided4
2Tensor<[19,19]>,
aten::empty_strided4

TTNN OP Traces

The following pages have traces of operations that are currently not being compiled correctly. They can be updated by running:

python tt_torch/tools/generate_md.py --excel_path <path to xlsx file> --md_dir docs/src/ops/ttnn --json_dir docs/src/ops/ttnn --failures_only

How to read these files?

The *.md / *.json files store information about ops taken from TTNN graphs. A TTNN graph could look like the following:

#device = #tt.device<workerGrid = #tt.grid<8x8, (d0, d1) -> (0, d0, d1)>, l1Map = (d0, d1)[s0, s1] -> (0, d0 floordiv s0, d1 floordiv s1, (d0 mod s0) * s1 + d1 mod s1), dramMap = (d0, d1)[s0, s1] -> (0, 0, ((((d0 floordiv s0) * 8 + d1 floordiv s1) * (s1 * s0) + (d0 mod s0) * s1 + d1 mod s1) floordiv 8192) mod 12, (((d0 floordiv s0) * 8 + d1 floordiv s1) * (s1 * s0) + (d0 mod s0) * s1 + d1 mod s1) floordiv 98304 + (((d0 floordiv s0) * 8 + d1 floordiv s1) * (s1 * s0) + (d0 mod s0) * s1 + d1 mod s1) mod 8192), meshShape = , chipIds = [0]>
#dram = #ttnn.buffer_type<dram>
#system_desc = #tt.system_desc<[{role = host, target_triple = "x86_64-pc-linux-gnu"}], [{arch = <wormhole_b0>, grid = 8x8, l1_size = 1499136, num_dram_channels = 12, dram_channel_size = 1073741824, noc_l1_address_align_bytes = 16, pcie_address_align_bytes = 32, noc_dram_address_align_bytes = 32, l1_unreserved_base = 1024, erisc_l1_unreserved_base = 1024, dram_unreserved_base = 1024, dram_unreserved_end = 1073741824, physical_cores = {worker = [ 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7,  1x0,  1x1,  1x2,  1x3,  1x4,  1x5,  1x6,  1x7,  2x0,  2x1,  2x2,  2x3,  2x4,  2x5,  2x6,  2x7,  3x0,  3x1,  3x2,  3x3,  3x4,  3x5,  3x6,  3x7,  4x0,  4x1,  4x2,  4x3,  4x4,  4x5,  4x6,  4x7,  5x0,  5x1,  5x2,  5x3,  5x4,  5x5,  5x6,  5x7,  6x0,  6x1,  6x2,  6x3,  6x4,  6x5,  6x6,  6x7,  7x0,  7x1,  7x2,  7x3,  7x4,  7x5,  7x6,  7x7] dram = [ 8x0,  9x0,  10x0,  8x1,  9x1,  10x1,  8x2,  9x2,  10x2,  8x3,  9x3,  10x3]}, supported_data_types = [<f32>, <f16>, <bf16>, <bfp_f8>, <bfp_bf8>, <bfp_f4>, <bfp_bf4>, <bfp_f2>, <bfp_bf2>, <u32>, <u16>, <u8>], supported_tile_sizes = [ 4x16,  16x16,  32x16,  4x32,  16x32,  32x32], num_cbs = 32}], [0], [3 : i32], [ 0x0x0x0]>
#system_memory = #ttnn.buffer_type<system_memory>
#ttnn_layout = #ttnn.ttnn_layout<(d0, d1, d2, d3) -> (d0 * 14336 + d1 * 14 + d2, d3), <1x1>, memref<14336x14xbf16, #system_memory>>
#ttnn_layout1 = #ttnn.ttnn_layout<(d0, d1, d2, d3) -> (d0 * 3072 + d1 * 3 + d2, d3), <1x1>, memref<3145728x3xbf16, #system_memory>>
#ttnn_layout2 = #ttnn.ttnn_layout<(d0, d1, d2, d3) -> (d0 * 14336 + d1 * 14 + d2, d3), <1x1>, memref<448x1x!tt.tile<32x32, bf16>, #dram>, interleaved>
#ttnn_layout3 = #ttnn.ttnn_layout<(d0, d1, d2, d3) -> (d0 * 14336 + d1 * 1024 + d2, d3), <1x1>, memref<448x1x!tt.tile<32x32, bf16>, #dram>, interleaved>
#ttnn_layout4 = #ttnn.ttnn_layout<(d0, d1, d2, d3) -> (d0 * 196 + d1 * 14 + d2, d3), <1x1>, memref<7x32x!tt.tile<32x32, bf16>, #dram>, interleaved>
module attributes {tt.device = #device, tt.system_desc = #system_desc} {
  func.func @main(%arg0: tensor<1x1024x14x14xbf16, #ttnn_layout>, %arg1: tensor<1024x1024x3x3xbf16, #ttnn_layout1>) -> tensor<1x1024x14x14xbf16, #ttnn_layout> {
    %0 = ""ttnn.get_device""() <{mesh_shape = #ttnn<mesh_shape 1x1>}> : () -> !tt.device<#device>
    %1 = ""ttnn.to_device""(%arg0, %0) <{memory_config = #ttnn.memory_config<<interleaved>, #dram, <<448x1>>>}> : (tensor<1x1024x14x14xbf16, #ttnn_layout>, !tt.device<#device>) -> tensor<1x1024x14x14xbf16, #ttnn_layout2>
    %2 = ""ttnn.to_layout""(%1) <{layout = #ttnn.layout<tile>}> : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> tensor<1x1024x14x14xbf16, #ttnn_layout2>
    ""ttnn.deallocate""(%1) <{force = false}> : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> ()
    %3 = ""ttnn.transpose""(%2) <{dim0 = 1 : si32, dim1 = 2 : si32}> : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> tensor<1x14x1024x14xbf16, #ttnn_layout3>
    ""ttnn.deallocate""(%2) <{force = false}> : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> ()
    %4 = ""ttnn.transpose""(%3) <{dim0 = 2 : si32, dim1 = 3 : si32}> : (tensor<1x14x1024x14xbf16, #ttnn_layout3>) -> tensor<1x14x14x1024xbf16, #ttnn_layout4>
    ""ttnn.deallocate""(%3) <{force = false}> : (tensor<1x14x1024x14xbf16, #ttnn_layout3>) -> ()
    %5 = ""ttnn.reshape""(%4) <{shape = [1 : i32, 1 : i32, 196 : i32, 1024 : i32]}> : (tensor<1x14x14x1024xbf16, #ttnn_layout4>) -> tensor<1x1x196x1024xbf16, #ttnn_layout4>
    ""ttnn.deallocate""(%4) <{force = false}> : (tensor<1x14x14x1024xbf16, #ttnn_layout4>) -> ()
    %6 = ""ttnn.empty""(%0) <{dtype = #tt.supportedDataTypes<bf16>, layout = #ttnn.layout<tile>, memory_config = #ttnn.memory_config<<interleaved>, #dram, <<7x32>>>, shape = #ttnn.shape<1x1x196x1024>}> : (!tt.device<#device>) -> tensor<1x1x196x1024xbf16, #ttnn_layout4>
    %7 = ""ttnn.conv2d""(%5, %arg1, %6, %0) <{batch_size = 1 : i32, dilation_height = 1 : i32, dilation_width = 1 : i32, groups = 1 : i32, in_channels = 1024 : i32, input_height = 14 : i32, input_width = 14 : i32, kernel_height = 3 : i32, kernel_width = 3 : i32, out_channels = 1024 : i32, padding_height = 1 : i32, padding_width = 1 : i32, stride_height = 1 : i32, stride_width = 1 : i32}> : (tensor<1x1x196x1024xbf16, #ttnn_layout4>, tensor<1024x1024x3x3xbf16, #ttnn_layout1>, tensor<1x1x196x1024xbf16, #ttnn_layout4>, !tt.device<#device>) -> tensor<1x1x196x1024xbf16, #ttnn_layout4>
    ""ttnn.deallocate""(%5) <{force = false}> : (tensor<1x1x196x1024xbf16, #ttnn_layout4>) -> ()
    %8 = ""ttnn.reshape""(%7) <{shape = [1 : i32, 14 : i32, 14 : i32, 1024 : i32]}> : (tensor<1x1x196x1024xbf16, #ttnn_layout4>) -> tensor<1x14x14x1024xbf16, #ttnn_layout4>
    ""ttnn.deallocate""(%6) <{force = false}> : (tensor<1x1x196x1024xbf16, #ttnn_layout4>) -> ()
    %9 = ""ttnn.transpose""(%8) <{dim0 = 2 : si32, dim1 = 3 : si32}> : (tensor<1x14x14x1024xbf16, #ttnn_layout4>) -> tensor<1x14x1024x14xbf16, #ttnn_layout3>
    ""ttnn.deallocate""(%8) <{force = false}> : (tensor<1x14x14x1024xbf16, #ttnn_layout4>) -> ()
    %10 = ""ttnn.transpose""(%9) <{dim0 = 1 : si32, dim1 = 2 : si32}> : (tensor<1x14x1024x14xbf16, #ttnn_layout3>) -> tensor<1x1024x14x14xbf16, #ttnn_layout2>
    ""ttnn.deallocate""(%9) <{force = false}> : (tensor<1x14x1024x14xbf16, #ttnn_layout3>) -> ()
    %11 = ""ttnn.from_device""(%10) : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> tensor<1x1024x14x14xbf16, #ttnn_layout>
    ""ttnn.deallocate""(%10) <{force = false}> : (tensor<1x1024x14x14xbf16, #ttnn_layout2>) -> ()
    %12 = ""ttnn.to_layout""(%11) <{layout = #ttnn.layout<row_major>}> : (tensor<1x1024x14x14xbf16, #ttnn_layout>) -> tensor<1x1024x14x14xbf16, #ttnn_layout>
    ""ttnn.deallocate""(%11) <{force = false}> : (tensor<1x1024x14x14xbf16, #ttnn_layout>) -> ()
    return %12 : tensor<1x1024x14x14xbf16, #ttnn_layout>
  }
}

Each line that starts with %&lt;number&gt; refers to an operation. The parser goes through all TTNN graphs generated by models under tt-torch and groups all ops with the same name together.
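
For illustration only (the actual logic lives in tt_torch/tools/generate_md.py and is not reproduced here), grouping the ops of a TTNN graph by name can be sketched roughly as follows; group_ops_by_name is a made-up helper for this example.

import re
from collections import defaultdict

# Match quoted ttnn op names such as "ttnn.conv2d" in an MLIR text dump.
OP_RE = re.compile(r'"(ttnn\.[A-Za-z0-9_]+)"')

def group_ops_by_name(mlir_text):
    """Collect every line that invokes a ttnn.* op, keyed by op name."""
    groups = defaultdict(list)
    for line in mlir_text.splitlines():
        match = OP_RE.search(line)
        if match:
            groups[match.group(1)].append(line.strip())
    return groups

# With the graph above, groups["ttnn.transpose"] would hold four lines and
# groups["ttnn.conv2d"] a single line.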

Name

The name of the operation, e.g. ttnn.add, ttnn.matmul

Input/Output Shapes

The shapes of the input/output arguments to the operation; the last element is the data type (e.g. bf16, i32)

Note: Some operations take the output as the last input.
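
As a reading aid (a minimal sketch, assuming the tensor<[...]> strings used in the trace tables below), the shape/data-type encoding can be unpacked like this; parse_shape is a hypothetical helper, not part of the tooling.

def parse_shape(shape_str):
    """Split a shape string like tensor<[1,128,512,512,bf16]> into dims and dtype."""
    inner = shape_str.strip()[len("tensor<["):-len("]>")]  # -> "1,128,512,512,bf16"
    *dims, dtype = inner.split(",")
    return [int(d) for d in dims], dtype

dims, dtype = parse_shape("tensor<[1,128,512,512,bf16]>")
assert dims == [1, 128, 512, 512] and dtype == "bf16"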

Input/Output Layouts

Please refer to the tt-mlir tensor layout documentation

Mapping From/To

e.g. (d0, d1, d2, d3) -> (d0 * 3072 + d1 * 3 + d2, d3)
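
To make the affine map concrete: it collapses a 4-D index into a (row, column) address of a 2-D buffer. The map shown above is #ttnn_layout1 from the graph earlier, which flattens the 1024x1024x3x3 weight tensor into a 3145728x3 buffer; a quick check of that arithmetic:

# Check of (d0, d1, d2, d3) -> (d0 * 3072 + d1 * 3 + d2, d3) for a
# 1024x1024x3x3 tensor: each d0 slice spans 1024 * 3 = 3072 rows, so the
# flattened buffer has 1024 * 3072 = 3145728 rows of width 3.
def to_row_col(d0, d1, d2, d3):
    return d0 * 3072 + d1 * 3 + d2, d3

assert to_row_col(0, 0, 0, 0) == (0, 0)               # first element
assert to_row_col(1023, 1023, 2, 2) == (3145727, 2)   # last element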

Memory Config

e.g. <448x1x!tt.tile<32x32, bf16>, #dram> (a worked example of how a tensor's shape maps onto such a tile grid follows the list below)

  • "tile" refers to tilized memory
  • "dram" refers to dram memory
  • "system_memory" refers to unformatted weight tensor on host
  • "interleaved" refers to interleaved memory

Attributes

Parameters passed into the operation.

Runs on TTNN

Yes / No / N/A

PCC

Pearson correlation coefficient between the op's output and the golden (reference) output

ATOL

The tolerance on absolute differences
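
As a rough sketch of how such accuracy metrics are typically computed when comparing a device output against a golden reference (this mirrors the metric definitions only; it is not the tt-torch verification code):

import torch

def pcc(golden, computed):
    """Pearson correlation coefficient between two tensors, flattened."""
    stacked = torch.stack([golden.flatten().float(), computed.flatten().float()])
    return torch.corrcoef(stacked)[0, 1].item()

def max_abs_diff(golden, computed):
    """Largest element-wise absolute difference, compared against ATOL."""
    return (golden.float() - computed.float()).abs().max().item()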

ttnn.add

This table is a trace for the ttnn.add op. Traces are generated from nightly tt-torch runs. To see nightly runs: Nightly Runs

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.addtensor<[1,128,512,512,bf16]>
tensor<[1,128,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 4096 + d1 * 32 + d2, d3), memory_config: (128, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,3,512,512,bf16]>
tensor<[1,3,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 512 + d2, d3), memory_config: (48, 16, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 32 + d2, d3), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,3,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 512 + d2, d3), memory_config: (48, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,4,14,14,bf16]>
tensor<[1,4,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,4,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,128,512,512,bf16]>
tensor<[1,128,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 4096 + d1 * 32 + d2, d3), memory_config: (128, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,256,512,512,bf16]>
tensor<[1,256,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 512 + d2, d3), memory_config: (4096, 16, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,256,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 512 + d2, d3), memory_config: (4096, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,64,1024,1024,bf16]>
tensor<[1,64,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 1024 + d2, d3), memory_config: (2048, 32, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,64,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 1024 + d2, d3), memory_config: (2048, 32, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,64,256,256,bf16]>
tensor<[1,64,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 256 + d2, d3), memory_config: (512, 8, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,64,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 256 + d2, d3), memory_config: (512, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,256,256,256,bf16]>
tensor<[1,256,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,256,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,512,256,256,bf16]>
tensor<[1,512,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 256 + d2, d3), memory_config: (4096, 8, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 32 + d2, d3), memory_config: (512, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,512,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 256 + d2, d3), memory_config: (4096, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,48,1024,1024,bf16]>
tensor<[1,48,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 49152 + d1 * 1024 + d2, d3), memory_config: (1536, 32, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 32 + d2, d3), memory_config: (48, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,48,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 49152 + d1 * 1024 + d2, d3), memory_config: (1536, 32, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,98,256,256,bf16]>
tensor<[1,98,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 25088 + d1 * 256 + d2, d3), memory_config: (784, 8, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 3136 + d1 * 32 + d2, d3), memory_config: (98, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,98,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 25088 + d1 * 256 + d2, d3), memory_config: (784, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,1,480,640,bf16]>
tensor<[1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 480 + d1 * 480 + d2, d3), memory_config: (15, 20, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 480 + d1 * 480 + d2, d3), memory_config: (15, 20, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,64,480,640,bf16]>
tensor<[1,64,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 30720 + d1 * 480 + d2, d3), memory_config: (960, 20, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,64,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 30720 + d1 * 480 + d2, d3), memory_config: (960, 20, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,64,128,128,bf16]>
tensor<[1,64,1,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 128 + d2, d3), memory_config: (256, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,64,128,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 128 + d2, d3), memory_config: (256, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.addtensor<[1,1,7,25281,2,f32]>
tensor<[1,256,7,25281,2,f32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 177184 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (5537, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')
tensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.addtensor<[16,1,1,1,si32]>
tensor<[16,250,250,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')
tensor<[16,250,250,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,250,1,1,si32]>
tensor<[16,250,250,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8000 + d1 * 32 + d2, d3), memory_config: (250, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')
tensor<[16,250,250,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,si32]>
tensor<[1,2640,768,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,2640,768,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,768,1,si32]>
tensor<[1,2640,768,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 768 + d2, d3), memory_config: (24, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,2640,768,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,si32]>
tensor<[1,300,4,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,300,4,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,4,1,si32]>
tensor<[1,300,4,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,300,4,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,si32]>
tensor<[1,300,80,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,300,80,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,80,1,si32]>
tensor<[1,300,80,1,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 96 + d2, d3), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,300,80,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,6,1,1,si32]>
tensor<[1,1,6,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,16,1,1,1,si32]>
tensor<[1,16,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,16,1,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,6,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,192,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,6,1,1,si32]>
tensor<[1,1,6,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,16,1,1,1,si32]>
tensor<[1,16,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,16,1,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,6,1,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,192,1,si32]>
tensor<[1,16,6,192,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,3,1,1,si32]>
tensor<[1,1,1,1,3,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,3,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,1,1,si32]>
tensor<[1,1,1,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,1,1,si32]>
tensor<[1,1,1,1,3,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,3,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,256,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,256,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,256,1,1,1,si32]>
tensor<[1,256,7,25281,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,25281,1,1,si32]>
tensor<[1,1,25281,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,25281,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,7,1,1,1,si32]>
tensor<[1,7,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,7,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,si32]>
tensor<[1,7,25281,2,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,7,1,1,1,si32]>
tensor<[1,7,25281,2,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,25281,1,1,si32]>
tensor<[1,7,25281,2,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,2,1,si32]>
tensor<[1,7,25281,2,1,si32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[160,si32]>
tensor<[160,si32]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')
tensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,8,1,1,1,1,si32]>
tensor<[1,8,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,8,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,1,si32]>
tensor<[1,1,1,1,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,1,1,si32]>
tensor<[1,8,160,160,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,8,1,1,1,1,si32]>
tensor<[1,8,160,160,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,160,1,1,1,si32]>
tensor<[1,8,160,160,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,160,1,1,si32]>
tensor<[1,8,160,160,1,1,si32]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')
tensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.addtensor<[1,1,1,1,si32]>
tensor<[1,256,7,25281,si32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')
tensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')nannan
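
Many of the broadcasted rows in this table (for example, a `tensor<[1,64,480,640,bf16]>` activation added to a `tensor<[1,64,1,1,bf16]>` bias) typically come from plain elementwise adds with broadcasting in the source model. The snippet below is a minimal sketch of PyTorch code that would produce such a pair of input shapes; the tensor names are illustrative and the tt-torch compile call is omitted:

```python
import torch

# Activation and a per-channel bias, matching one of the traced shape pairs above.
x = torch.randn(1, 64, 480, 640, dtype=torch.bfloat16)
bias = torch.randn(1, 64, 1, 1, dtype=torch.bfloat16)

# Broadcasts the bias over the spatial dims; in an op-by-op run this would be
# recorded as an add of tensor<[1,64,480,640,bf16]> and tensor<[1,64,1,1,bf16]>.
y = x + bias
```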

ttnn.arange

This table is a trace for the ttnn.arange op. Traces are generated from nightly tt-torch runs; see the Nightly Runs page for the latest results.
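
The rows below generally originate from `torch.arange` calls (or index-generation code that lowers to them), with `start: 0`, `step: 1`, and the requested length as `end`. A minimal sketch; the helper name is illustrative:

```python
import torch

def make_indices(n: int) -> torch.Tensor:
    # Produces an si32 range [0, n); e.g. n=16 corresponds to the
    # `end: 16 : i64` / tensor<[16,si32]> rows in the table below.
    return torch.arange(0, n, 1, dtype=torch.int32)
```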

| Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL |
|------|--------------|---------------|------------|---------------|----------------|-----|------|
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 16 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[16,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 250 : i64; memory_config: #ttnn.memory_config<#dram, <<1x8>>, >; start: 0 : i64; step: 1 : i64 | tensor<[250,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 250, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 768 : i64; memory_config: #ttnn.memory_config<#dram, <<1x24>>, >; start: 0 : i64; step: 1 : i64 | tensor<[768,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 768, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 4 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[4,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 80 : i64; memory_config: #ttnn.memory_config<#dram, <<1x3>>, >; start: 0 : i64; step: 1 : i64 | tensor<[80,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 80, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 6 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[6,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 16 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[16,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 6 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[6,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 16 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[16,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 3 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[3,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 25281 : i64; memory_config: #ttnn.memory_config<#dram, <<1x791>>, >; start: 0 : i64; step: 1 : i64 | tensor<[25281,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 7 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[7,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 160 : i64; memory_config: #ttnn.memory_config<#dram, <<1x5>>, >; start: 0 : i64; step: 1 : i64 | tensor<[160,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 8 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[8,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'si32', 'dram') | nan | nan |
| ttnn.arange | !ttnn.device |  | dtype: #tt.supportedDataTypes; end: 1 : i64; memory_config: #ttnn.memory_config<#dram, <<1x1>>, >; start: 0 : i64; step: 1 : i64 | tensor<[1,si32]> | mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram') | nan | nan |

ttnn.concat

This table is a trace for the ttnn.concat op. Traces are generated from nightly tt-torch runs; see the Nightly Runs page for the latest results.
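
These rows come from concatenation in the source model, e.g. `torch.cat`, or from `torch.stack`, which is lowered to a concat of size-1 slices along a new trailing dimension and resembles the `dim: 3` / `dim: 4` rows with singleton inputs below. A minimal sketch with illustrative shapes:

```python
import torch

a = torch.randn(1, 3, 128, 128, dtype=torch.bfloat16)
b = torch.randn(1, 3, 128, 128, dtype=torch.bfloat16)

# Concatenation along the channel dim; traced as a concat with dim: 1.
cat_channels = torch.cat([a, a, b, b], dim=1)

# Stacking along a new last dim behaves like a concat of size-1 tensors
# along that dim (compare the dim: 3 / dim: 4 rows with singleton inputs).
stacked = torch.stack([a, b, a], dim=-1)
```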

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.concattensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
tensor<[1,3,128,128,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 128 + d2, d3), memory_config: (12, 4, 'tile<32x32, bf16>', 'dram')
dim: 1 : si32tensor<[1,192,128,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 24576 + d1 * 128 + d2, d3), memory_config: (768, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concatdim: 1 : si32nannan
ttnn.concattensor<[16,250,250,1,bf16]>
tensor<[16,250,250,1,bf16]>
tensor<[16,250,250,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')
dim: 3 : si32tensor<[16,250,250,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,2640,768,1,bf16]>
tensor<[1,2640,768,1,bf16]>
tensor<[1,2640,768,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')
dim: 3 : si32tensor<[1,2640,768,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,300,4,1,bf16]>
tensor<[1,300,4,1,bf16]>
tensor<[1,300,4,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')
dim: 3 : si32tensor<[1,300,4,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,300,80,1,bf16]>
tensor<[1,300,80,1,bf16]>
tensor<[1,300,80,1,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')
dim: 3 : si32tensor<[1,300,80,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,16,6,192,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
tensor<[1,16,6,192,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,16,6,192,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
dim: 6 : si32tensor<[1,1,1,1,3,1,6,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
tensor<[1,256,7,25281,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.concattensor<[1,7,25281,2,1,bf16]>
tensor<[1,7,25281,2,1,bf16]>
tensor<[1,7,25281,2,1,bf16]>
tensor<[1,7,25281,2,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')
dim: 4 : si32tensor<[1,7,25281,2,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')nannan

ttnn.constant

This table is a trace for the ttnn.constant op. Traces are generated from nightly tt-torch runs; see the Nightly Runs page for the latest results.
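
`ttnn.constant` rows carry values that are baked into the compiled graph, for example small buffers or literals captured during tracing and constant evaluation; the `dense<...>` attribute holds the literal values. A minimal sketch, assuming a small buffer registered on the module ends up folded into a constant:

```python
import torch

class WithConstant(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Captured at trace time; values like these can appear verbatim in the
        # dense<...> attribute of a constant op in the trace (assumed lowering).
        self.register_buffer("scale", torch.tensor([[4.0], [2.0], [1.0]]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale
```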

| Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL |
|------|--------------|---------------|------------|---------------|----------------|-----|------|
| ttnn.constant |  |  | value: dense<[[1.280000e+05], [5.120000e+02], [1.000000e+00]]> : tensor<3x1xf32> | tensor<[3,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[1.689600e+05], [7.680000e+02], [1.000000e+00]]> : tensor<3x1xf32> | tensor<[3,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[3.360000e+04], [4.000000e+00], [1.000000e+00]]> : tensor<3x1xf32> | tensor<[3,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[6.720000e+05], [8.000000e+01], [1.000000e+00]]> : tensor<3x1xf32> | tensor<[3,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[1.228800e+04], [7.680000e+02], [1.280000e+02], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[6.144000e+03], [3.840000e+02], [6.400000e+01], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[1.200000e+01], [1.200000e+01], [1.200000e+01], [1.200000e+01], [4.000000e+00], [1.000000e+00]]> : tensor<6x1xf32> | tensor<[6,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[5.017600e+04], [1.960000e+02], [1.400000e+01], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[2.007040e+05], [7.840000e+02], [2.800000e+01], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[1.254400e+04], [4.900000e+01], [7.000000e+00], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[7.078680e+05], [1.011240e+05], [4.000000e+00], [1.000000e+00]]> : tensor<4x1xf32> | tensor<[4,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory') | nan | nan |
| ttnn.constant |  |  | value: dense<[[6.144000e+05], [7.680000e+04], [4.800000e+02], [3.000000e+00], [1.000000e+00]]> : tensor<5x1xf32> | tensor<[5,1,f32]> | mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5, 1, 'f32', 'system_memory') | nan | nan |

ttnn.conv2d

This table is a trace for the ttnn.conv2d op. Traces are generated from nightly tt-torch runs; see the Nightly Runs page for the latest results.
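
The rows below correspond to 2D convolutions; note that the traced activation is flattened to `tensor<[1,1,N*H*W,C]>` (for example, a 1x128x512x512 NCHW input appears as `tensor<[1,1,262144,128,bf16]>`). A minimal PyTorch sketch matching the attributes of the first row below (bias omitted, which is an assumption):

```python
import torch

# 128 -> 128 channels, 3x3 kernel, stride 1, padding 1, as in the first trace row.
conv = torch.nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False).to(torch.bfloat16)

x = torch.randn(1, 128, 512, 512, dtype=torch.bfloat16)  # NCHW activation
y = conv(x)  # appears in the trace with the activation flattened to [1, 1, 512*512, 128]
```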

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.conv2dtensor<[1,1,262144,128,bf16]>
tensor<[128,128,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 3 + d2, d3), memory_config: (49152, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 128 : i32
input_height: 512 : i32
input_width: 512 : i32
kernel_size: array<i32: 3, 3>
out_channels: 128 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,262144,128,bf16]>
tensor<[3,128,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 384 + d1 * 3 + d2, d3), memory_config: (1152, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 128 : i32
input_height: 512 : i32
input_width: 512 : i32
kernel_size: array<i32: 3, 3>
out_channels: 3 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,262144,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,196,16,bf16]>
tensor<[4,16,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 48 + d1 * 3 + d2, d3), memory_config: (192, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 16 : i32
input_height: 14 : i32
input_width: 14 : i32
kernel_size: array<i32: 3, 3>
out_channels: 4 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,262144,256,bf16]>
tensor<[128,256,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 3 + d2, d3), memory_config: (98304, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 256 : i32
input_height: 512 : i32
input_width: 512 : i32
kernel_size: array<i32: 3, 3>
out_channels: 128 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,262144,256,bf16]>
tensor<[256,256,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 3 + d2, d3), memory_config: (196608, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 256 : i32
input_height: 512 : i32
input_width: 512 : i32
kernel_size: array<i32: 3, 3>
out_channels: 256 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,1048576,3,bf16]>
tensor<[64,3,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 3, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9 + d1 * 3 + d2, d3), memory_config: (576, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 3 : i32
input_height: 1024 : i32
input_width: 1024 : i32
kernel_size: array<i32: 3, 3>
out_channels: 64 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,65536,480,bf16]>
tensor<[64,480,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 480, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1440 + d1 * 3 + d2, d3), memory_config: (92160, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 480 : i32
input_height: 256 : i32
input_width: 256 : i32
kernel_size: array<i32: 3, 3>
out_channels: 64 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,65536,512,bf16]>
tensor<[256,512,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 3 + d2, d3), memory_config: (393216, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 512 : i32
input_height: 256 : i32
input_width: 256 : i32
kernel_size: array<i32: 3, 3>
out_channels: 256 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,65536,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,65536,512,bf16]>
tensor<[512,512,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 3 + d2, d3), memory_config: (786432, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 512 : i32
input_height: 256 : i32
input_width: 256 : i32
kernel_size: array<i32: 3, 3>
out_channels: 512 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,1048576,64,bf16]>
tensor<[48,64,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 64, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 192 + d1 * 3 + d2, d3), memory_config: (9216, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 64 : i32
input_height: 1024 : i32
input_width: 1024 : i32
kernel_size: array<i32: 3, 3>
out_channels: 48 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,1048576,48,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,65536,64,bf16]>
tensor<[98,64,7,7,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 64, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 7 + d2, d3), memory_config: (43904, 7, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 64 : i32
input_height: 256 : i32
input_width: 256 : i32
kernel_size: array<i32: 7, 7>
out_channels: 98 : i32
padding: array<i32: 3, 3>
stride: array<i32: 1, 1>
tensor<[1,1,65536,98,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,307200,64,bf16]>
tensor<[1,64,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 192 + d1 * 3 + d2, d3), memory_config: (192, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 64 : i32
input_height: 480 : i32
input_width: 640 : i32
kernel_size: array<i32: 3, 3>
out_channels: 1 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,307200,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,307200,64,bf16]>
tensor<[64,64,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 192 + d1 * 3 + d2, d3), memory_config: (12288, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 64 : i32
input_height: 480 : i32
input_width: 640 : i32
kernel_size: array<i32: 3, 3>
out_channels: 64 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.conv2dtensor<[1,1,16384,960,bf16]>
tensor<[64,960,3,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (16384, 960, 'bf16', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2880 + d1 * 3 + d2, d3), memory_config: (184320, 3, 'bf16', 'system_memory')
batch_size: 1 : i32
dilation: array<i32: 1, 1>
groups: 1 : i32
in_channels: 960 : i32
input_height: 128 : i32
input_width: 128 : i32
kernel_size: array<i32: 3, 3>
out_channels: 64 : i32
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,16384,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 2, 'tile<32x32, bf16>', 'dram')nannan
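For reference, a minimal PyTorch sketch of a convolution matching one of the traced configurations above (in_channels=16, out_channels=4, 3x3 kernel, stride 1, padding 1 on a 14x14 input). The module and variable names are illustrative only; the nightly traces run in bf16, while the sketch uses the default float32 for portability.

```python
import torch

# Illustrative only: a conv matching the traced attributes
# in_channels=16, out_channels=4, kernel_size=3x3, stride=1, padding=1.
conv = torch.nn.Conv2d(16, 4, kernel_size=3, stride=1, padding=1, bias=False)
x = torch.randn(1, 16, 14, 14)  # NCHW input, 14x14 spatial size

# When compiled through tt-torch, an op like this is what produces a
# ttnn.conv2d trace row such as tensor<[1,1,196,16]> -> tensor<[1,1,196,4]>.
y = conv(x)
print(y.shape)  # torch.Size([1, 4, 14, 14])
```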

ttnn.embedding

This table is a trace for the ttnn.embedding op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
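As a rough illustration (shapes are taken from the first row below; names and dtypes are assumptions, not from a specific model), an embedding lookup like the following is the kind of op that lowers to ttnn.embedding:

```python
import torch

# Illustrative sketch: a ui32 index tensor gathering rows from a [vocab, 1]
# weight table, matching tensor<[1000000]> x tensor<[2048000,1]> below.
# The nightly traces hold the weights in bf16; float32 is used here.
emb = torch.nn.Embedding(num_embeddings=2048000, embedding_dim=1)
indices = torch.randint(0, 2048000, (1_000_000,), dtype=torch.long)

out = emb(indices)
print(out.shape)  # torch.Size([1000000, 1])
```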

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.embeddingtensor<[1000000,ui32]>
tensor<[2048000,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1000000, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (64000, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1000000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (31250, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[2027520,ui32]>
tensor<[168960,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2027520, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5280, 1, 'tile<32x32, bf16>', 'dram')
tensor<[2027520,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[1200,ui32]>
tensor<[33600,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1200, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1050, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1200,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (38, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[24000,ui32]>
tensor<[672000,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24000, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (21000, 1, 'tile<32x32, bf16>', 'dram')
tensor<[24000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (750, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[3,ui32]>
tensor<[12,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[3,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[45303552,ui32]>
tensor<[50176,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1568, 1, 'tile<32x32, bf16>', 'dram')
tensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[45303552,ui32]>
tensor<[200704,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6272, 1, 'tile<32x32, bf16>', 'dram')
tensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.embeddingtensor<[45303552,ui32]>
tensor<[12544,1,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (392, 1, 'tile<32x32, bf16>', 'dram')
tensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')nannan

ttnn.from_device

This table is a trace for the ttnn.from_device op. Traces are generated from nightly tt-torch runs; see Nightly Runs.

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.from_devicetensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')tensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'dram')tensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'dram')tensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')tensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')tensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'dram')tensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram')tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 250, 'si32', 'dram')tensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 250, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, u32>', 'dram')tensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 768, 'si32', 'dram')tensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 768, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, u32>', 'dram')tensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'si32', 'dram')tensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, u32>', 'dram')tensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 80, 'si32', 'dram')tensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 80, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, u32>', 'dram')tensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'dram')tensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram')tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'dram')tensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'dram')tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'si32', 'dram')tensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'dram')tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.from_devicetensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'si32', 'dram')tensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'ui32', 'dram')tensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'si32', 'dram')tensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'ui32', 'dram')tensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'ui32', 'dram')tensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'si32', 'dram')tensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'si32', 'dram')tensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'ui32', 'dram')tensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'dram')tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')nannan
ttnn.from_devicetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 800 + d1 * 800 + d2, d3), memory_config: (25, 1, 'tile<32x32, bf16>', 'dram')tensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 800 + d1 * 800 + d2, d3), memory_config: (25, 1, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')tensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'dram')tensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'system_memory')nannan
ttnn.from_devicetensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')tensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.from_devicetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 1, 'f32', 'dram')tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 1, 'f32', 'system_memory')nannan

ttnn.full

This table is a trace for the ttnn.full op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
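As an illustrative sketch (shapes from the rows below; the traces record the dtypes as ui32 and bf16), constant-fill tensor creation in the model is what typically lowers to ttnn.full with fillValue 0.0 or 1.0:

```python
import torch

# Illustrative only: constant tensors that lower to ttnn.full when compiled.
zeros = torch.zeros(16, dtype=torch.int32)                 # fillValue 0, like the tensor<[16,ui32]> rows
ones = torch.ones(16, dtype=torch.int32)                   # fillValue 1
mask = torch.zeros(1, 16, 14, 14, dtype=torch.bfloat16)    # like the [1,16,14,14] bf16 row
```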

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[16,250,250,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,2640,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,300,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,300,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,16,6,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,16,6,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,1,1,1,3,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,7,25281,2,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 1.000000e+00 : f32tensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'ui32', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,8,160,160,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,16,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,16,28,28,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,4,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.full!ttnn.devicefillValue: 0.000000e+00 : f32tensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')nannan

ttnn.get_device

This table is a trace for the ttnn.get_device op. Traces are generated from nightly tt-torch runs; see Nightly Runs.

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc7)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc7)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc7)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc4)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc3)nannan
ttnn.get_devicemesh_offset: #ttnn<mesh_offset 0x0>
mesh_shape: #ttnn<mesh_shape 1x1>
!ttnn.device loc(#loc5)nannan

ttnn.matmul

This table is a trace for the ttnn.matmul op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
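For reference, a minimal sketch of the broadcasting matrix multiply in the first row below (tensor<[16,250,250,3]> x tensor<[3,1]> -> tensor<[16,250,250,1]>); variable names are illustrative and float32 is used in place of the traced dtype:

```python
import torch

# Illustrative sketch: the 2-D right-hand side is broadcast across the
# leading batch dimensions of the 4-D left-hand side.
a = torch.randn(16, 250, 250, 3)
b = torch.randn(3, 1)

out = torch.matmul(a, b)
print(out.shape)  # torch.Size([16, 250, 250, 1])
```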

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.matmultensor<[16,250,250,3,f32]>
tensor<[3,1,f32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[16,250,250,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,2640,768,3,f32]>
tensor<[3,1,f32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,2640,768,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,300,4,3,f32]>
tensor<[3,1,f32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,300,4,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,300,80,3,f32]>
tensor<[3,1,f32]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,300,80,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,1,1,1,3,1,6,f32]>
tensor<[6,1,f32]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,1,1,1,3,1,1,f32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,256,7,25281,4,f32]>
tensor<[4,1,f32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,256,7,25281,4,f32]>
tensor<[4,1,f32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.matmultensor<[1,256,7,25281,4,f32]>
tensor<[4,1,f32]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')
transpose_a: False
transpose_b: False
tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan

ttnn.max_pool2d

This table is a trace for the ttnn.max_pool2d op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
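As an illustrative sketch of the pooling configurations traced below (kernel/stride/padding values taken from the rows; names and float32 dtype are assumptions):

```python
import torch

# Illustrative only: pooling configs matching the traced rows.
pool_a = torch.nn.MaxPool2d(kernel_size=2, stride=2)              # 28x28 -> 14x14
pool_b = torch.nn.MaxPool2d(kernel_size=3, stride=1, padding=1)   # 14x14 -> 14x14

x = torch.randn(1, 16, 28, 28)
print(pool_a(x).shape)   # torch.Size([1, 16, 14, 14])

y = torch.randn(1, 528, 14, 14)
print(pool_b(y).shape)   # torch.Size([1, 528, 14, 14])
```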

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.max_pool2dtensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 784 + d1 * 784 + d2, d3), memory_config: (784, 16, 'bf16', 'dram')batch_size: 1 : si32
ceil_mode: False
channels: 16 : si32
dilation: array<i32: 1, 1>
input_height: 28 : si32
input_width: 28 : si32
kernel_size: array<i32: 2, 2>
padding: array<i32: 0, 0>
stride: array<i32: 2, 2>
tensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'dram')nannan
ttnn.max_pool2dtensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 4, 'bf16', 'dram')batch_size: 1 : si32
ceil_mode: False
channels: 4 : si32
dilation: array<i32: 1, 1>
input_height: 14 : si32
input_width: 14 : si32
kernel_size: array<i32: 2, 2>
padding: array<i32: 0, 0>
stride: array<i32: 2, 2>
tensor<[1,1,49,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 49 + d1 * 49 + d2, d3), memory_config: (49, 4, 'bf16', 'dram')nannan
ttnn.max_pool2dtensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'dram')batch_size: 1 : si32
ceil_mode: False
channels: 528 : si32
dilation: array<i32: 1, 1>
input_height: 14 : si32
input_width: 14 : si32
kernel_size: array<i32: 3, 3>
padding: array<i32: 1, 1>
stride: array<i32: 1, 1>
tensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'dram')nannan

ttnn.maximum

This table is a trace for the ttnn.maximum op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
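A minimal sketch of the element-wise maximum in the rows below (shapes from the first row; variable names are illustrative):

```python
import torch

# Illustrative only: element-wise maximum of two same-shaped bf16 tensors.
a = torch.randn(1, 16, 14, 14, dtype=torch.bfloat16)
b = torch.randn(1, 16, 14, 14, dtype=torch.bfloat16)

out = torch.maximum(a, b)
print(out.shape)  # torch.Size([1, 16, 14, 14])
```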

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.maximumtensor<[1,16,14,14,bf16]>
tensor<[1,16,14,14,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,16,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.maximumtensor<[1,16,28,28,bf16]>
tensor<[1,16,28,28,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,16,28,28,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.maximumtensor<[1,4,14,14,bf16]>
tensor<[1,4,14,14,bf16]>
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,4,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan

ttnn.multiply

This table is a trace for the ttnn.multiply op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
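A minimal sketch of the element-wise multiply in the rows below, which operate on small broadcast-shaped bf16 tensors (shape from one of the rows; variable names are illustrative):

```python
import torch

# Illustrative only: element-wise multiply of two [1,16,1,1,1] bf16 tensors.
a = torch.randn(1, 16, 1, 1, 1, dtype=torch.bfloat16)
b = torch.randn(1, 16, 1, 1, 1, dtype=torch.bfloat16)

out = a * b
print(out.shape)  # torch.Size([1, 16, 1, 1, 1])
```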

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.multiplytensor<[1,1,6,1,1,bf16]>
tensor<[1,1,6,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,16,1,1,1,bf16]>
tensor<[1,16,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,1,bf16]>
tensor<[1,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,6,1,1,bf16]>
tensor<[1,1,6,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,16,1,1,1,bf16]>
tensor<[1,16,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,1,bf16]>
tensor<[1,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,3,1,1,bf16]>
tensor<[1,1,1,1,3,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,3,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,1,1,1,bf16]>
tensor<[1,1,1,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,25281,1,1,bf16]>
tensor<[1,1,25281,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,25281,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,7,1,1,1,bf16]>
tensor<[1,7,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,7,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,1,bf16]>
tensor<[1,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[160,bf16]>
tensor<[160,bf16]>
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')
tensor<[160,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,8,1,1,1,1,bf16]>
tensor<[1,8,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,8,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.multiplytensor<[1,1,1,1,1,1,bf16]>
tensor<[1,1,1,1,1,1,bf16]>
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')
tensor<[1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan

ttnn.permute

This table is a trace for the ttnn.permute op. Traces are generated from nightly tt-torch runs; see Nightly Runs.
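The permutations traced below are largely NCHW-to-NHWC layout conversions (and back) around convolution-style ops. A minimal sketch, with shapes from the first row and illustrative variable names:

```python
import torch

# Illustrative only: layout permutations matching the traced rows.
x = torch.randn(1, 128, 512, 512)       # NCHW
nhwc = x.permute(0, 2, 3, 1)            # [1, 512, 512, 128], permutation (0, 2, 3, 1)
nchw = nhwc.permute(0, 3, 1, 2)         # back to [1, 128, 512, 512]
print(nhwc.shape, nchw.shape)
```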

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.permutetensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,512,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,3,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 512 + d2, d3), memory_config: (48, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,16,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,14,14,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,14,14,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,4,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 512 + d2, d3), memory_config: (4096, 16, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,128,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 512 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 512 + d2, d3), memory_config: (4096, 16, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,256,512,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 512 + d2, d3), memory_config: (4096, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,3,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 3072 + d1 * 1024 + d2, d3), memory_config: (96, 32, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,1024,1024,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,1024,1024,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,64,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 1024 + d2, d3), memory_config: (2048, 32, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,480,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 122880 + d1 * 256 + d2, d3), memory_config: (3840, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,256,256,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,256,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,64,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 256 + d2, d3), memory_config: (512, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 256 + d2, d3), memory_config: (4096, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,256,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,512,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 256 + d2, d3), memory_config: (4096, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,512,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 131072 + d1 * 256 + d2, d3), memory_config: (4096, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,64,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 1024 + d2, d3), memory_config: (2048, 32, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,1024,1024,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,1024,1024,48,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,48,1024,1024,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 49152 + d1 * 1024 + d2, d3), memory_config: (1536, 32, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,64,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 256 + d2, d3), memory_config: (512, 8, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,256,256,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,256,256,98,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 4, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,98,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 25088 + d1 * 256 + d2, d3), memory_config: (784, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,64,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 30720 + d1 * 480 + d2, d3), memory_config: (960, 20, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,480,640,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,1,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 480 + d1 * 480 + d2, d3), memory_config: (15, 20, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,64,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 30720 + d1 * 480 + d2, d3), memory_config: (960, 20, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,64,480,640,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 30720 + d1 * 480 + d2, d3), memory_config: (960, 20, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,960,128,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 122880 + d1 * 128 + d2, d3), memory_config: (3840, 4, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,128,128,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 128 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,128,128,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 128 + d2, d3), memory_config: (512, 2, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,64,128,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 128 + d2, d3), memory_config: (256, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,16,28,28,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,28,28,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 896 + d1 * 32 + d2, d3), memory_config: (28, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,14,14,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,16,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,4,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,14,14,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,7,7,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 32 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,4,7,7,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,528,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16896 + d1 * 32 + d2, d3), memory_config: (528, 1, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 2, 3, 1>tensor<[1,14,14,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 17, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,14,14,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 17, 'tile<32x32, bf16>', 'dram')permutation: array<i64: 0, 3, 1, 2>tensor<[1,528,14,14,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16896 + d1 * 32 + d2, d3), memory_config: (528, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.permutetensor<[1,220,12,1,768,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 84480 + d1 * 384 + d2 * 32 + d3, d4), memory_config: (2640, 24, 'tile<32x32, f32>', 'dram')permutation: array<i64: 1, 2, 4, 0, 3>tensor<[220,12,768,1,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 294912 + d1 * 24576 + d2 * 32 + d3, d4), memory_config: (2027520, 1, 'tile<32x32, f32>', 'dram')nannan
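
Most rows above are 4-D permutes with permutation (0, 2, 3, 1) or its inverse (0, 3, 1, 2), i.e. NCHW-to-NHWC layout conversions and back. As an illustration only, here is a minimal PyTorch-level sketch based on the first row of the table (hypothetical input, not the tt-torch or TTNN API):

import torch

# Hypothetical input matching the first row: [1, 128, 512, 512] in bf16
x = torch.randn(1, 128, 512, 512, dtype=torch.bfloat16)

# permutation: array<i64: 0, 2, 3, 1> corresponds to an NCHW -> NHWC permute
y = x.permute(0, 2, 3, 1)   # shape [1, 512, 512, 128]

# the inverse permutation array<i64: 0, 3, 1, 2> maps NHWC back to NCHW
z = y.permute(0, 3, 1, 2)   # shape [1, 128, 512, 512]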

ttnn.reshape

This table is a trace for the ttnn.reshape op. Traces are generated from nightly tt-torch runs; see Nightly Runs for the latest results. A short PyTorch-level sketch of matching reshapes follows the table.

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.reshapetensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 262144 : i32, 128 : i32]tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 512 : i32, 512 : i32, 128 : i32]tensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[128,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 128 : i32, 1 : i32, 1 : i32]tensor<[1,128,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 4096 + d1 * 32 + d2, d3), memory_config: (128, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 262144 : i32, 128 : i32]tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,262144,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 512 : i32, 512 : i32, 3 : i32]tensor<[1,512,512,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[3,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 3 : i32, 1 : i32, 1 : i32]tensor<[1,3,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 32 + d2, d3), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,14,14,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 196 : i32, 16 : i32]tensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 14 : i32, 14 : i32, 4 : i32]tensor<[1,14,14,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[4,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 4 : i32, 1 : i32, 1 : i32]tensor<[1,4,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 128 + d1 * 32 + d2, d3), memory_config: (4, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 262144 : i32, 256 : i32]tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 512 : i32, 512 : i32, 128 : i32]tensor<[1,512,512,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[128,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 128 : i32, 1 : i32, 1 : i32]tensor<[1,128,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 4096 + d1 * 32 + d2, d3), memory_config: (128, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 262144 : i32, 256 : i32]tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 512 : i32, 512 : i32, 256 : i32]tensor<[1,512,512,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 512 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[256,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 1 : i32, 1 : i32]tensor<[1,256,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1024,1024,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 1048576 : i32, 3 : i32]tensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1024 : i32, 1024 : i32, 64 : i32]tensor<[1,1024,1024,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[64,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 64 : i32, 1 : i32, 1 : i32]tensor<[1,64,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,256,256,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 65536 : i32, 480 : i32]tensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 256 : i32, 64 : i32]tensor<[1,256,256,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[64,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 64 : i32, 1 : i32, 1 : i32]tensor<[1,64,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 65536 : i32, 512 : i32]tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,65536,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 256 : i32, 256 : i32]tensor<[1,256,256,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 8, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[256,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 1 : i32, 1 : i32]tensor<[1,256,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 65536 : i32, 512 : i32]tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 256 : i32, 512 : i32]tensor<[1,256,256,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[512,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 512 : i32, 1 : i32, 1 : i32]tensor<[1,512,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 32 + d2, d3), memory_config: (512, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1024,1024,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 1048576 : i32, 64 : i32]tensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,1048576,48,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1024 : i32, 1024 : i32, 48 : i32]tensor<[1,1024,1024,48,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1024 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[48,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 48 : i32, 1 : i32, 1 : i32]tensor<[1,48,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1536 + d1 * 32 + d2, d3), memory_config: (48, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,256,256,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 65536 : i32, 64 : i32]tensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,65536,98,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 256 : i32, 256 : i32, 98 : i32]tensor<[1,256,256,98,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 256 + d2, d3), memory_config: (2048, 4, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[98,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 98 : i32, 1 : i32, 1 : i32]tensor<[1,98,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 3136 + d1 * 32 + d2, d3), memory_config: (98, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 307200 : i32, 64 : i32]tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,307200,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 480 : i32, 640 : i32, 1 : i32]tensor<[1,480,640,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 307200 : i32, 64 : i32]tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 480 : i32, 640 : i32, 64 : i32]tensor<[1,480,640,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 640 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[64,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 64 : i32, 1 : i32, 1 : i32]tensor<[1,64,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,128,128,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 128 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 16384 : i32, 960 : i32]tensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,16384,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 128 : i32, 128 : i32, 64 : i32]tensor<[1,128,128,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 128 + d2, d3), memory_config: (512, 2, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[64,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 64 : i32, 1 : i32, 1 : i32]tensor<[1,64,1,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2048 + d1 * 32 + d2, d3), memory_config: (64, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [16 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[250,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 250 : i32, 1 : i32, 1 : i32]tensor<[1,250,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8000 + d1 * 32 + d2, d3), memory_config: (250, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[16,250,250,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 256 + d1, d2), memory_config: (128, 8, 'tile<32x32, u32>', 'dram')shape: [16 : i32, 250 : i32, 250 : i32, 1 : i32]tensor<[16,250,250,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[16,250,512,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 256 + d1, d2), memory_config: (128, 16, 'tile<32x32, f32>', 'dram')shape: [2048000 : i32, 1 : i32]tensor<[2048000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (64000, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[16,250,250,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, f32>', 'dram')shape: [1000000 : i32]tensor<[1000000,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1000000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (31250, 1, 'tile<32x32, f32>', 'dram')shape: [16 : i32, 250 : i32, 250 : i32]tensor<[16,250,250,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 256 + d1, d2), memory_config: (128, 8, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,2640,768,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 2656 + d1, d2), memory_config: (83, 24, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 2640 : i32, 768 : i32, 1 : i32]tensor<[1,2640,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[768,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 768 : i32, 1 : i32]tensor<[1,1,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 768 + d2, d3), memory_config: (24, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,220,768,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 224 + d1, d2), memory_config: (7, 24, 'tile<32x32, f32>', 'dram')shape: [168960 : i32, 1 : i32]tensor<[168960,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5280, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,2640,768,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')shape: [2027520 : i32]tensor<[2027520,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[2027520,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 2640 : i32, 768 : i32]tensor<[1,2640,768,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 2656 + d1, d2), memory_config: (83, 24, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,300,4,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 300 : i32, 4 : i32, 1 : i32]tensor<[1,300,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[4,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 4 : i32, 1 : i32]tensor<[1,1,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,8400,4,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 8416 + d1, d2), memory_config: (263, 1, 'tile<32x32, f32>', 'dram')shape: [33600 : i32, 1 : i32]tensor<[33600,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1050, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,300,4,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, f32>', 'dram')shape: [1200 : i32]tensor<[1200,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1200,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (38, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 300 : i32, 4 : i32]tensor<[1,300,4,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,300,80,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 3, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 300 : i32, 80 : i32, 1 : i32]tensor<[1,300,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[80,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 80 : i32, 1 : i32]tensor<[1,1,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 96 + d2, d3), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,8400,80,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 8416 + d1, d2), memory_config: (263, 3, 'tile<32x32, f32>', 'dram')shape: [672000 : i32, 1 : i32]tensor<[672000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (21000, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,300,80,1,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, f32>', 'dram')shape: [24000 : i32]tensor<[24000,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[24000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (750, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 300 : i32, 80 : i32]tensor<[1,300,80,f32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 3, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 6 : i32, 1 : i32, 1 : i32]tensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 16 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[192,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 192 : i32, 1 : i32]tensor<[1,1,1,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,16,6,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 4, 'tile<32x32, bf16>', 'dram')shape: [12288 : i32, 1 : i32]tensor<[12288,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (384, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 6 : i32, 1 : i32, 1 : i32]tensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 16 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[192,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 192 : i32, 1 : i32]tensor<[1,1,1,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,16,6,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 512 + d1 * 32 + d2, d3), memory_config: (16, 2, 'tile<32x32, bf16>', 'dram')shape: [6144 : i32, 1 : i32]tensor<[6144,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (192, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 3 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,3,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,1,1,1,3,4,f32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')shape: [12 : i32, 1 : i32]tensor<[12,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,1,1,1,3,1,1,f32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, f32>', 'dram')shape: [3 : i32]tensor<[3,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 3 : i32, 1 : i32]tensor<[1,1,1,1,3,1,f32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32, 1 : i32]tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,14,14,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, f32>', 'dram')shape: [50176 : i32, 1 : i32]tensor<[50176,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1568, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')shape: [45303552 : i32]tensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32]tensor<[1,256,7,25281,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32, 1 : i32]tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,28,28,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, f32>', 'dram')shape: [200704 : i32, 1 : i32]tensor<[200704,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6272, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')shape: [45303552 : i32]tensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32]tensor<[1,256,7,25281,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32, 1 : i32]tensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,7,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, f32>', 'dram')shape: [12544 : i32, 1 : i32]tensor<[12544,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (392, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')shape: [45303552 : i32]tensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32]tensor<[1,256,7,25281,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 25281 : i32, 1 : i32, 1 : i32]tensor<[1,1,25281,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 7 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,7,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[2,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 2 : i32, 1 : i32]tensor<[1,1,1,2,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 8 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,8,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 160 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,160,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 160 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,160,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.reshapetensor<[1,28,28,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 896 + d1 * 32 + d2, d3), memory_config: (28, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 784 : i32, 16 : i32]tensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 800 + d1 * 800 + d2, d3), memory_config: (25, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 14 : i32, 14 : i32, 16 : i32]tensor<[1,14,14,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,14,14,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 196 : i32, 4 : i32]tensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,49,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64 + d1 * 64 + d2, d3), memory_config: (2, 1, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 7 : i32, 7 : i32, 4 : i32]tensor<[1,7,7,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 32 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,14,14,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 17, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 1 : i32, 196 : i32, 528 : i32]tensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'dram')shape: [1 : i32, 14 : i32, 14 : i32, 528 : i32]tensor<[1,14,14,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 448 + d1 * 32 + d2, d3), memory_config: (14, 17, 'tile<32x32, bf16>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32]tensor<[1,256,7,25281,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')shape: [1 : i32, 256 : i32, 7 : i32, 25281 : i32]tensor<[1,256,7,25281,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, f32>', 'dram')nannan
ttnn.reshapetensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')shape: [1 : i32, 1 : i32, 1 : i32, 1 : i32]tensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
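
Many of the reshapes above collapse the two spatial dimensions into one (e.g. 512 x 512 = 262144) and later restore them, or expand a 1-D vector to a broadcastable 4-D shape. A minimal PyTorch-level sketch based on the first few rows (hypothetical inputs, illustration only, not the tt-torch API):

import torch

# Input matching the first row: [1, 512, 512, 128] in bf16
x = torch.randn(1, 512, 512, 128, dtype=torch.bfloat16)

# shape: [1, 1, 262144, 128] -- collapse the spatial dims (512 * 512 = 262144)
flat = x.reshape(1, 1, 262144, 128)

# the matching inverse reshape restores the original shape
restored = flat.reshape(1, 512, 512, 128)
assert restored.shape == x.shape

# a 1-D vector reshaped to [1, C, 1, 1] so it broadcasts over NCHW activations
bias = torch.randn(128, dtype=torch.bfloat16).reshape(1, 128, 1, 1)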

ttnn.slice

This table is a trace for the ttnn.slice op. Traces are generated from nightly tt-torch runs; see Nightly Runs for the latest results. A short PyTorch-level sketch of the equivalent slicing follows the table.

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.slicetensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')begins: [0 : i32, 0 : i32, 0 : i32, 0 : i32, 0 : i32]
ends: [1 : i32, 256 : i32, 7 : i32, 25281 : i32, 1 : i32]
step: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]
tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.slicetensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 2, 'f32', 'dram')begins: [0 : i32, 0 : i32, 0 : i32, 0 : i32, 1 : i32]
ends: [1 : i32, 256 : i32, 7 : i32, 25281 : i32, 2 : i32]
step: [1 : i32, 1 : i32, 1 : i32, 1 : i32, 1 : i32]
tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 1, 'f32', 'dram')nannan
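
Both rows select a single element of the last (size-2) dimension while keeping that dimension: begins/ends of [0, 1] and [1, 2] with step 1. A minimal PyTorch-level sketch using a smaller, hypothetical tensor of the same rank (illustration only, not the tt-torch API):

import torch

# hypothetical stand-in for the traced input of shape [1, 256, 7, 25281, 2]
x = torch.randn(1, 4, 7, 10, 2)

# begins [0, 0, 0, 0, 0], ends [..., 1], step 1: keep element 0 of the last dim
first = x[:, :, :, :, 0:1]    # shape [1, 4, 7, 10, 1]

# begins [..., 1], ends [..., 2]: keep element 1 of the last dim
second = x[:, :, :, :, 1:2]   # shape [1, 4, 7, 10, 1]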

ttnn.to_device

This table is a trace for the ttnn.to_device op. Traces are generated from nightly tt-torch runs; see Nightly Runs for the latest results. A brief note on what these rows represent follows the table.

Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.to_devicetensor<[1,1,262144,128,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<262144x128>>, >tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,262144,128,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<262144x128>>, >tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,196,16,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<196x16>>, >tensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,262144,256,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<262144x256>>, >tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,262144,256,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<262144x256>>, >tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,1048576,3,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 3, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1048576x3>>, >tensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 3, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,65536,480,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 480, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<65536x480>>, >tensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 480, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,65536,512,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<65536x512>>, >tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,65536,512,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<65536x512>>, >tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,1048576,64,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 64, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1048576x64>>, >tensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 64, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,65536,64,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 64, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<65536x64>>, >tensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 64, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,307200,64,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<307200x64>>, >tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,307200,64,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<307200x64>>, >tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,16384,960,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (16384, 960, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<16384x960>>, >tensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (16384, 960, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[3,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[16,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[250,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x8>>, >tensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1000000,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1000000, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1000000>>, >tensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1000000, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[3,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[768,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x24>>, >tensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[2027520,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2027520, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x2027520>>, >tensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2027520, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[3,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[4,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1200,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1200, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1200>>, >tensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1200, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[3,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[80,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x3>>, >tensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[24000,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24000, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x24000>>, >tensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24000, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[6,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[6,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[16,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[16,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[6,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[6,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[16,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[16,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[6,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[6,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[3,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[3,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[3,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x3>>, >tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[45303552,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x45303552>>, >tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[45303552,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x45303552>>, >tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[45303552,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x45303552>>, >tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'dram')nannan
ttnn.to_devicetensor<[4,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[25281,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x791>>, >tensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[25281,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x791>>, >tensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[7,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[7,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[5,1,f32]>
!ttnn.device
mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[5,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.to_devicetensor<[160,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x5>>, >tensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[160,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x5>>, >tensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[8,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[8,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,si32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.to_devicetensor<[1,ui32]>
!ttnn.device
mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1x1>>, >tensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.to_devicetensor<[1,1,784,16,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 784 + d1 * 784 + d2, d3), memory_config: (784, 16, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<784x16>>, >tensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 784 + d1 * 784 + d2, d3), memory_config: (784, 16, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,196,4,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 4, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<196x4>>, >tensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 4, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,1,196,528,bf16]>
!ttnn.device
mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<196x528>>, >tensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'dram')nannan
ttnn.to_devicetensor<[1,256,7,25281,2,f32]>
!ttnn.device
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 2, 'f32', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<45303552x2>>, >tensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 2, 'f32', 'dram')nannan
ttnn.to_devicetensor<[1,256,7,25281,1,f32]>
!ttnn.device
mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'system_memory')memory_config: #ttnn.memory_config<#dram, <<1417472x1>>, >tensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
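
For reference, a ttnn.to_device row like the ones above corresponds to moving a host (system_memory) tensor into device DRAM. The sketch below illustrates the same transition with the ttnn Python API; the function names and the memory-config constant are assumptions based on current ttnn releases, not something recorded in this trace.

```python
import torch
import ttnn

# Minimal sketch (assumed ttnn Python API): move a host tensor into device DRAM,
# mirroring a ttnn.to_device trace row such as tensor<[1,1,196,16,bf16]>.
device = ttnn.open_device(device_id=0)

host_tensor = ttnn.from_torch(torch.rand((1, 1, 196, 16), dtype=torch.bfloat16))
device_tensor = ttnn.to_device(host_tensor, device, memory_config=ttnn.DRAM_MEMORY_CONFIG)

ttnn.close_device(device)
```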

ttnn.to_layout

This table is a trace for the ttnn.to_layout op. Traces are generated from nightly tt-torch runs. To see nightly runs, visit: Nightly Runs

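As a rough illustration of what these rows capture, the sketch below converts a row-major host tensor to the 32x32 tile layout and back using the ttnn Python API; the function names and layout constants are assumptions drawn from current ttnn releases rather than from this trace.

```python
import torch
import ttnn

# Minimal sketch (assumed ttnn Python API): switch between row-major and tile layouts,
# the two layouts that appear throughout the ttnn.to_layout rows below.
host_tensor = ttnn.from_torch(torch.rand((1, 1, 196, 16), dtype=torch.bfloat16))  # row-major

tiled = ttnn.to_layout(host_tensor, ttnn.TILE_LAYOUT)       # row_major -> tile<32x32>
row_major = ttnn.to_layout(tiled, ttnn.ROW_MAJOR_LAYOUT)    # tile<32x32> -> row_major
```
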
Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.to_layouttensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 4, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,262144,128,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 128, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (8192, 8, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,262144,256,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 262144 + d1 * 262144 + d2, d3), memory_config: (262144, 256, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 1, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,1048576,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 3, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 15, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,65536,480,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 480, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 16, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,65536,512,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 512, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (32768, 2, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,1048576,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 1048576 + d1 * 1048576 + d2, d3), memory_config: (1048576, 64, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (2048, 2, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,65536,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 65536 + d1 * 65536 + d2, d3), memory_config: (65536, 64, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (9600, 2, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,307200,64,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 307200 + d1 * 307200 + d2, d3), memory_config: (307200, 64, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (512, 30, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,16384,960,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 16384 + d1 * 16384 + d2, d3), memory_config: (16384, 960, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')layout: #ttnn.layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 250, 'si32', 'system_memory')layout: #ttnn.layouttensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1000000, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 768, 'si32', 'system_memory')layout: #ttnn.layouttensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 2027520, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 4, 'si32', 'system_memory')layout: #ttnn.layouttensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1200, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (3, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 80, 'si32', 'system_memory')layout: #ttnn.layouttensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24000, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'system_memory')layout: #ttnn.layouttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')layout: #ttnn.layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'si32', 'system_memory')layout: #ttnn.layouttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'si32', 'system_memory')layout: #ttnn.layouttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 16, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[6,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[6,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'si32', 'system_memory')layout: #ttnn.layouttensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 45303552, 'ui32', 'system_memory')nannan
ttnn.to_layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (4, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[4,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'si32', 'system_memory')layout: #ttnn.layouttensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 25281, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'si32', 'system_memory')layout: #ttnn.layouttensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 7, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[5,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[5,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'system_memory')nannan
ttnn.to_layouttensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 160, 'si32', 'system_memory')layout: #ttnn.layouttensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'si32', 'system_memory')layout: #ttnn.layouttensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'si32', 'system_memory')layout: #ttnn.layouttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'ui32', 'system_memory')layout: #ttnn.layouttensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 800 + d1 * 800 + d2, d3), memory_config: (25, 1, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,784,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 784 + d1 * 784 + d2, d3), memory_config: (784, 16, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,196,16,bf16]>!ttnn.devicemapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 16, 'bf16', 'dram')layout: #ttnn.layouttensor<[1,1,196,16,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.to_layouttensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 1, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,196,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 4, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,49,4,bf16]>!ttnn.devicemapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 49 + d1 * 49 + d2, d3), memory_config: (49, 4, 'bf16', 'dram')layout: #ttnn.layouttensor<[1,1,49,4,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64 + d1 * 64 + d2, d3), memory_config: (2, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.to_layouttensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'system_memory')nannan
ttnn.to_layouttensor<[1,1,196,528,bf16]>!ttnn.devicemapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 196 + d1 * 196 + d2, d3), memory_config: (196, 528, 'bf16', 'dram')layout: #ttnn.layouttensor<[1,1,196,528,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 224 + d1 * 224 + d2, d3), memory_config: (7, 17, 'tile<32x32, bf16>', 'dram')nannan
ttnn.to_layouttensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'system_memory')layout: #ttnn.layout<row_major>tensor<[1,256,7,25281,2,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 2, 'f32', 'system_memory')nannan
ttnn.to_layouttensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45303552 + d1 * 176967 + d2 * 25281 + d3, d4), memory_config: (45303552, 1, 'f32', 'system_memory')layout: #ttnn.layouttensor<[1,256,7,25281,1,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'system_memory')nannan

ttnn.typecast

This table is a trace for the ttnn.typecast op. Traces are generated from nightly tt-torch runs. To see nightly runs, visit: Nightly Runs

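As context for the rows below, ttnn.typecast converts the element type of an on-device tensor without a host round trip. The sketch shows an si32-to-ui32 cast with the ttnn Python API; treat the exact call and dtype constants as assumptions, since the trace itself only records the TTNN dialect op.

```python
import torch
import ttnn

# Minimal sketch (assumed ttnn Python API): cast an int32 tensor to uint32 on device,
# matching rows such as tensor<[16,si32]> -> tensor<[16,ui32]> below.
device = ttnn.open_device(device_id=0)

t = ttnn.from_torch(torch.arange(16, dtype=torch.int32),
                    layout=ttnn.TILE_LAYOUT, device=device)
t_u32 = ttnn.typecast(t, ttnn.uint32)

ttnn.close_device(device)
```
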
Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL
ttnn.typecasttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[250,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[250,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 8, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,250,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8000 + d1 * 32 + d2, d3), memory_config: (250, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,250,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8000 + d1 * 32 + d2, d3), memory_config: (250, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,si32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 256 + d1, d2), memory_config: (128, 8, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 256 + d1, d2), memory_config: (128, 8, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[16,250,250,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,250,250,3,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 64000 + d1 * 256 + d2, d3), memory_config: (32000, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1000000,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[1000000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 31250, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[2048000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (64000, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[2048000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (64000, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1000000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (31250, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1000000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (31250, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,si32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 2656 + d1, d2), memory_config: (83, 24, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 2656 + d1, d2), memory_config: (83, 24, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[768,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[768,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 24, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 768 + d2, d3), memory_config: (24, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,768,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 768 + d1 * 768 + d2, d3), memory_config: (24, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,2640,768,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,2640,768,3,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 2027520 + d1 * 768 + d2, d3), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[2027520,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[2027520,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 63360, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[168960,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5280, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[168960,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (5280, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[2027520,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (63360, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[2027520,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (63360, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,si32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[4,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[4,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,4,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,4,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,4,3,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 9600 + d1 * 32 + d2, d3), memory_config: (300, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1200,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[1200,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 38, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[33600,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1050, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[33600,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1050, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1200,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (38, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1200,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (38, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,si32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 3, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,ui32]>mapping_from: (d0, d1, d2), mapping_to: (d0 * 320 + d1, d2), memory_config: (10, 3, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[80,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[80,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 3, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 96 + d2, d3), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,80,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 96 + d1 * 96 + d2, d3), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,1,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,3,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,300,80,3,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,300,80,3,f32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 28800 + d1 * 96 + d2, d3), memory_config: (900, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[24000,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[24000,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 750, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[672000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (21000, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[672000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (21000, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[24000,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (750, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[24000,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (750, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[192,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[192,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecastnannan
ttnn.typecasttensor<[6,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[6,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,6,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,6,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 32 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[16,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[16,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 512 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (16, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[192,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[192,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 6, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,192,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 192 + d1 * 192 + d2 * 192 + d3, d4), memory_config: (6, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,16,6,192,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,16,6,192,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 18432 + d1 * 1152 + d2 * 192 + d3, d4), memory_config: (576, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecastnannan
ttnn.typecasttensor<[3,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4 * 32 + d5, d6), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,6,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,6,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,3,1,6,si32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,3,1,6,f32]>mapping_from: (d0, d1, d2, d3, d4, d5, d6), mapping_to: (d0 * 96 + d1 * 96 + d2 * 96 + d3 * 96 + d4 * 32 + d5, d6), memory_config: (3, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[3,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[3,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[12,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[12,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[3,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[3,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[50176,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1568, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[50176,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1568, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[200704,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6272, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[200704,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (6272, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 8192 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (256, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,4,f32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 45359104 + d1 * 177184 + d2 * 25312 + d3, d4), memory_config: (1417472, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[45303552,f32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1415736, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[12544,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (392, 1, 'tile<32x32, f32>', 'dram')dtype: #tt.supportedDataTypestensor<[12544,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (392, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[45303552,1,bf16]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[45303552,1,f32]>mapping_from: (d0, d1), mapping_to: (d0, d1), memory_config: (1415736, 1, 'tile<32x32, f32>', 'dram')nannan
ttnn.typecasttensor<[25281,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[25281,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 791, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,25281,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,25281,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,25281,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,25281,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,25281,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,25281,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 808992 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (25281, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[7,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[7,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,7,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,7,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,7,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 224 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (7, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,7,25281,2,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[2,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[2,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,2,1,ui32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3, d4), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,7,25281,2,1,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,25281,2,1,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,7,25281,2,4,bf16]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,7,25281,2,4,si32]>mapping_from: (d0, d1, d2, d3, d4), mapping_to: (d0 * 5662944 + d1 * 808992 + d2 * 32 + d3, d4), memory_config: (176967, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[160,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[160,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[160,bf16]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[8,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[8,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,8,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,8,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,8,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,8,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,8,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,8,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 256 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (8, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 32 + d1 * 32 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,8,160,160,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[160,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[160,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 5, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,160,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,160,1,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 32 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,160,1,1,ui32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 5120 + d1 * 5120 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (160, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,8,160,160,1,1,si32]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,8,160,160,1,1,bf16]>mapping_from: (d0, d1, d2, d3, d4, d5), mapping_to: (d0 * 6553600 + d1 * 819200 + d2 * 5120 + d3 * 32 + d4, d5), memory_config: (204800, 1, 'tile<32x32, bf16>', 'dram')nannan
ttnn.typecasttensor<[1,si32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,ui32]>mapping_from: (d0), mapping_to: (0, d0), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')nannan
ttnn.typecasttensor<[1,1,1,1,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,1,1,1,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 32 + d1 * 32 + d2, d3), memory_config: (1, 1, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,ui32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, u32>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')nannan
ttnn.typecasttensor<[1,256,7,25281,bf16]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, bf16>', 'dram')dtype: #tt.supportedDataTypestensor<[1,256,7,25281,si32]>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')nannan
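
The typecast rows above typically come from plain dtype conversions in the source model: a `Tensor.to(dtype)` (or an implicit promotion) that survives the fx passes is lowered to a `ttnn.typecast` with layouts like those in the table. The sketch below is purely illustrative (hypothetical module and shape, not taken from the table) of the kind of eager code that can produce such rows; ui32 casts also appear in the traces, but eager PyTorch support for `torch.uint32` is limited, so the sketch sticks to si32/bf16/f32.

```python
import torch

# Illustrative module: each .to(...) is the kind of cast that can show up
# as a ttnn.typecast row once the model is compiled through tt-torch.
class Casts(torch.nn.Module):
    def forward(self, x):
        x = x.to(torch.int32)       # e.g. f32 -> si32
        x = x.to(torch.bfloat16)    # si32 -> bf16
        return x.to(torch.float32)  # bf16 -> f32


out = Casts()(torch.rand(1, 300, 4))  # small stand-in shape
```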

ttnn.where

This table contains traces for the ttnn.where op. Traces are generated from nightly tt-torch runs; to see them, visit Nightly Runs.
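
For reference, the row below is a three-input select: condition, true-branch, and false-branch tensors of shape [1,256,7,25281]. The sketch that follows shows eager code that lowers to ttnn.where through the tt-torch flow; the backend import path (`tt_torch.dynamo.backend.backend`) is an assumption and may differ in your install, and the shape is reduced to keep the example light.

```python
import torch

# Assumed import path for the tt-torch Dynamo backend; adjust it to match
# your installation (see the ResNet demo for the canonical usage).
from tt_torch.dynamo.backend import backend


class Select(torch.nn.Module):
    def forward(self, cond, x, y):
        # torch.where(cond, x, y) is the eager op that ends up as ttnn.where
        # after the torch -> StableHLO -> TTIR -> TTNN lowering.
        return torch.where(cond, x, y)


model = torch.compile(Select(), backend=backend)

# The traced row uses shape [1, 256, 7, 25281]; a smaller shape is used here
# purely for readability.
cond = torch.rand(1, 4, 7, 8) > 0.5
x = torch.randint(0, 10, (1, 4, 7, 8), dtype=torch.int32)
y = torch.randint(0, 10, (1, 4, 7, 8), dtype=torch.int32)

out = model(cond, x, y)
```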

| Name | Input Shapes | Input Layouts | Attributes | Output Shapes | Output Layouts | PCC | ATOL |
|---|---|---|---|---|---|---|---|
| ttnn.where | tensor<[1,256,7,25281,si32]><br>tensor<[1,256,7,25281,si32]><br>tensor<[1,256,7,25281,si32]> | mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')<br>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram')<br>mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram') | | tensor<[1,256,7,25281,si32]> | mapping_from: (d0, d1, d2, d3), mapping_to: (d0 * 8192 + d1 * 32 + d2, d3), memory_config: (256, 791, 'tile<32x32, si32>', 'dram') | nan | nan |