ttnn.tilize

ttnn.tilize(input_tensor: ttnn.Tensor, *, memory_config: ttnn.MemoryConfig = None, dtype: data type = None, use_multicore: bool = True, use_low_perf: bool = False, sub_core_grids: CoreRangeSet = using the entire device) → ttnn.Tensor

Changes the data layout of the input tensor to TILE.

The input tensor must be on a TT accelerator device, in ROW_MAJOR layout, and have the BFLOAT16 data type.

The output tensor will be on the TT accelerator device, in TILE layout, and have the BFLOAT16 data type.
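To illustrate what the TILE layout means, here is a simplified NumPy sketch (an illustration only, not ttnn's actual implementation; the real on-device tile format involves further internal subdivision) that reorders a row-major matrix into 32x32-tile order, with each tile stored contiguously:

```python
import numpy as np

def tilize_reference(x: np.ndarray, tile: int = 32) -> np.ndarray:
    """Reorder a row-major (H, W) array into tile order.

    Each tile x tile block is stored contiguously, and the blocks
    themselves are laid out row-major. H and W are assumed to be
    multiples of the tile size (hypothetical helper for illustration).
    """
    h, w = x.shape
    assert h % tile == 0 and w % tile == 0
    # Split into (H/t, t, W/t, t), group tile axes together, flatten.
    return (
        x.reshape(h // tile, tile, w // tile, tile)
        .transpose(0, 2, 1, 3)
        .reshape(-1)
    )

x = np.arange(64 * 64).reshape(64, 64)
flat = tilize_reference(x)
# The first 32*32 elements of `flat` are the top-left 32x32 block of `x`.
```

Storing each 32x32 tile contiguously is what lets the device's matrix engines consume a whole tile as a single unit, which is why TILE layout is required by most compute ops.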

Parameters:

input_tensor (ttnn.Tensor) – The input tensor.

Keyword Arguments:
  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

  • dtype (data type, optional) – Data type of the output tensor. Defaults to None.

  • use_multicore (bool, optional) – Whether to use multicore. Defaults to True.

  • use_low_perf (bool, optional) – Use a low performance version that uses less memory. USE ONLY IF ABSOLUTELY NEEDED IN MODELS. Defaults to False.

  • sub_core_grids (CoreRangeSet, optional) – Restricts tilize to a given set of cores. Defaults to using the entire device.

Returns:

ttnn.Tensor – The output tensor.

Example

import ttnn
from loguru import logger

device = ttnn.open_device(device_id=0)

# Create a ROW_MAJOR bfloat16 tensor to tilize
input_tensor = ttnn.rand((1, 1, 64, 32), dtype=ttnn.bfloat16, layout=ttnn.ROW_MAJOR_LAYOUT, device=device)

# Tilize the tensor
tilized_tensor = ttnn.tilize(input_tensor)
logger.info(f"Tilized Tensor Shape: {tilized_tensor.shape}")

ttnn.close_device(device)