ttnn.to_layout
Converts a ttnn.Tensor to either ttnn.ROW_MAJOR_LAYOUT or ttnn.TILE_LAYOUT.
When ttnn.ROW_MAJOR_LAYOUT is requested, the tensor is returned with any padding removed from the last two dimensions. When ttnn.TILE_LAYOUT is requested, the last two dimensions are automatically padded up to multiples of 32. If the tensor is already in the requested layout, the operation simply pads or unpads the last two dimensions as needed.
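The padding rule above can be illustrated with a small plain-Python sketch. Note that `tile_padded_shape` is a hypothetical helper written for this document, not part of the ttnn API; it only mirrors the round-up-to-32 behavior described here.

```python
def tile_padded_shape(shape, tile=32):
    """Round the last two dimensions up to multiples of the tile size,
    mirroring the padding ttnn.to_layout applies for ttnn.TILE_LAYOUT.
    (Illustrative helper only -- not a ttnn function.)"""
    *rest, h, w = shape
    pad = lambda d: ((d + tile - 1) // tile) * tile  # round up to next multiple
    return (*rest, pad(h), pad(w))

print(tile_padded_shape((10, 64, 32)))  # (10, 64, 32) -- already tile-aligned
print(tile_padded_shape((10, 60, 17)))  # (10, 64, 32) -- last two dims padded
```

So a (10, 64, 32) tensor, as in the example below, needs no padding, while odd-shaped tensors grow in their last two dimensions only.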
Args:

    tensor (ttnn.Tensor): the input tensor to convert.
    layout (ttnn.Layout): the desired layout, either ttnn.ROW_MAJOR_LAYOUT or ttnn.TILE_LAYOUT.
    dtype (ttnn.DataType, optional): the output data type.
    memory_config (ttnn.MemoryConfig, optional): the output memory configuration.
    device (ttnn.Device | ttnn.MeshDevice): the device/mesh device whose worker thread on the host should be used for the layout conversion.
Returns:

    ttnn.Tensor: the tensor with the requested layout.
Example:

    >>> import torch
    >>> import ttnn
    >>> device_id = 0
    >>> device = ttnn.open_device(device_id=device_id)
    >>> tensor = ttnn.to_device(ttnn.from_torch(torch.randn((10, 64, 32), dtype=torch.bfloat16)), device)
    >>> tensor = ttnn.to_layout(tensor, layout=ttnn.TILE_LAYOUT)
    >>> print(tensor[0, 0, :3])
    Tensor([1.42188, -1.25, -0.398438], dtype=bfloat16)