ttnn.from_torch

ttnn.from_torch(tensor: torch.Tensor, dtype: ttnn.DataType | None = None, *, tile: ttnn.Tile | None = None, pad_value: float | None = None, layout: ttnn.Layout | None = ttnn.ROW_MAJOR_LAYOUT, device: ttnn.Device | None = None, memory_config: ttnn.MemoryConfig | None = None, mesh_mapper: ttnn.TensorToMesh | None = None, cq_id: int | None = 0) → ttnn.Tensor

Converts the torch.Tensor tensor into a ttnn.Tensor. For the bfloat8_b or bfloat4_b formats, the function calls itself twice: the first call runs in bfloat16 and uses to_layout to convert from row-major to tile layout (padding the tensor if the input is not already tile padded); the second call runs in the desired format and skips to_layout, since for bfloat8_b and bfloat4_b the conversion to tile layout now happens during tensor creation (ttnn.Tensor).
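
For example, a bfloat8_b conversion is requested together with ttnn.TILE_LAYOUT so that the padding path described above is exercised. The snippet below is a minimal host-side sketch; the input shape is chosen only for illustration.

>>> import torch
>>> import ttnn
>>> torch_tensor = torch.randn((2, 3))  # not tile aligned; padded internally to the 32x32 tile
>>> bfp8_tensor = ttnn.from_torch(torch_tensor, dtype=ttnn.bfloat8_b, layout=ttnn.TILE_LAYOUT)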

Parameters:
  • tensor (torch.Tensor) – the input tensor.

  • dtype (ttnn.DataType, optional) – the desired ttnn data type. Defaults to None.

Keyword Arguments:
  • tile (ttnn.Tile, optional) – the desired tiling configuration for the tensor. Defaults to None.

  • pad_value (float, optional) – the desired padding value for tiling. Only used if layout is TILE_LAYOUT. Defaults to None.

  • layout (ttnn.Layout, optional) – the desired ttnn layout. Defaults to ttnn.ROW_MAJOR_LAYOUT.

  • device (ttnn.Device, optional) – the desired ttnn device. Defaults to None.

  • memory_config (ttnn.MemoryConfig, optional) – the desired ttnn memory configuration. Defaults to None.

  • mesh_mapper (ttnn.TensorToMesh, optional) – the desired ttnn mesh mapper. Defaults to None.

  • cq_id (int, optional) – the command queue ID to use. Defaults to 0.

Returns:

ttnn.Tensor – The resulting ttnn tensor.

Example

>>> import torch
>>> import ttnn
>>> tensor = ttnn.from_torch(torch.randn((2,3)), dtype=ttnn.bfloat16)
>>> print(tensor)
Tensor([[1.375, -1.30469, -0.714844],
    [-0.761719, 0.53125, -0.652344]], dtype=bfloat16)
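
The keyword arguments can also place the converted tensor directly on a device. The sketch below is illustrative rather than normative: it assumes device 0 is available via ttnn.open_device and uses ttnn.DRAM_MEMORY_CONFIG as the memory configuration.

>>> device = ttnn.open_device(device_id=0)  # assumes device 0 is available
>>> device_tensor = ttnn.from_torch(
...     torch.randn((2, 3)),
...     dtype=ttnn.bfloat16,
...     layout=ttnn.TILE_LAYOUT,            # pads rows/cols up to the 32x32 tile
...     pad_value=0.0,                      # value written into the tile padding
...     device=device,
...     memory_config=ttnn.DRAM_MEMORY_CONFIG,
... )
>>> ttnn.close_device(device)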