ttnn.from_torch

Converts a torch.Tensor into a ttnn.Tensor. For the bfloat8_b and bfloat4_b formats, the conversion effectively runs twice: the first pass runs in bfloat16 and calls to_layout to convert from row-major to tile layout (so the tensor is padded if the input is not already tile-padded); the second pass runs in the requested format and skips to_layout, since bfloat8_b and bfloat4_b tensors are now converted to tile layout during tensor creation (ttnn.Tensor).
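
When the target format is bfloat8_b or bfloat4_b, the tensor must end up in tile layout. A minimal sketch of that path (the 32x32 shape is an illustrative assumption; the block-float formats require TILE_LAYOUT):

>>> import torch
>>> import ttnn
>>> torch_tensor = torch.randn((32, 32))
>>> tt_tensor = ttnn.from_torch(torch_tensor, dtype=ttnn.bfloat8_b, layout=ttnn.TILE_LAYOUT)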

Parameters:
  • tensor (torch.Tensor) – the input tensor.

  • dtype (ttnn.DataType, optional) – the desired ttnn data type. Defaults to None.

Keyword Arguments:
  • tile (ttnn.Tile, optional) – the desired tiling configuration for the tensor. Defaults to None.

  • pad_value (float, optional) – the desired padding value for tiling. Only used if layout is TILE_LAYOUT. Defaults to None.

  • layout (ttnn.Layout, optional) – the desired ttnn layout. Defaults to ttnn.ROW_MAJOR_LAYOUT.

  • device (ttnn.MeshDevice, optional) – the desired ttnn device. If provided, the tensor is placed on this device (see the combined sketch after this list). Defaults to None.

  • memory_config (ttnn.MemoryConfig, optional) – the desired ttnn memory configuration. Defaults to None.

  • mesh_mapper (ttnn.TensorToMesh, optional) – the desired ttnn mesh mapper. Defaults to None.

  • cq_id (int, optional) – the command queue ID to use. Defaults to 0.
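
Taken together, the layout, device, and memory_config keyword arguments let a host tensor be tilized and placed in device memory in one call. A minimal sketch, assuming a device opened with ttnn.open_device (device id 0 and the 32x64 shape are illustrative choices, not requirements):

>>> import torch
>>> import ttnn
>>> device = ttnn.open_device(device_id=0)
>>> tt_tensor = ttnn.from_torch(
...     torch.randn((32, 64)),
...     dtype=ttnn.bfloat16,
...     layout=ttnn.TILE_LAYOUT,
...     device=device,
...     memory_config=ttnn.L1_MEMORY_CONFIG,
... )
>>> ttnn.close_device(device)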

Returns:

ttnn.Tensor – The resulting ttnn tensor.

Example

>>> import torch
>>> import ttnn
>>> tensor = ttnn.from_torch(torch.randn((2,3)), dtype=ttnn.bfloat16)
>>> print(tensor)
Tensor([[1.375, -1.30469, -0.714844],
    [-0.761719, 0.53125, -0.652344]], dtype=bfloat16)
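
The returned tensor can be converted back to a torch.Tensor with ttnn.to_torch; a round-trip sketch (the input values are random, so only the shape is shown):

>>> torch_input = torch.randn((2, 3))
>>> tt_tensor = ttnn.from_torch(torch_input, dtype=ttnn.bfloat16)
>>> torch_output = ttnn.to_torch(tt_tensor)
>>> torch_output.shape
torch.Size([2, 3])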