ttnn.as_tensor
ttnn.as_tensor(tensor: torch.Tensor, dtype: ttnn.DataType | None = None, *, layout: ttnn.Layout | None = ttnn.ROW_MAJOR_LAYOUT, device: ttnn.Device | None = None, memory_config: ttnn.MemoryConfig | None = None, cache_file_name: str | pathlib.Path | None = None, preprocess: Callable[[ttnn.Tensor], ttnn.Tensor] | None = None, mesh_mapper: ttnn.TensorToMesh | None = None, use_device_tilizer: bool | None = False) → ttnn.Tensor
Converts the torch.Tensor tensor into a ttnn.Tensor.
Parameters:
tensor (torch.Tensor) – the input tensor.
dtype (ttnn.DataType, optional) – The ttnn data type. Defaults to None.
Keyword Arguments:
layout (ttnn.Layout, optional) – The ttnn layout. Defaults to ttnn.ROW_MAJOR_LAYOUT.
device (ttnn.Device, optional) – The ttnn device. Defaults to None.
memory_config (ttnn.MemoryConfig, optional) – The ttnn memory configuration. Defaults to None.
cache_file_name (str | pathlib.Path, optional) – The cache file name. Defaults to None.
preprocess (Callable[[ttnn.Tensor], ttnn.Tensor], optional) – The function to preprocess the tensor before serializing/converting to ttnn. Defaults to None.
mesh_mapper (ttnn.TensorToMesh, optional) – The TensorToMesh mapper defining how the torch tensor is distributed across a multi-device mesh. Defaults to None.
use_device_tilizer (bool, optional) – Whether to tilize on device instead of on host. Defaults to False. On Grayskull, the on-device tilizer truncates mantissa bits for bfp* formats. On Wormhole, the on-device tilizer raises a runtime error (RTE) for bfp8 but truncates for bfp4/2 formats.
Returns:
ttnn.Tensor – The resulting ttnn tensor.
Examples
>>> tensor = ttnn.as_tensor(torch.randn((2,3)), dtype=ttnn.bfloat16)
>>> print(tensor)
Tensor([[1.375, -1.30469, -0.714844], [-0.761719, 0.53125, -0.652344]], dtype=bfloat16)
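The cache_file_name and preprocess arguments can be combined so the converted (and preprocessed) tensor is serialized to disk on the first run and loaded from the cache on subsequent runs. The sketch below is illustrative, not from this page: the file path, weight shape, and the permute preprocessing step are assumptions, and it requires an attached Tenstorrent device.

```python
import pathlib

import torch
import ttnn

# A device is needed so the cached tensor can be tilized and placed on device.
device = ttnn.open_device(device_id=0)

# Illustrative host-side weight; shape and values are placeholders.
torch_weight = torch.randn((32, 64))

# The first call converts, preprocesses, and writes the cache file; later calls
# with the same cache_file_name load the serialized tensor instead of reconverting.
weight = ttnn.as_tensor(
    torch_weight,
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
    cache_file_name=pathlib.Path("weights/linear_weight"),  # illustrative path
    preprocess=lambda t: ttnn.permute(t, (1, 0)),  # illustrative preprocess step
)

ttnn.close_device(device)
```

Because preprocess runs before serialization, the cached file already contains the transformed tensor, so the (possibly expensive) preprocessing cost is paid only once.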