ttnn.to_device
ttnn.to_device(tensor, device, memory_config=ttnn.DRAM_MEMORY_CONFIG) → ttnn.Tensor
Copies the ttnn.Tensor `tensor` to the ttnn.MeshDevice. The tensor may be placed in DRAM or L1 memory.

Currently, memory_config must describe an interleaved tensor (not sharded).
:param tensor: the ttnn.Tensor
:param device: the ttnn.MeshDevice
:param memory_config: the optional MemoryConfig (DRAM_MEMORY_CONFIG or L1_MEMORY_CONFIG). Defaults to DRAM_MEMORY_CONFIG.

Example
```python
import torch
import ttnn
from loguru import logger

# Open the device
device_id = 0
device = ttnn.open_device(device_id=device_id)

# Create a TT-NN tensor and move it to the specified device
tensor_on_host = ttnn.from_torch(torch.randn((10, 64, 32)), dtype=ttnn.bfloat16)
ttnn_tensor = ttnn.to_device(tensor_on_host, device=device)
logger.info(f"TT-NN tensor shape after moving to device: {ttnn_tensor.shape}")
```
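To place the tensor in L1 rather than the default DRAM, the optional memory_config parameter can be passed explicitly. A minimal sketch (it assumes an open device as in the example above, and requires Tenstorrent hardware to actually run):

```python
import torch
import ttnn

# Assumes a device is available; device_id 0 is used for illustration
device = ttnn.open_device(device_id=0)

host_tensor = ttnn.from_torch(torch.randn((10, 64, 32)), dtype=ttnn.bfloat16)

# Request L1 placement instead of the default DRAM; the config must
# describe an interleaved (not sharded) tensor
l1_tensor = ttnn.to_device(host_tensor, device=device, memory_config=ttnn.L1_MEMORY_CONFIG)

ttnn.close_device(device)
```

L1 is the smaller on-core memory, so it suits tensors that are read frequently by compute kernels; larger or long-lived tensors are typically left in DRAM.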