ttnn.to_device
- ttnn.to_device = Operation(python_fully_qualified_name='ttnn.to_device', function=<function PyCapsule.to_device>, preprocess_golden_function_inputs=<function default_preprocess_golden_function_inputs>, golden_function=<function _golden_function>, postprocess_golden_function_outputs=<function default_postprocess_golden_function_outputs>, is_cpp_operation=False, is_experimental=False)
Copies the ttnn.Tensor tensor to the tt_lib.device.MeshDevice. The tensor may be placed in DRAM or L1 memory.
Currently, memory_config must describe an interleaved (not sharded) tensor layout.
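For illustration, both built-in configs are interleaved, so either is a valid memory_config here. Below is a minimal sketch; the explicit ttnn.MemoryConfig construction with ttnn.TensorMemoryLayout and ttnn.BufferType is an assumption about the ttnn Python API, shown only to make the interleaved requirement concrete:

>>> # Built-in interleaved configs accepted by ttnn.to_device
>>> dram_interleaved = ttnn.DRAM_MEMORY_CONFIG
>>> l1_interleaved = ttnn.L1_MEMORY_CONFIG
>>> # Assumed equivalent explicit construction of an interleaved L1 config
>>> explicit_l1 = ttnn.MemoryConfig(ttnn.TensorMemoryLayout.INTERLEAVED, ttnn.BufferType.L1)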
:param tensor: the ttnn.Tensor
:param device: the ttnn.MeshDevice
:param memory_config: the optional MemoryConfig (DRAM_MEMORY_CONFIG or L1_MEMORY_CONFIG). Defaults to DRAM_MEMORY_CONFIG.

Example:
>>> device_id = 0
>>> device = ttnn.open_device(device_id=device_id)
>>> tensor_on_host = ttnn.from_torch(torch.randn((10, 64, 32)), dtype=ttnn.bfloat16)
>>> tensor_on_device = ttnn.to_device(tensor_on_host, device, memory_config=ttnn.L1_MEMORY_CONFIG)
>>> print(tensor_on_device[0,0,:3])
Tensor([ 0.800781, -0.455078, -0.585938], dtype=bfloat16 )
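To round out the example, the tensor can also be placed with the default DRAM config and moved back to host. This is a minimal sketch assuming ttnn.from_device, ttnn.to_torch, and ttnn.close_device from the ttnn Python API; it is not part of ttnn.to_device itself:

>>> # Omitting memory_config falls back to DRAM_MEMORY_CONFIG
>>> tensor_in_dram = ttnn.to_device(tensor_on_host, device)
>>> # Assumed reverse path: copy back to host, then convert to a torch.Tensor
>>> tensor_back_on_host = ttnn.from_device(tensor_in_dram)
>>> torch_tensor = ttnn.to_torch(tensor_back_on_host)
>>> ttnn.close_device(device)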