ttnn.interleaved_to_sharded

ttnn.interleaved_to_sharded(input_tensor, grid, shard_shape, shard_scheme, shard_orientation, *, output_dtype=None)

Converts a tensor from an interleaved to a sharded memory layout.

:param input_tensor: Input tensor.
:type input_tensor: ttnn.Tensor
:param grid: Grid of the sharded tensor.
:type grid: ttnn.CoreGrid
:param shard_shape: Shard shape.
:type shard_shape: List[int[2]]
:param shard_scheme: Sharding scheme (height, width, or block).
:type shard_scheme: ttl.tensor.TensorMemoryLayout
:param shard_orientation: Shard orientation (ROW or COL major).
:type shard_orientation: ttl.tensor.ShardOrientation
:param sharded_memory_config: Instead of shard_shape, shard_scheme, and shard_orientation, you can provide a single MemoryConfig describing the sharded tensor.
:type sharded_memory_config: MemoryConfig

:keyword output_dtype: Output data type; defaults to the input tensor's data type.
:kwtype output_dtype: Optional[ttnn.DataType]

Example 1 (using grid, shard_shape, shard_scheme, and shard_orientation):

>>> sharded_tensor = ttnn.interleaved_to_sharded(tensor, ttnn.CoreGrid(3, 3), [32, 32], ttl.tensor.TensorMemoryLayout.HEIGHT_SHARDED, ttl.tensor.ShardOrientation.ROW_MAJOR)

Example 2 (using a sharded memory config):
>>> sharded_memory_config_args = dict(
    core_grid=ttnn.CoreRangeSet(
        {
            ttnn.CoreRange(
                ttnn.CoreCoord(0, 0), ttnn.CoreCoord(1, 1)
            ),
        }
    ),
    strategy=ttnn.ShardStrategy.BLOCK,
)
>>> sharded_memory_config = ttnn.create_sharded_memory_config(input_shape, **sharded_memory_config_args)
>>> sharded_tensor = ttnn.interleaved_to_sharded(tensor, sharded_memory_config)
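The shard shape is constrained by the tensor shape and the core grid: height sharding splits rows across cores, width sharding splits columns, and block sharding tiles the tensor in 2D over the grid. A minimal sketch of that relationship (the helper below is hypothetical and only illustrative, not part of the ttnn API):

```python
# Hypothetical helper (not part of ttnn) showing how a shard shape
# follows from the tensor shape, the core grid, and the scheme.

def compute_shard_shape(tensor_shape, grid, scheme):
    """tensor_shape: (height, width); grid: (grid_y, grid_x)."""
    height, width = tensor_shape
    grid_y, grid_x = grid
    num_cores = grid_y * grid_x
    if scheme == "height":
        # Height sharding: rows split evenly across all cores.
        return (height // num_cores, width)
    if scheme == "width":
        # Width sharding: columns split evenly across all cores.
        return (height, width // num_cores)
    if scheme == "block":
        # Block sharding: one 2D tile of the tensor per core.
        return (height // grid_y, width // grid_x)
    raise ValueError(f"unknown scheme: {scheme}")

# With a 3x3 grid and height sharding, a 288x32 tensor gives the
# [32, 32] per-core shard shape used in Example 1 above.
```

Whichever way the shard parameters are specified, each shard dimension should divide the corresponding tensor dimension evenly across the grid.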