ttnn.create_sharded_memory_config

ttnn.create_sharded_memory_config(shape: Shape | Tuple[int, ...] | List[int], core_grid: CoreGrid | CoreRange, strategy: ShardStrategy, orientation: ShardOrientation | None = None, halo: bool = False, use_height_and_width_as_shard_shape: bool = False) → MemoryConfig

Creates a MemoryConfig object with a sharding spec, which is required for sharded operations.

Parameters:
  • shape (ttnn.Shape | Tuple[int, ...] | List[int]) – the shape of the tensor.

  • core_grid (ttnn.CoreGrid | ttnn.CoreRange) – the core grid over which to distribute the sharded tensor (shards are written to each core's L1 memory).

  • strategy (ttnn.ShardStrategy) – the sharding strategy: height, width, or block.

  • orientation (ttnn.ShardOrientation, optional) – the order in which to traverse the cores when reading/writing shards. Defaults to None.

  • halo (bool, optional) – whether the shards have overlapping (halo) values. Defaults to False.

  • use_height_and_width_as_shard_shape (bool, optional) – if True, the height and width of the tensor are used as the shard shape. If False, the shard shape is calculated from the core_grid and the tensor shape, with the tensor shape treated as [math.prod(dims), width] (see the sketch after this list). Defaults to False.

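When use_height_and_width_as_shard_shape is False, the shard shape is derived from the tensor shape and the core grid. The following is a minimal sketch of that derivation; the exact mapping of grid rows/columns onto the height/width splits is an assumption here, and approximate_shard_shape is a hypothetical helper, not part of ttnn:

import math

def approximate_shard_shape(tensor_shape, grid_y, grid_x, strategy):
    # The tensor shape is treated as [math.prod(dims), width]: all leading
    # dimensions are flattened into the height before sharding.
    height = math.prod(tensor_shape[:-1])
    width = tensor_shape[-1]
    if strategy == "height":   # rows split across every core in the grid
        return (height // (grid_y * grid_x), width)
    if strategy == "width":    # columns split across every core in the grid
        return (height, width // (grid_y * grid_x))
    # block: rows split across grid rows, columns across grid columns
    return (height // grid_y, width // grid_x)

# e.g. a (320, 64) tensor block-sharded on a 5 x 8 core grid -> (64, 8) shards
print(approximate_shard_shape((320, 64), 5, 8, "block"))
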
Returns:

ttnn.MemoryConfig – the MemoryConfig object.

Note

Currently, sharding only supports tensors stored in L1 memory.

Example

>>> memory_config = ttnn.create_sharded_memory_config(shape=(320, 64), core_grid=ttnn.CoreGrid(y=5, x=8), strategy=ttnn.ShardStrategy.BLOCK, orientation=ttnn.ShardOrientation.ROW_MAJOR, halo=False)
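
The returned memory config is then passed to a tensor-creating or conversion op. A possible follow-up, assuming a device has already been opened and tensor is an existing (320, 64) interleaved device tensor (both names are illustrative):

>>> sharded_tensor = ttnn.to_memory_config(tensor, memory_config)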