ttnn.create_sharded_memory_config

ttnn.create_sharded_memory_config(shape: ttnn.Shape | Tuple[int, ...] | List[int], core_grid: ttnn.CoreGrid | ttnn.CoreRange, strategy: ShardStrategy, orientation: ShardOrientation | None = None, halo: bool = False, use_height_and_width_as_shard_shape: bool = False) → MemoryConfig

Creates a MemoryConfig object with a sharding spec, as required by sharded ops. Sharding is currently supported only for tensors in L1 memory.

Args:
  • shape: the shape of the tensor

  • core_grid: the core grid over which to distribute the sharded tensor (shards are written to each core's L1)

  • strategy: the sharding strategy: height, width, or block

  • orientation: the order in which to traverse the cores when reading/writing shards. Defaults to ttnn.ShardOrientation.ROW_MAJOR

  • halo: whether the shards have overlapping values. Defaults to False

  • use_height_and_width_as_shard_shape: if True, the height and width of the tensor are used directly as the shard shape. Defaults to False. If False, the shard shape is derived from the core_grid and the tensor shape, with the tensor viewed as a two-dimensional shape [math.prod(dims), width], i.e. all leading dimensions flattened into the height (see the sketch below)
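
A rough illustration of that derivation for the height-sharded case (a sketch only, using a hypothetical tensor shape and core grid, and ignoring tile-alignment and padding details; it mirrors the flattening rule stated above rather than reproducing the library's exact internal calculation)::

>>> import math
>>> dims = (2, 640, 64)                             # full tensor shape
>>> height, width = math.prod(dims[:-1]), dims[-1]  # viewed as [1280, 64]
>>> num_cores = 5 * 8                               # ttnn.CoreGrid(y=5, x=8)
>>> shard_shape = (height // num_cores, width)      # HEIGHT sharding: rows split across cores
>>> shard_shape
(32, 64)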

Example::
>>> memory_config = ttnn.create_sharded_memory_config((320, 64), ttnn.CoreGrid(y=5, x=8), ttnn.ShardStrategy.BLOCK, ttnn.ShardOrientation.ROW_MAJOR)
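
The resulting memory config is typically passed when moving a tensor onto the device. A minimal sketch, assuming a device handle already obtained from ttnn.open_device, torch available, and a tensor whose shape matches the config above::

>>> import torch
>>> torch_tensor = torch.randn(320, 64)
>>> sharded_tensor = ttnn.from_torch(
...     torch_tensor,
...     dtype=ttnn.bfloat16,
...     layout=ttnn.TILE_LAYOUT,
...     device=device,
...     memory_config=memory_config,  # places the tensor in L1, sharded across the 5x8 core grid
... )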