ttnn.slice

ttnn.slice(input_tensor, slice_start, slice_end, slice_step, *, memory_config, pad_value, sub_core_grids=None) → ttnn.Tensor

Returns a sliced tensor. If the input tensor is on host, the slice is performed on host; if it is on device, the slice is performed on device.

Parameters:
  • input_tensor – Input Tensor.

  • slice_start – Start indices of the slice along each dimension. Each value must be < input_tensor.shape[i].

  • slice_end – End indices of the slice along each dimension (exclusive). Each value must be ≤ input_tensor.shape[i].

  • slice_step – (Optional[List[int]], one entry per tensor dimension) Step size for each dimension. Defaults to None, which is equivalent to a step of 1 in every dimension.

Keyword Arguments:
  • memory_config – Memory configuration of the output tensor.

  • pad_value – Optional value used to fill padding for tiled tensors. By default, padding values are left unmodified (and are undefined).

  • sub_core_grids – (ttnn.CoreRangeSet, optional) Sub core grids. Defaults to None.

Returns:

ttnn.Tensor – the output tensor.

Example

# Create a tensor to slice
input_tensor = ttnn.rand((1, 1, 64, 32), dtype=ttnn.bfloat16, layout=ttnn.Layout.TILE, device=device)

# Slice the tensor
sliced_tensor = ttnn.slice(input_tensor, [0, 0, 0, 0], [1, 1, 64, 16], [1, 1, 2, 1])
logger.info(f"Sliced Tensor Shape: {sliced_tensor.shape}")  # Sliced Tensor Shape: Shape([1, 1, 32, 16])

# Create a tensor to slice without step
input_tensor = ttnn.rand((1, 1, 64, 32), dtype=ttnn.bfloat16, layout=ttnn.Layout.TILE, device=device)
output = ttnn.slice(input_tensor, [0, 0, 0, 0], [1, 1, 32, 32])
logger.info(f"Sliced Tensor Shape: {output.shape}")  # Sliced Tensor Shape: Shape([1, 1, 32, 32])
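The start/end/step semantics above mirror standard Python slicing, where each dimension i is sliced as start[i]:end[i]:step[i]. A NumPy sketch of the two calls above for illustration (NumPy is not part of the ttnn API; it is used here only to show the expected shapes):

```python
import numpy as np

# Stand-in for the (1, 1, 64, 32) input tensor used in the examples above.
a = np.arange(1 * 1 * 64 * 32, dtype=np.float32).reshape(1, 1, 64, 32)

# Equivalent of ttnn.slice(input_tensor, [0, 0, 0, 0], [1, 1, 64, 16], [1, 1, 2, 1]):
# dim 2 takes every second row of 0..64 (32 rows), dim 3 takes columns 0..16.
stepped = a[0:1, 0:1, 0:64:2, 0:16:1]
print(stepped.shape)  # (1, 1, 32, 16)

# Equivalent of ttnn.slice(input_tensor, [0, 0, 0, 0], [1, 1, 32, 32])
# with the default step of 1 in every dimension.
plain = a[0:1, 0:1, 0:32, 0:32]
print(plain.shape)  # (1, 1, 32, 32)
```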