ttnn.slice
ttnn.slice = Operation(python_fully_qualified_name='ttnn.slice', function=<ttnn._ttnn.operations.data_movement.slice_t object>, preprocess_golden_function_inputs=<function default_preprocess_golden_function_inputs>, golden_function=None, postprocess_golden_function_outputs=<function default_postprocess_golden_function_outputs>, is_cpp_operation=True, is_experimental=False)

Returns a sliced tensor. If the input tensor is on host, the slice is performed on host; if it is on device, the slice is performed on device.
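For example, a minimal sketch of both paths (it assumes torch is installed and that opening a device with ttnn.open_device(device_id=0) matches your setup):

>>> import torch
>>> import ttnn
>>> device = ttnn.open_device(device_id=0)
>>> # Host tensor: no device argument, so the slice runs on host
>>> host_input = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16))
>>> host_output = ttnn.slice(host_input, [0, 0, 0, 0], [1, 1, 32, 32])
>>> # Device tensor: the slice runs on device
>>> device_input = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
>>> device_output = ttnn.slice(device_input, [0, 0, 0, 0], [1, 1, 32, 32])
>>> ttnn.close_device(device)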
Parameters:

input_tensor – Input tensor.
slice_start – Start index along each dimension. Each value must be < input_tensor_shape[i].
slice_end – End index (exclusive) along each dimension. Each value must be <= input_tensor_shape[i].
slice_step (Optional[List[int]] of length tensor rank) – Step size along each dimension. Defaults to None, which is equivalent to a step of 1 along every dimension. See the sketch after this list.
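Together, slice_start, slice_end, and slice_step follow Python-style slicing semantics along each dimension, roughly tensor[start:end:step] per dim. A minimal sketch of that correspondence, assuming torch and an already-opened device named device:

>>> torch_input = torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16)
>>> ttnn_input = ttnn.from_torch(torch_input, device=device)
>>> ttnn_output = ttnn.slice(ttnn_input, [0, 0, 0, 0], [1, 1, 64, 16], [1, 1, 2, 1])
>>> # Reference result via per-dimension torch slicing [0:1, 0:1, 0:64:2, 0:16]
>>> torch_reference = torch_input[0:1, 0:1, 0:64:2, 0:16]
>>> print(ttnn_output.shape)
[1, 1, 32, 16]
>>> print(list(torch_reference.shape))
[1, 1, 32, 16]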
Keyword Arguments:

memory_config (ttnn.MemoryConfig, optional) – Memory configuration of the output tensor.
queue_id (uint8, optional) – Command queue id.
pad_value – Optional value used to fill the tile padding of tiled output tensors. By default, padding values are left unmodified (and undefined). See the sketch after this list.
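A minimal sketch of the keyword arguments (it assumes ttnn.L1_MEMORY_CONFIG and ttnn.TILE_LAYOUT are available in your build, plus an already-opened device named device; pad_value only matters when the tiled output shape is not tile-aligned):

>>> tiled_input = ttnn.from_torch(torch.zeros((1, 1, 64, 64), dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device)
>>> # Output shape (1, 1, 32, 20) is not tile-aligned in the last dim, so fill the tile padding with zeros and keep the result in L1
>>> output = ttnn.slice(tiled_input, [0, 0, 0, 0], [1, 1, 32, 20], memory_config=ttnn.L1_MEMORY_CONFIG, pad_value=0.0)
>>> print(output.shape)
[1, 1, 32, 20]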
Returns:

ttnn.Tensor – the output tensor.
Example
>>> tensor = ttnn.slice(ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device), [0, 0, 0, 0], [1, 1, 64, 16], [1, 1, 2, 1])
>>> print(tensor.shape)
[1, 1, 32, 16]
>>> input = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
>>> output = ttnn.slice(input, [0, 0, 0, 0], [1, 1, 32, 32])
>>> print(output.shape)
[1, 1, 32, 32]