ttnn.pad
Returns a padded tensor with the specified value in the padded regions. If the input tensor is on host, the pad is performed on host; if it is on device, it is performed on device. Tensors of any rank are supported; however, for tensors with rank > 4, padding can only be applied to the lower 3 dimensions.
Parameters:
    input_tensor (ttnn.Tensor): the input tensor.
    padding (list[Tuple[int, int]]): padding to apply. Each element of padding should be a tuple of 2 integers, with the first integer specifying the number of values to add before the tensor and the second integer specifying the number of values to add after the tensor. Mutually exclusive with output_tensor_shape and input_tensor_start.
    value (Union[float, int]): value to pad with.

Keyword Arguments:
    use_multicore (Optional[bool]): whether to use the multicore implementation.
    memory_config (Optional[ttnn.MemoryConfig]): memory configuration for the operation. Defaults to None.
    queue_id (Optional[int]): command queue id. Defaults to 0.

Returns:
    List of ttnn.Tensor: the output tensor.
Example:

    # pad_input is assumed to have shape [1, 8, 20, 20]
    output_tensor = ttnn.pad(pad_input, [(0, 0), (0, 0), (0, 12), (0, 12)], 0)
    assert (ttnn.to_torch(output_tensor[:, :, 20:32, 20:32]) == 0).all()
    assert output_tensor.shape == Shape([1, 8, 32, 32])
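The (before, after) tuple format used by padding is the same convention NumPy uses for np.pad, so the example above can be sketched host-side with NumPy. The input shape and fill values below are illustrative assumptions, not taken from the ttnn documentation:

```python
import numpy as np

# Assumed analogue of the ttnn.pad example: a [1, 8, 20, 20] input,
# padded by 12 values after each of the last two dimensions.
pad_input = np.ones((1, 8, 20, 20), dtype=np.float32)
padded = np.pad(pad_input, [(0, 0), (0, 0), (0, 12), (0, 12)], constant_values=0)

# The resulting shape grows by (before + after) along each dimension.
assert padded.shape == (1, 8, 32, 32)
# The padded region holds the fill value (0 here); the original data is untouched.
assert (padded[:, :, 20:32, 20:32] == 0).all()
assert (padded[:, :, :20, :20] == 1).all()
```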