ttnn.relu_min

ttnn.relu_min(input_tensor: ttnn.Tensor, lower_limit: float, *, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor

Applies relu_min to input_tensor element-wise, using lower_limit as the floor of the activation.

This carries out the ReLU operation with lower_limit as the minimum value instead of the standard 0.

\[\mathrm{output\_tensor}_i = \verb|relu_min|(\mathrm{input\_tensor}_i, \verb|lower_limit|)\]
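For intuition, here is a host-side PyTorch sketch of the element-wise behavior. It takes the description above literally (ReLU whose floor is lower_limit rather than 0), so treat it as an illustrative assumption, not the kernel's definition:

import torch

def relu_min_reference(x: torch.Tensor, lower_limit: float) -> torch.Tensor:
    # Assumption: relu_min clamps every element from below at lower_limit,
    # i.e. ReLU with its minimum moved from 0 up to lower_limit.
    return torch.clamp(x, min=lower_limit)

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(relu_min_reference(x, 3.0))  # tensor([[3., 3.], [3., 4.]])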
Parameters:
  • input_tensor (ttnn.Tensor) – the input tensor.

  • lower_limit (float) – the minimum value for the ReLU function.

Keyword Arguments:
  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

  • output_tensor (ttnn.Tensor, optional) – preallocated output tensor. Defaults to None.

Returns:

ttnn.Tensor – the output tensor.
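Both keyword arguments can be exercised together. A minimal sketch, assuming a bfloat16 TILE-layout device tensor named tensor like the one in the example below; ttnn.zeros_like and ttnn.L1_MEMORY_CONFIG are assumed to be available in this TTNN build:

# Preallocate the destination and keep the result in L1 memory
preallocated = ttnn.zeros_like(tensor)
ttnn.relu_min(
    tensor,
    2.0,
    memory_config=ttnn.L1_MEMORY_CONFIG,  # assumed memory config constant
    output_tensor=preallocated,           # result is written into this tensor
)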

Note

Supported dtypes, layouts, and ranks:

Dtypes     Layouts   Ranks
---------  --------  --------
BFLOAT16   TILE      2, 3, 4

System memory is not supported.
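Because only the TILE layout is supported, a row-major device tensor should be converted before the call. A short sketch using ttnn.to_layout, assuming an open device as in the example below (shapes are illustrative):

# Row-major tensors must be tilized before calling relu_min
rm_tensor = ttnn.from_torch(torch.rand(2, 32, 32, dtype=torch.bfloat16), device=device)
tiled = ttnn.to_layout(rm_tensor, ttnn.TILE_LAYOUT)
result = ttnn.relu_min(tiled, 0.5)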

Example

import torch
import ttnn
from loguru import logger

# Open the device the tensor will live on
device = ttnn.open_device(device_id=0)

# Create a tensor with specific values
tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)
lower_limit = 3.0

# Apply ReLU with lower_limit as the floor instead of 0
output = ttnn.relu_min(tensor, lower_limit)
logger.info(f"ReLU min: {output}")

ttnn.close_device(device)