ttnn.clamp

ttnn.clamp(input_tensor: ttnn.Tensor, *, min: ttnn.Tensor or number = None, max: ttnn.Tensor or number = None, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor

Applies clamp to input_tensor element-wise, limiting each element to the range defined by min and max.

Parameters:

input_tensor (ttnn.Tensor) – the input tensor.

Keyword Arguments:
  • min (ttnn.Tensor or number) – Minimum value. Defaults to None.

  • max (ttnn.Tensor or number) – Maximum value. Defaults to None.

  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

  • output_tensor (ttnn.Tensor, optional) – Preallocated output tensor. Defaults to None.

Returns:

ttnn.Tensor – the output tensor.

Note

Supported dtypes, layouts, and ranks:

Dtypes                                 Layouts    Ranks
BFLOAT16, BFLOAT8_B, INT32, FLOAT32    TILE       2, 3, 4

INT32 is supported only for the tensor-scalar-scalar variant, i.e. when both min and max are scalars.
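
A minimal sketch of that tensor-scalar-scalar variant with an INT32 input follows; it assumes torch, ttnn, and an open device as set up in the example below, and the values are illustrative only.

# Hedged sketch: INT32 input clamped with scalar bounds only
# (tensor min/max bounds are not supported for INT32)
int_input = ttnn.from_torch(
    torch.tensor([[-5, 0], [7, 12]], dtype=torch.int32),
    dtype=ttnn.int32,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)
int_output = ttnn.clamp(int_input, min=-2, max=10)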

Example

import torch
import ttnn
from loguru import logger

# Open a device for the examples below
device = ttnn.open_device(device_id=0)

# Create tensors for clamping with tensor bounds
input_tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)
min_tensor = ttnn.from_torch(
    torch.tensor([[0, 2], [0, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)
max_tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

# Clamp values using tensor bounds
output = ttnn.clamp(input_tensor, min=min_tensor, max=max_tensor)
logger.info(f"Clamp with tensor bounds: {output}")

# Create tensor for clamping with scalar bounds
input_tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

# Clamp values using scalar bounds
output = ttnn.clamp(input_tensor, min=2, max=9)
logger.info(f"Clamp with scalar bounds: {output}")