ttnn.hardmish

ttnn.hardmish(input_tensor: ttnn.Tensor, *, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor

Applies hardmish to input_tensor element-wise.

\[\mathrm{output\_tensor}_i = \mathrm{input\_tensor}_i \times \frac{\min(\max(\mathrm{input\_tensor}_i + 2.8,\ 0),\ 5)}{5}\]
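For intuition, the formula can be evaluated with a minimal plain-Python sketch (illustrative only; the actual op runs as a device kernel):

# Reference evaluation of the Hard Mish formula above (not the ttnn kernel)
def hardmish_reference(x: float) -> float:
    return x * min(max(x + 2.8, 0.0), 5.0) / 5.0

# Worked value: for x = 1.0, min(max(3.8, 0), 5) = 3.8, so the output is
# 1.0 * 3.8 / 5 = 0.76.
assert abs(hardmish_reference(1.0) - 0.76) < 1e-9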
Parameters:

input_tensor (ttnn.Tensor) – the input tensor. [Supported range -20 to inf]

Keyword Arguments:
  • memory_config (ttnn.MemoryConfig, optional) – memory configuration for the operation. Defaults to None.

  • output_tensor (ttnn.Tensor, optional) – preallocated output tensor. Defaults to None. See the usage sketch after this list.
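As a sketch of how the keyword arguments compose (the DRAM memory config and the preallocated-output pattern here are illustrative choices, not requirements):

# Hypothetical sketch: route the result to DRAM and write into a
# preallocated tensor; `tensor` is the input from the example below.
preallocated = ttnn.zeros_like(tensor)
output = ttnn.hardmish(
    tensor,
    memory_config=ttnn.DRAM_MEMORY_CONFIG,
    output_tensor=preallocated,
)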

Returns:

ttnn.Tensor – the output tensor.

Note

Supported dtypes, layouts, and ranks:

Dtypes                 Layouts   Ranks
BFLOAT16, BFLOAT8_B    TILE      2, 3, 4

Computes the Hard Mish activation function. Hard Mish is a piecewise-linear approximation of the Mish activation function, offering improved computational efficiency while maintaining similar performance characteristics.
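To make the approximation concrete, this plain-PyTorch sketch compares the piecewise-linear form against true Mish, x * tanh(softplus(x)); it is illustrative only and does not touch the device:

import torch

x = torch.linspace(-4, 4, 9)

# True Mish: x * tanh(softplus(x))
mish = x * torch.tanh(torch.nn.functional.softplus(x))

# Hard Mish as defined above: x * min(max(x + 2.8, 0), 5) / 5
hard_mish = x * torch.clamp(x + 2.8, min=0.0, max=5.0) / 5.0

# The piecewise-linear curve roughly tracks true Mish, most closely
# for moderate positive inputs.
print(torch.stack([x, mish, hard_mish]))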

Example

import torch
import ttnn
from loguru import logger

# Open a device for the example (assumes device 0 is available)
device = ttnn.open_device(device_id=0)

# Create a tensor with specific values
tensor = ttnn.from_torch(
    torch.tensor([[-2.0, -1.0], [1.0, 2.0]], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device
)

# Apply Hard Mish activation function
output = ttnn.hardmish(tensor)
logger.info(f"Hard Mish: {output}")
ttnn.close_device(device)
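Per the formula above, the expected values for this input are approximately [[-0.32, -0.36], [0.76, 1.92]], before bfloat16 rounding.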