ttnn.sigmoid_accurate

ttnn.sigmoid_accurate(input_tensor: ttnn.Tensor, fast_and_approximate_mode: bool = False, *, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor

Applies sigmoid_accurate to input_tensor element-wise.

\[\mathrm{output\_tensor}_i = \mathrm{sigmoid}(\mathrm{input\_tensor}_i) = \frac{1}{1 + e^{-\mathrm{input\_tensor}_i}}\]
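The element-wise operation corresponds to the standard logistic function. A minimal host-side reference sketch of the math (plain Python, independent of the device implementation):

```python
import math

def sigmoid_reference(x: float) -> float:
    # Reference logistic function; ttnn.sigmoid_accurate computes this
    # element-wise on-device using the accurate exponential algorithm.
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid_reference(0.0))  # 0.5
```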
Parameters:
  • input_tensor (ttnn.Tensor) – the input tensor.

  • fast_and_approximate_mode (bool, optional) – Enables the fast, approximate mode for the internal exponential operation. When False, the accurate exponential algorithm is used. Defaults to False.

Keyword Arguments:
  • memory_config (ttnn.MemoryConfig, optional) – memory configuration for the operation. Defaults to None.

  • output_tensor (ttnn.Tensor, optional) – preallocated output tensor. Defaults to None.

Returns:

ttnn.Tensor – the output tensor.

Note

Supported dtypes, layouts, and ranks:

Dtypes               Layouts   Ranks
BFLOAT16, BFLOAT8_B  TILE      2, 3, 4

Example

import torch
import ttnn
from loguru import logger

device = ttnn.open_device(device_id=0)

# Create a bfloat16 tensor in TILE layout on the device
tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device
)

# Apply the accurate sigmoid activation function
output = ttnn.sigmoid_accurate(tensor)
logger.info(f"Sigmoid accurate: {output}")

# Explicitly request the accurate exponential (this is also the default)
output = ttnn.sigmoid_accurate(tensor, fast_and_approximate_mode=False)
logger.info(f"Sigmoid accurate (precise): {output}")
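To sanity-check the device results on the host, the same inputs can be run through torch.sigmoid as a reference; this sketch assumes the same input values as the example above and runs entirely on the host:

```python
import torch

# Same input values as the device example above
x = torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16)

# Host-side reference: compute in float32 for a tight baseline.
# ttnn.sigmoid_accurate output, converted back with ttnn.to_torch,
# should closely match this (up to bfloat16 precision).
expected = torch.sigmoid(x.to(torch.float32))
print(expected)
```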