ttnn.softplus
- ttnn.softplus(input_tensor: ttnn.Tensor, *, beta: float = 1, threshold: float = 20, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor
Applies softplus to input_tensor element-wise.

\[\mathrm{output\_tensor}_i = \mathrm{softplus}(\mathrm{input\_tensor}_i)\]

- Parameters:
input_tensor (ttnn.Tensor) – the input tensor.
- Keyword Arguments:
beta (float, optional) – Scales the input before applying the softplus function. Modifying beta adjusts the steepness of the function: a higher beta value makes the function steeper, approaching a hard threshold like the ReLU function for large values of beta. Defaults to 1. See the reference sketch after the Returns section.
threshold (float, optional) – Used to switch to a linear function for large values to improve numerical stability; this avoids issues with floating-point representation for very large values. Defaults to 20.
memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.
output_tensor (ttnn.Tensor, optional) – Preallocated output tensor. Defaults to None.
- Returns:
ttnn.Tensor – the output tensor.
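For intuition, the following is a minimal PyTorch sketch of the beta/threshold semantics described above, assuming ttnn.softplus follows the standard parameterized softplus, softplus(x) = (1 / beta) * log(1 + exp(beta * x)), and reverts to the linear function when beta * x exceeds threshold. This is an illustration only, not the device kernel:

import torch

def softplus_reference(x: torch.Tensor, beta: float = 1.0, threshold: float = 20.0) -> torch.Tensor:
    # Illustrative only: softplus(x) = (1 / beta) * log(1 + exp(beta * x)).
    scaled = beta * x
    # Beyond the threshold, exp(beta * x) would overflow in low-precision
    # floats, and the exact result is already ~x, so use the linear branch.
    return torch.where(scaled > threshold, x, torch.log1p(torch.exp(scaled)) / beta)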
Note
Supported dtypes, layouts, and ranks:
| Dtypes              | Layouts | Ranks   |
|---------------------|---------|---------|
| BFLOAT16, BFLOAT8_B | TILE    | 2, 3, 4 |
Example
import torch
import ttnn
from loguru import logger

# Assumes a device has been opened, e.g. device = ttnn.open_device(device_id=0)

# Create a tensor with specific values
tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

# Apply the softplus activation function
output = ttnn.softplus(tensor, beta=1.0, threshold=20.0)
logger.info(f"Softplus: {output}")
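A further usage sketch with hypothetical values, reusing the imports and device from the example above: raising beta sharpens the transition at zero, approaching ReLU, while threshold is left at its default.

# Hypothetical comparison of beta settings on the same inputs.
x = ttnn.from_torch(
    torch.linspace(-4, 4, 32, dtype=torch.bfloat16).reshape(1, 32),
    dtype=ttnn.bfloat16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)
smooth = ttnn.softplus(x, beta=1.0, threshold=20.0)   # gentle curve near 0
sharp = ttnn.softplus(x, beta=10.0, threshold=20.0)   # close to relu(x)
logger.info(f"beta=1: {smooth}")
logger.info(f"beta=10: {sharp}")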