ttnn.prelu
ttnn.prelu(input_tensor_a: ttnn.Tensor, input_tensor_b: ttnn.Tensor or List[float] of length 1 or Number, *, memory_config: ttnn.MemoryConfig | None = None) → ttnn.Tensor
Perform an eltwise-prelu operation.
\[\mathrm{output\_tensor} = \verb|prelu|(\mathrm{input\_tensor\_a}, \mathrm{input\_tensor\_b})\]
Parameters:
input_tensor_a (ttnn.Tensor) – the input tensor.
input_tensor_b (ttnn.Tensor or List[float] of length 1 or Number) – the weight, i.e. the slope applied to negative elements of input_tensor_a.
Keyword Arguments:
memory_config (ttnn.MemoryConfig, optional) – memory configuration for the operation. Defaults to None.
Returns:
ttnn.Tensor – the output tensor.
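For context, the elementwise behavior is not spelled out above; assuming the standard PReLU definition (with the weight broadcast per channel), with \(x_i\) an element of input_tensor_a and \(w\) the corresponding weight:
\[\mathrm{output}_i = \begin{cases} x_i, & x_i \ge 0 \\ w \, x_i, & x_i < 0 \end{cases}\]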
Note
Supported dtypes, layouts, and ranks:
Dtypes                 Layouts   Ranks
BFLOAT16, BFLOAT8_B    TILE      2, 3, 4, 5
PReLU supports a weight (input_tensor_b) given as a scalar, a 1D list/array of size 1, or a 1D tensor whose size equals the second (channel) dimension of input_tensor_a.
Example
>>> tensor1 = ttnn.from_torch(torch.rand([1, 2, 32, 32], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device)
>>> tensor2 = ttnn.from_torch(torch.tensor([1, 2], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device)
>>> output = ttnn.prelu(tensor1, tensor2)
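The weight may also be passed as a plain number. Below is a minimal sketch (not part of the original example) of the scalar form, cross-checked against torch.nn.functional.prelu; the device handling via ttnn.open_device / ttnn.close_device is an assumption about the surrounding setup.

import torch
import ttnn

device = ttnn.open_device(device_id=0)  # assumption: a single Tenstorrent device at id 0

torch_input = torch.rand([1, 2, 32, 32], dtype=torch.bfloat16)
tt_input = ttnn.from_torch(torch_input, layout=ttnn.TILE_LAYOUT, device=device)

# Scalar weight: the same negative slope is applied to every channel.
tt_output = ttnn.prelu(tt_input, 0.25)

# PyTorch reference; torch.nn.functional.prelu expects the weight as a tensor.
torch_ref = torch.nn.functional.prelu(torch_input, torch.tensor([0.25], dtype=torch.bfloat16))
print(torch.allclose(ttnn.to_torch(tt_output), torch_ref, atol=1e-2))

ttnn.close_device(device)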