ttnn.softmax
ttnn.softmax(input_tensor, dim, *, memory_config=None, compute_kernel_config=None) → ttnn.Tensor
Compute softmax over input_tensor along dim.

Parameters:
    * input_tensor: the input tensor
    * dim: the dimension along which to compute softmax

Keyword Arguments:
    * memory_config: the memory configuration for the output tensor. If not provided, the memory configuration of the input tensor is used.
    * compute_kernel_config: the compute kernel configuration for the op. If not provided, the default configuration of the op is used.

Example:
    >>> tensor = ttnn.to_device(ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16)), device)
    >>> output = ttnn.softmax(tensor, -1)
    >>> print(output[0, 0, 0, :3])
    ttnn.Tensor([ 0.0310059, 0.0310059, 0.0310059], dtype=bfloat16 )
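
The keyword arguments above can be used to control where the result is placed. The snippet below is a minimal sketch and not part of the official example: it assumes an open device handle named device, that ttnn.L1_MEMORY_CONFIG is available as in other ttnn ops, and the tolerance and printed result are illustrative only.

    >>> import torch
    >>> import ttnn
    >>> torch_input = torch.rand((1, 1, 64, 32), dtype=torch.bfloat16)
    >>> tensor = ttnn.to_device(ttnn.from_torch(torch_input), device)
    >>> # place the output in L1 instead of inheriting the input's memory configuration
    >>> output = ttnn.softmax(tensor, -1, memory_config=ttnn.L1_MEMORY_CONFIG)
    >>> # rough bfloat16-level sanity check against torch.softmax
    >>> torch.allclose(ttnn.to_torch(ttnn.from_device(output)), torch.softmax(torch_input, -1), atol=1e-2)
    True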