ttnn.prod

ttnn.prod(input_tensor: ttnn.Tensor, *, dim: int = None, dims: List[int] = None, keepdim: bool = False, memory_config: ttnn.MemoryConfig = None) -> List of ttnn.Tensor

Computes the product of all elements along the specified dim of the input_tensor.

If no dim is provided (or dim is set to None), the product is computed over every element of the input_tensor.

If keepdim is True, the output tensor has the same rank as the input_tensor, with the specified dim reduced to size 1. Otherwise, the reduced dim is squeezed, so the output tensor has one dimension fewer than the input_tensor.
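As an illustrative analogue (using NumPy rather than ttnn), the keepdim semantics described above look like this when reducing dim 0 of a (2, 3) tensor:

```python
import numpy as np

x = np.arange(1, 7, dtype=np.float32).reshape(2, 3)

# keepdim=True: the reduced dim becomes size 1, so rank is preserved.
kept = np.prod(x, axis=0, keepdims=True)   # shape (1, 3)

# keepdim=False (the default): the reduced dim is squeezed out.
squeezed = np.prod(x, axis=0)              # shape (3,)

print(kept.shape, squeezed.shape)  # (1, 3) (3,)
```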

prod is also overloaded with a specialized NC variant of this function, with the following definition:

ttnn.prod(input_tensor: ttnn.Tensor, output_tensor: ttnn.Tensor, dims: List[int], memory_config: Optional[ttnn.MemoryConfig] = None) -> ttnn.Tensor

This variant accepts a list of dims instead of a single dim, requires a preallocated output_tensor, and does not support keepdim. It is intended only for reducing the NC dimensions (0 and 1).

Parameters:

input_tensor (ttnn.Tensor) – the input tensor.

Keyword Arguments:
  • dim (int, optional) – Dimension to perform prod. Defaults to None.

  • dims (List[int], optional) – Dimensions to perform prod. Defaults to None. Mutually exclusive with dim.

  • keepdim (bool, optional) – keep original dimension size. Defaults to False.

  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

Returns:

List of ttnn.Tensor – the output tensor.

Note

The input_tensor supports the following data type and layout:

  dtype     | layout
  ----------|----------------
  BFLOAT16  | TILE, ROW_MAJOR

The output_tensor will be in the following data type and layout:

  dtype     | layout
  ----------|-------
  BFLOAT16  | TILE

Memory Support:
  • Interleaved: DRAM and L1

Limitations:
  • All input tensors must be on-device.

  • When dim is not specified (i.e. a full product over all elements), the input_tensor must be bfloat16, and keepdim=True is not supported, since the operation produces a scalar.

  • Sharding is not supported for this operation.

Example

tensor = ttnn.rand((1, 2), device=device)
output = ttnn.prod(tensor, dim=0)    # product along dim 0
output_all_dims = ttnn.prod(tensor)  # full product over all elements (scalar result)
Example (NC Product)

dims = [0, 1]
input_shape = [2, 3, 4, 5]
output_shape = [1, 1, 4, 5]  # each reduced dimension has size 1 in the output

input_tensor = ttnn.rand(input_shape, device=device)
# Preallocated output tensor; its initial values are overwritten by prod.
output_tensor = ttnn.rand(output_shape, device=device)

output = ttnn.prod(input_tensor=input_tensor, output_tensor=output_tensor, dims=dims)