ttnn.cumprod
- ttnn.cumprod(input: ttnn.Tensor, dim: int, *, dtype: ttnn.DataType | None = None, reverse_order: bool = False, out: ttnn.Tensor | None = None) → ttnn.Tensor
Returns the cumulative product of input along dimension dim. For a given input of size N, the output also contains N elements, such that:
\[\mathrm{output}_i = \mathrm{input}_1 \times \mathrm{input}_2 \times \cdots \times \mathrm{input}_i\]
- Parameters:
input (ttnn.Tensor) – input tensor. Must be on the device.
dim (int) – dimension along which to compute the cumulative product
- Keyword Arguments:
dtype (ttnn.DataType, optional) – desired output data type. If specified, the input tensor is cast to dtype before processing.
reverse_order (bool, optional) – whether to accumulate from the end of the accumulation axis toward the beginning. Defaults to False.
out (ttnn.Tensor, optional) – preallocated output tensor. If specified, out must have the same shape as input and be on the same device.
- Returns:
ttnn.Tensor – the output tensor.
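The cumulative-product definition above can be checked against NumPy, which follows the same element ordering (an illustrative sketch only, not the ttnn API; NumPy is assumed to be available):

```python
import numpy as np

# Reference semantics of a cumulative product along an axis.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Forward accumulation: output[i] = x[0] * x[1] * ... * x[i]
forward = np.cumprod(x)  # [1.0, 2.0, 6.0, 24.0]

# reverse_order=True accumulates from the end of the axis instead:
# output[i] = x[i] * x[i+1] * ... * x[N-1]
reverse = np.cumprod(x[::-1])[::-1]  # [24.0, 24.0, 12.0, 4.0]

print(forward.tolist())
print(reverse.tolist())
```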
Note
If both dtype and out are specified, then out.dtype must match dtype.
Supported dtypes, layouts, ranks, and dim values:

Dtypes             Layouts  Ranks          dim
BFLOAT16, FLOAT32  TILE     1, 2, 3, 4, 5  -rank <= dim < rank
INT32, UINT32      TILE     3, 4, 5        dim in {0, 1, …, rank - 3} or dim in {-rank, -rank + 1, …, -3}
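As a reading aid for the integer-dtype row above, the dim constraint can be sketched as a small predicate (a hypothetical helper for illustration, not part of the ttnn API):

```python
# Hypothetical helper: check whether `dim` is valid for INT32/UINT32 inputs,
# i.e. dim in {0, ..., rank - 3} or dim in {-rank, ..., -3}.
def int_dim_is_valid(dim: int, rank: int) -> bool:
    return dim in range(0, rank - 2) or dim in range(-rank, -2)

# For rank 4, valid dims are {0, 1} and their negative aliases {-4, -3};
# the last two (tile) dimensions are excluded.
assert int_dim_is_valid(0, 4)
assert int_dim_is_valid(-4, 4)
assert not int_dim_is_valid(2, 4)
assert not int_dim_is_valid(-2, 4)
```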
- Memory Support:
Interleaved: DRAM and L1
- Limitations:
Preallocated output must have the same shape as the input
Preallocated output for integer types is not supported
Example
# Create tensor
tensor_input = ttnn.rand((2, 3, 4), device=device)

# Apply ttnn.cumprod() on dim=0
tensor_output = ttnn.cumprod(tensor_input, dim=0)
logger.info(f"Cumprod result: {tensor_output}")

# With preallocated output and dtype
preallocated_output = ttnn.rand([2, 3, 4], dtype=ttnn.bfloat16, device=device)

# Apply ttnn.cumprod() with out and dtype
tensor_output = ttnn.cumprod(tensor_input, dim=0, dtype=ttnn.bfloat16, out=preallocated_output)
logger.info(f"Cumprod with preallocated output result: {tensor_output}")