ttnn.concat
ttnn.concat(input_tensor: List of ttnn.Tensor, dim: int, *, memory_config: ttnn.MemoryConfig | None = None, queue_id: int | None = 0, output_tensor: ttnn.Tensor | None = None, groups: int | None = 1) → ttnn.Tensor

Parameters:
input_tensor (List of ttnn.Tensor) – the input tensors.
dim (int) – the dimension along which to concatenate.
Keyword Arguments:
memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.
queue_id (int, optional) – Command queue ID. Defaults to 0.
output_tensor (ttnn.Tensor, optional) – Preallocated output tensor. Defaults to None.
groups (int, optional) – When groups is greater than 1, each input is split into groups partitions along the concatenating dimension, and the partitions are interleaved in the output: the first partition of each input, then the second partition of each input, and so on. This is useful for recombining grouped-convolution outputs during residual concatenation. Currently, groups > 1 is supported only for two height-sharded input tensors. Defaults to 1.
Returns:
ttnn.Tensor – the output tensor.
Example
>>> tensor1 = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
>>> tensor2 = ttnn.from_torch(torch.zeros((1, 1, 64, 32), dtype=torch.bfloat16), device=device)
>>> output = ttnn.concat([tensor1, tensor2], dim=3)
>>> print(output.shape)
[1, 1, 64, 64]
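The interleaving performed by the groups keyword can be sketched in plain Python. This is only an illustrative analogy for two 1-D inputs based on the description above, not the device implementation; concat_grouped is a hypothetical helper, not part of the ttnn API.

```python
def concat_grouped(a, b, groups=1):
    # Sketch of the `groups` semantics for two 1-D sequences:
    # each input is split into `groups` equal partitions along the
    # concat dimension, then partitions are interleaved into the
    # output: a0, b0, a1, b1, ...
    assert len(a) % groups == 0 and len(b) % groups == 0
    ca, cb = len(a) // groups, len(b) // groups
    out = []
    for g in range(groups):
        out += a[g * ca:(g + 1) * ca]  # g-th partition of the first input
        out += b[g * cb:(g + 1) * cb]  # g-th partition of the second input
    return out

a = [1, 2, 3, 4]  # stand-in for one grouped-convolution output
b = [5, 6, 7, 8]
print(concat_grouped(a, b, groups=1))  # plain concat: [1, 2, 3, 4, 5, 6, 7, 8]
print(concat_grouped(a, b, groups=2))  # interleaved: [1, 2, 5, 6, 3, 4, 7, 8]
```

With groups=1 this degenerates to an ordinary concatenation; with groups=2 the halves of each input alternate, which is the recombination pattern useful after a grouped convolution.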