ttnn.linear
- ttnn.linear(input_tensor_a: ttnn.Tensor, input_tensor_b: ttnn.Tensor, *, bias: ttnn.Tensor | None = None, transpose_a: bool | None = False, transpose_b: bool | None = False, memory_config: ttnn.MemoryConfig | None = None, dtype: ttnn.DataType | None = None, program_config: MatmulProgramConfig | None = None, activation: str | None = None, compute_kernel_config: ttnn.DeviceComputeKernelConfig | None = None, core_grid: ttnn.CoreGrid | None = None, output_tile: List[int] | None = None, optional_output_tensor: ttnn.Tensor | None = None) → ttnn.Tensor
-
Returns the linear transformation of the inputs: a matrix multiplication of input_tensor_a and input_tensor_b, plus an optional bias.
The limitations and behaviours are the same as for ttnn.matmul.
- Parameters:
-
input_tensor_a (ttnn.Tensor) – the first tensor to be multiplied. Needs to be on the device.
input_tensor_b (ttnn.Tensor) – the second tensor to be multiplied. Needs to be on the device.
- Keyword Arguments:
-
bias (ttnn.Tensor, optional) – the bias tensor to be added. If specified, needs to be on the device. Defaults to None.
transpose_a (bool, optional) – Whether to transpose input_tensor_a. Defaults to False.
transpose_b (bool, optional) – Whether to transpose input_tensor_b. Defaults to False.
memory_config (ttnn.MemoryConfig, optional) – the memory configuration of the output tensor. Defaults to None, which will result in using ttnn.DRAM_MEMORY_CONFIG.
dtype (ttnn.DataType, optional) – the data type of the output tensor. Defaults to None.
program_config (MatmulProgramConfig, optional) – the program configuration for the matmul operation. Defaults to None.
activation (str, optional) – the activation function to be applied. Defaults to None.
compute_kernel_config (ttnn.DeviceComputeKernelConfig, optional) – the compute kernel configuration for the matmul operation. Defaults to None.
core_grid (ttnn.CoreGrid, optional) – the grid of cores on which to distribute the sharded tensor (the result is written to the cores' L1 memory). Defaults to None.
output_tile (List[int], optional) – specifies the output tile configuration. Defaults to None.
optional_output_tensor (ttnn.Tensor, optional) – user-provided on-device output tensor to which the result of linear is written. Defaults to None.
- Returns:
-
ttnn.Tensor – the output tensor.
Example
>>> # batched matrix x broadcasted matrix
>>> activations = ttnn.to_device(ttnn.from_torch(torch.randn((10, 64, 32), dtype=torch.bfloat16)), device)
>>> weight = ttnn.to_device(ttnn.from_torch(torch.randn((32, 128), dtype=torch.bfloat16)), device)
>>> bias = ttnn.to_device(ttnn.from_torch(torch.randn((128,), dtype=torch.bfloat16)), device)
>>> output = ttnn.linear(activations, weight, bias=bias)
>>> print(output.shape)
[10, 64, 128]
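A second, minimal sketch showing the transpose_b and activation keyword arguments, assuming the same device setup as above; the weight shape and the "relu" activation string are illustrative and assume "relu" is an accepted fused-activation name.
>>> # weight stored as (out_features, in_features); transpose it inside the op
>>> activations = ttnn.to_device(ttnn.from_torch(torch.randn((10, 64, 32), dtype=torch.bfloat16)), device)
>>> weight_t = ttnn.to_device(ttnn.from_torch(torch.randn((128, 32), dtype=torch.bfloat16)), device)
>>> # transpose_b=True multiplies against weight_t transposed; activation="relu" fuses a ReLU onto the output
>>> output = ttnn.linear(activations, weight_t, transpose_b=True, activation="relu")
>>> print(output.shape)
[10, 64, 128]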