ttnn.rsub_bw

ttnn.rsub_bw(grad_tensor: ttnn.Tensor, input_tensor_a: ttnn.Tensor, input_tensor_b: ttnn.Tensor, *, are_required_outputs: List[bool] | None = [True, True], memory_config: ttnn.MemoryConfig | None = None, input_grad: ttnn.Tensor | None = None, other_grad: ttnn.Tensor | None = None, queue_id: int | None = 0) → List of ttnn.Tensor

Performs backward operations for the subtraction of input_tensor_a from input_tensor_b (the reversed order of the subtraction operator) with the given grad_tensor.
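For z = input_tensor_b - input_tensor_a, the chain rule gives a gradient of -grad_tensor with respect to input_tensor_a and grad_tensor (unchanged) with respect to input_tensor_b. A minimal plain-PyTorch sketch of that math, for reference only (an illustration of the gradient rule, not the ttnn implementation):

>>> import torch
>>> grad = torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16)
>>> input_grad_ref = -grad         # gradient w.r.t. input_tensor_a
>>> other_grad_ref = grad.clone()  # gradient w.r.t. input_tensor_b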

Parameters:
  • grad_tensor (ttnn.Tensor) – Gradient tensor from the downstream operation.

  • input_tensor_a (ttnn.Tensor) – Input tensor that is subtracted (the subtrahend).

  • input_tensor_b (ttnn.Tensor) – Input tensor it is subtracted from (the minuend).

Keyword Arguments:
  • are_required_outputs (List[bool], optional) – List of required outputs. Defaults to [True, True].

  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

  • input_grad (ttnn.Tensor, optional) – Preallocated output tensor for the gradient of input_tensor_a (see the preallocation sketch after this list). Defaults to None.

  • other_grad (ttnn.Tensor, optional) – Preallocated output tensor for the gradient of input_tensor_b. Defaults to None.

  • queue_id (int, optional) – Command queue id. Defaults to 0.

Returns:
  List of ttnn.Tensor – gradient tensors [input_grad, other_grad] for input_tensor_a and input_tensor_b, respectively.
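input_grad and other_grad let the operation write its results into preallocated tensors instead of allocating new ones. A hedged sketch of that path, reusing grad_tensor, tensor1, and tensor2 from the Example section below and assuming ttnn.zeros_like is available for preallocation:

>>> input_grad = ttnn.zeros_like(tensor1)
>>> other_grad = ttnn.zeros_like(tensor2)
>>> output = ttnn.rsub_bw(grad_tensor, tensor1, tensor2,
...     are_required_outputs=[True, True],
...     input_grad=input_grad, other_grad=other_grad,
...     queue_id=0)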

Note

Supported dtypes, layouts, and ranks:

Dtypes                Layouts    Ranks
--------------------  ---------  --------
BFLOAT16, BFLOAT8_B   TILE       2, 3, 4

bfloat8_b/bfloat4_b is only supported on TILE_LAYOUT

Example

>>> grad_tensor = ttnn.from_torch(torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device)
>>> tensor1 = ttnn.from_torch(torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16, requires_grad=True), layout=ttnn.TILE_LAYOUT, device=device)
>>> tensor2 = ttnn.from_torch(torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16, requires_grad=True), layout=ttnn.TILE_LAYOUT, device=device)
>>> output = ttnn.rsub_bw(grad_tensor, tensor1, tensor2)
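Since the call returns the list of gradient tensors, the result can be unpacked directly; per the reversed-subtraction math above, the first gradient equals the negated grad_tensor and the second equals grad_tensor:

>>> input_grad, other_grad = output
>>> # input_grad == -grad_tensor (w.r.t. tensor1); other_grad == grad_tensor (w.r.t. tensor2)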