ttnn.div_no_nan_bw

ttnn.div_no_nan_bw(grad_tensor: ttnn.Tensor, input_tensor: ttnn.Tensor, scalar: float, *, memory_config: ttnn.MemoryConfig = None) → List of ttnn.Tensor

Performs backward operations for div_no_nan on input_tensor and scalar with the given grad_tensor.

\[\mathrm{output\_tensor}_i = \verb|div_no_nan_bw|(\mathrm{grad\_tensor}_i, \mathrm{input\_tensor}_i, \verb|scalar|)\]
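For intuition, the backward rule can be sketched on the host. This is a minimal reference, not the device implementation, assuming the usual convention that div_no_nan(x, scalar) = x / scalar for scalar != 0 and 0 otherwise, so the input gradient is grad / scalar (and 0 when scalar == 0):

import torch

def div_no_nan_bw_reference(grad: torch.Tensor, input_tensor: torch.Tensor, scalar: float) -> list:
    # Hypothetical host-side reference for the gradient w.r.t. input_tensor.
    # The gradient does not depend on input_tensor since the forward op is linear in it.
    if scalar == 0.0:
        # Forward output is 0 everywhere when the denominator is 0, so the gradient is 0
        return [torch.zeros_like(grad)]
    return [grad / scalar]
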
Parameters:
  • grad_tensor (ttnn.Tensor) – the input gradient tensor.

  • input_tensor (ttnn.Tensor) – the input tensor.

  • scalar (float) – the scalar denominator value.

Keyword Arguments:

  • memory_config (ttnn.MemoryConfig, optional) – memory configuration for the operation. Defaults to None.

Returns:

List of ttnn.Tensor – a list containing the output gradient tensor.

Note

Supported dtypes, layouts, and ranks:

Dtypes               Layouts   Ranks
BFLOAT16, BFLOAT8_B  TILE      2, 3, 4

Example

import torch
import ttnn
from loguru import logger

# Open a device to run the operation on
device = ttnn.open_device(device_id=0)

# Create sample tensors for the backward division-without-NaN operation
grad_tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16), layout=ttnn.TILE_LAYOUT, device=device
)
input_tensor = ttnn.from_torch(
    torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16, requires_grad=True), layout=ttnn.TILE_LAYOUT, device=device
)
scalar = 2.0

# Call div_no_nan_bw; it returns a list of gradient tensors
output = ttnn.div_no_nan_bw(grad_tensor, input_tensor, scalar)
logger.info(f"Division No NaN Backward: {output}")
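
As a sanity check, the device result can be compared against a host-side reference. This is a minimal sketch assuming the first element of the returned list is the gradient with respect to input_tensor, equal to grad / scalar for a non-zero scalar and 0 otherwise:

# Host-side reference: the input gradient of div_no_nan(input, scalar)
grad_torch = torch.tensor([[1, 2], [3, 4]], dtype=torch.bfloat16)
expected = grad_torch / scalar if scalar != 0 else torch.zeros_like(grad_torch)

# Convert the first returned tensor back to torch and compare
# (assumption: output[0] holds the gradient w.r.t. input_tensor)
result = ttnn.to_torch(output[0])
assert torch.allclose(result.float(), expected.float())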