ttnn.bitcast

ttnn.bitcast(input_tensor: ttnn.Tensor, dtype: ttnn.DataType, *, memory_config: ttnn.MemoryConfig = None, output_tensor: ttnn.Tensor = None) → ttnn.Tensor

Bitcast reinterprets the input tensor's bit pattern as the target data type without converting values (unlike ttnn.typecast, which converts values and therefore changes the bit pattern).
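The bitcast-vs-typecast distinction can be illustrated on the host with the standard-library struct module; this sketch is independent of ttnn and only demonstrates the semantics:

```python
import struct

# Bitcast: reinterpret the 32-bit pattern of a float32 as a uint32.
bits = struct.unpack("<I", struct.pack("<f", 1.0))[0]
print(hex(bits))  # 0x3f800000, the IEEE-754 encoding of 1.0

# Typecast (value conversion): the value 1.0 becomes the integer 1,
# whose bit pattern (0x00000001) differs from the float encoding above.
print(int(1.0))  # 1
```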

Parameters:
  • input_tensor (ttnn.Tensor) – the input tensor.

  • dtype (ttnn.DataType) – output data type. Must have the same bit size as input dtype. Supported pairs: UINT16 <-> BFLOAT16 (both 16 bits), UINT32 <-> FLOAT32 (both 32 bits), UINT32 <-> INT32 (both 32 bits).

Keyword Arguments:
  • memory_config (ttnn.MemoryConfig, optional) – Memory configuration for the operation. Defaults to None.

  • output_tensor (ttnn.Tensor, optional) – preallocated output tensor. Defaults to None.

Returns:

ttnn.Tensor – the output tensor.

Note

Supported dtypes, layouts, and ranks:

  Dtypes                                      Layouts   Ranks
  BFLOAT16, FLOAT32, INT32, UINT16, UINT32    TILE      2, 3, 4

Example

import torch
import ttnn
from loguru import logger

device = ttnn.open_device(device_id=0)

# Create a tensor of uint16 values (the bit patterns of bfloat16 numbers)
tensor = ttnn.from_torch(
    torch.tensor([[16457, 16429], [32641, 31744]], dtype=torch.uint16),
    dtype=ttnn.uint16,
    layout=ttnn.TILE_LAYOUT,
    device=device,
)

# Bitcast uint16 to bfloat16 (reinterprets the bit pattern; no value conversion)
output = ttnn.bitcast(tensor, ttnn.bfloat16)
logger.info(f"Bitcast uint16->bfloat16: {output}")

ttnn.close_device(device)
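The values the example should decode to can be checked on the host without a device. A bfloat16 is the top 16 bits of a float32, so padding each uint16 with 16 zero bits recovers the float; bf16_bits_to_float is a hypothetical helper written here for illustration:

```python
import struct

def bf16_bits_to_float(u16: int) -> float:
    # bfloat16 is the upper half of a float32: shift left 16 bits,
    # then reinterpret the 32-bit pattern as an IEEE-754 float.
    return struct.unpack("<f", struct.pack("<I", u16 << 16))[0]

print(bf16_bits_to_float(16457))  # 3.140625
print(bf16_bits_to_float(16429))  # 2.703125
```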