### `tt.cpu_module` (tt::CPUModuleOp)

Module-wrapper operation for CPU ops

Syntax:

```
operation ::= `tt.cpu_module` attr-dict-with-keyword regions
```
Custom module operation that holds a single ModuleOp, which should contain all funcs intended to run on CPU.
Example:

```mlir
tt.cpu_module {
  module {
    func.func @foo() { ... }
  }
}
```
Traits: `IsolatedFromAbove`, `NoRegionArguments`, `NoTerminator`, `SingleBlock`, `SymbolTable`
### `tt.device_module` (tt::DeviceModuleOp)

Module-wrapper operation for device ops

Syntax:

```
operation ::= `tt.device_module` attr-dict-with-keyword $bodyRegion
```
Custom module operation that holds a single ModuleOp, which should contain all funcs intended to run on device.
Example:

```mlir
tt.device_module {
  module {
    func.func @foo() { ... }
  }
}
```
Traits: `IsolatedFromAbove`, `NoRegionArguments`, `NoTerminator`, `SingleBlock`, `SymbolTable`
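
The two wrappers can coexist in a single top-level module, keeping the device funcs and the CPU-hosted funcs in separate nested ModuleOps. The sketch below is illustrative only; the function names are hypothetical and the bodies are elided.

```mlir
// Hypothetical top-level layout: one device_module and one cpu_module,
// each wrapping its own nested builtin module of funcs.
module {
  tt.device_module {
    module {
      func.func @main_device() { ... }     // runs on device
    }
  }
  tt.cpu_module {
    module {
      func.func @hoisted_on_cpu() { ... }  // runs on CPU
    }
  }
}
```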
### `tt.device` (tt::DeviceOp)

Named device

Syntax:

```
operation ::= `tt.device` $sym_name `=` $device_attr attr-dict
```

Interfaces: `Symbol`

Attributes:
Attribute | MLIR Type | Description |
---|---|---|
sym_name | ::mlir::StringAttr | string attribute |
device_attr | ::mlir::tt::DeviceAttr | Device attribute in TT dialect. Describes the physical layout of a device in the system and is made up of a few components: (1) a grid attribute that describes the device's compute grid shape; it not only describes the shape of the compute grid, but also carries an affine map that describes how the logical grid maps to the physical grid; (2) two affine maps that describe how a tensor layout's linear attribute maps to the L1 and DRAM memory spaces; (3) a mesh shape that describes the virtual layout of the chips with respect to each other; note that in a multi-chip system, this grid encapsulates the entire system's grid shape, e.g. an 8x16 grid could be made up of a 1x2 mesh of chips side-by-side, and the mesh attribute configures how the above grid/map attributes are created such that they implement this mesh topology; (4) an array of chip ids that this device is made up of; this array's length must match the volume of the mesh shape and should be interpreted in row-major order. |
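
The entry above has no example, so a minimal sketch of a single-chip device declaration follows. The symbol name and the DeviceAttr parameter names (workerGrid, l1Map, dramMap, meshShape, chipIds) are illustrative assumptions; check the DeviceAttr definition for the exact assembly format.

```mlir
// Hypothetical single-chip device declaration; parameter names and the
// elided affine maps are assumptions, not taken from this page.
tt.device @default_device = <workerGrid = #tt.grid<8x8>,  // compute grid shape
                             l1Map = ...,                  // linear -> L1 map
                             dramMap = ...,                // linear -> DRAM map
                             meshShape = 1x1,              // single-chip mesh
                             chipIds = [0]>                // mesh volume == 1 chip
```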
### `tt.get_tuple_element` (tt::GetTupleElementOp)

GetTupleElement operation

Syntax:

```
operation ::= `tt.get_tuple_element` $operand `[` $index `]` attr-dict `:` functional-type(operands, results)
```

Extracts the element at position `index` of the `operand` tuple and produces a `result`.

Example:

```mlir
%result = tt.get_tuple_element %operand[0] : (tuple<tensor<32x32xbf16>, tensor<1x32xf32>>) -> tensor<32x32xbf16>
```
Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

Attributes:
Attribute | MLIR Type | Description |
---|---|---|
index | ::mlir::IntegerAttr | 32-bit signless integer attribute whose value is non-negative |
Operands:
Operand | Description |
---|---|
operand | nested tuple with any combination of ranked tensor of any type values |
Results:
Result | Description |
---|---|
result | ranked tensor of any type values |
### `tt.tuple` (tt::TupleOp)

Tuple operation

Syntax:

```
operation ::= `tt.tuple` $operands attr-dict `:` custom<TupleOpType>(type($operands), type($result))
```

Produces a `result` tuple from the given `operands`.

Example:

```mlir
%result = tt.tuple %operand0, %operand1 : tuple<tensor<32xbf16>, tensor<1x32xf32>>
```
Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

Operands:
Operand | Description |
---|---|
operands | variadic of ranked tensor of any type values |
Results:
Result | Description |
---|---|
result | nested tuple with any combination of ranked tensor of any type values |
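
tt.tuple and tt.get_tuple_element are natural complements: one packs tensors into a tuple, the other extracts a single element by index. The sketch below is illustrative; the value names are hypothetical and the type annotations mirror the examples above.

```mlir
// Hypothetical round trip: pack two tensors into a tuple, then pull the
// second element (index 1) back out.
%t = tt.tuple %a, %b : tuple<tensor<32x32xbf16>, tensor<1x32xf32>>
%b_again = tt.get_tuple_element %t[1] : (tuple<tensor<32x32xbf16>, tensor<1x32xf32>>) -> tensor<1x32xf32>
```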