ttnn.model_preprocessing.preprocess_model_parameters

ttnn.model_preprocessing.preprocess_model_parameters(initialize_model: Callable[[], torch.nn.Module] | None = None, *, model_name: str | None = None, version: str | None = None, convert_to_ttnn: Callable[[torch.nn.Module, str], bool] | None = None, custom_preprocessor: Callable[[torch.nn.Module, str], dict | ParameterDict] | None = None, device: ttnn.Device | None = None, prefix: str | None = None) → ParameterDict

Preprocess the parameters of a given model, optionally converting them to ttnn.Tensor and caching the result.

Parameters:
  • model_name – Name of the model to be used by the cache. If not provided, the cache will be disabled.

  • version – Version of the model to be used by the cache. If not provided, the current git hash will be used. If the version doesn’t match the cached version, the cache will be invalidated.

  • initialize_model – Function for initializing the model. It’s not required if the model has already been cached and the cache is valid.

  • convert_to_ttnn – Function for determining whether to convert the parameters of a given module to ttnn.Tensor. If not provided, all modules will be converted.

  • custom_preprocessor – Function for preprocessing the parameters of a given module using user-specified logic. If not provided, the default preprocessor will be used.

  • device – Device on which to put the converted ttnn.Tensor parameters.

  • prefix – Prefix string to attach to the names of the modules/parameters. Useful for making submodule names match those in the original model.
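
A minimal usage sketch of the parameters above. The model class, the `"demo/my_model"` cache name, and `device_id=0` are assumptions for illustration; running it requires torch, ttnn, and Tenstorrent hardware, so the imports are deferred into the function:

```python
def load_parameters():
    """Sketch: preprocess a small torch model's parameters for ttnn.

    Assumes torch, ttnn, and a Tenstorrent device are available; imports
    are deferred so this file can be read without them installed.
    """
    import torch
    import ttnn
    from ttnn.model_preprocessing import preprocess_model_parameters

    # Hypothetical model used only for illustration.
    class MyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(32, 64)
            self.fc2 = torch.nn.Linear(64, 10)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    device = ttnn.open_device(device_id=0)
    try:
        parameters = preprocess_model_parameters(
            # model_name enables the cache; omit it to disable caching.
            model_name="demo/my_model",
            # initialize_model is only called on a cache miss.
            initialize_model=lambda: MyModel().eval(),
            # Convert every submodule's parameters to ttnn.Tensor
            # (this is also the default when convert_to_ttnn is omitted).
            convert_to_ttnn=lambda module, name: True,
            # Converted tensors are placed on this device.
            device=device,
        )
        # The returned ParameterDict mirrors the module tree, so the
        # converted weights are reachable as e.g. parameters.fc1.weight.
        return parameters
    finally:
        ttnn.close_device(device)
```

On the first call the model is initialized and its parameters converted; subsequent calls with the same model_name and version load from the cache without invoking initialize_model.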