TorchScript#
Created On: Sep 07, 2018 | Last Updated On: Jul 16, 2025
Warning
TorchScript is deprecated; please use torch.export instead.
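For context on the suggested replacement, the snippet below is a minimal sketch of capturing a module with torch.export instead of TorchScript. The model class, example input, and shapes are illustrative assumptions, not part of this reference.

```python
import torch

# Hypothetical model used only to illustrate the torch.export path.
class MyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

example_args = (torch.rand(3, 4),)

# torch.export.export captures the model as an ExportedProgram,
# which can then be run through its generated module.
exported = torch.export.export(MyModel(), example_args)
print(exported.module()(*example_args))
```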
Creating TorchScript Code#
| API | Description |
| --- | --- |
| `torch.jit.script` | Script the function. |
| `torch.jit.trace` | Trace a function and return an executable or `ScriptFunction` that will be optimized using just-in-time compilation. |
| `torch.jit.script_if_tracing` | Compile `fn` when it is first called during tracing. |
| `torch.jit.trace_module` | Trace a module and return an executable `ScriptModule` that will be optimized using just-in-time compilation. |
| `torch.jit.fork` | Create an asynchronous task executing `func` and a reference to the value of the result of this execution. |
| `torch.jit.wait` | Force completion of a `torch.jit.Future[T]` asynchronous task, returning the result of the task. |
| `torch.jit.ScriptModule` | Wrapper for C++ `torch::jit::Module` with methods, attributes, and parameters. |
| `torch.jit.ScriptFunction` | Functionally equivalent to a `ScriptModule`, but represents a single function and does not have any attributes or Parameters. |
| `torch.jit.freeze` | Freeze a `ScriptModule`, inlining its submodules and attributes as constants. |
| `torch.jit.optimize_for_inference` | Perform a set of optimization passes to optimize a model for the purposes of inference. |
| `torch.jit.enable_onednn_fusion` | Enable or disable oneDNN JIT fusion based on the parameter `enabled`. |
| `torch.jit.onednn_fusion_enabled` | Return whether oneDNN JIT fusion is enabled. |
| `torch.jit.set_fusion_strategy` | Set the type and number of specializations that can occur during fusion. |
| `torch.jit.strict_fusion` | Raise errors if not all nodes have been fused in inference, or symbolically differentiated in training. |
| `torch.jit.save` | Save an offline version of this module for use in a separate process. |
| `torch.jit.load` | Load a `ScriptModule` or `ScriptFunction` previously saved with `torch.jit.save`. |
| `torch.jit.ignore` | Decorator indicating to the compiler that a function or method should be ignored and left as a Python function. |
| `torch.jit.unused` | Decorator indicating to the compiler that a function or method should be ignored and replaced with the raising of an exception. |
| `torch.jit.interface` | Decorator to annotate classes or modules of different types. |
| `torch.jit.isinstance` | Provide container type refinement in TorchScript. |
| `torch.jit.Attribute` | Pass-through function that returns `value`, used to indicate to the TorchScript compiler that the left-hand side expression is a class instance attribute with type `type`. |
| `torch.jit.annotate` | Give the type of `the_value` to the TorchScript compiler. |
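As a quick orientation, here is a minimal sketch of the most common workflow the table describes: scripting or tracing a module, saving it, and loading it back. The module definition, input shape, and file name below are illustrative assumptions rather than part of the API reference.

```python
import torch

# Hypothetical module used only for illustration.
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

module = MyModule()
example_input = torch.rand(2, 4)

# Compile the module's Python source into TorchScript.
scripted = torch.jit.script(module)

# Alternatively, record the operations executed for an example input.
traced = torch.jit.trace(module, example_input)

# Save an offline version and load it back (possibly in a separate process).
torch.jit.save(scripted, "my_module.pt")
restored = torch.jit.load("my_module.pt")
print(restored(example_input).shape)
```

Scripting preserves Python control flow in the compiled module, while tracing only records the operations taken for the given example input; which entry point fits depends on the model.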