AutoAugment
class torchvision.transforms.v2.AutoAugment(policy: AutoAugmentPolicy = AutoAugmentPolicy.IMAGENET, interpolation: Union[InterpolationMode, int] = InterpolationMode.NEAREST, fill: Union[int, float, Sequence[int], Sequence[float], None, dict[Union[type, str], Union[int, float, Sequence[int], Sequence[float], None]]] = None)
AutoAugment data augmentation method based on "AutoAugment: Learning Augmentation Strategies from Data".
This transformation works on images and videos only.

If the input is a torch.Tensor, it should be of type torch.uint8, and it is expected to have […, 1 or 3, H, W] shape, where … means an arbitrary number of leading dimensions. If img is a PIL Image, it is expected to be in mode "L" or "RGB".

Parameters:
- policy (AutoAugmentPolicy, optional) – Desired policy enum defined by torchvision.transforms.autoaugment.AutoAugmentPolicy. Default is AutoAugmentPolicy.IMAGENET.
- interpolation (InterpolationMode, optional) – Desired interpolation enum defined by torchvision.transforms.InterpolationMode. Default is InterpolationMode.NEAREST. If input is Tensor, only InterpolationMode.NEAREST and InterpolationMode.BILINEAR are supported.
- fill (sequence or number, optional) – Pixel fill value for the area outside the transformed image. If given a number, the value is used for all bands respectively.
:- static get_params(transform_num: int) tuple[int, torch.Tensor, torch.Tensor] [source]ΒΆ
Get parameters for the autoaugment transformation.

Returns:
The parameters required by the autoaugment transformation.