
torch.cuda.memory.caching_allocator_alloc

torch.cuda.memory.caching_allocator_alloc(size, device=None, stream=None)

Perform a memory allocation using the CUDA memory allocator.

Memory is allocated for a given device and stream. This function is intended for interoperability with other frameworks. Allocated memory is released through caching_allocator_delete().

Parameters
  • size (int) – number of bytes to be allocated.

  • device (torch.device or int, optional) – selected device. If it is None, the default CUDA device is used.

  • stream (torch.cuda.Stream or int, optional) – selected stream. If it is None, the default stream for the selected device is used.

Note

See Memory management for more details about GPU memory management.
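Example

A minimal usage sketch (not taken from the original documentation): allocate a raw buffer through the caching allocator and later free it with caching_allocator_delete(). The size and device index are arbitrary example values.

import torch

if torch.cuda.is_available():
    # Allocate 1 MiB on device 0 via PyTorch's caching allocator.
    # The return value is a raw device pointer (an integer address).
    ptr = torch.cuda.memory.caching_allocator_alloc(1024 * 1024, device=0)

    # ... hand the raw pointer to another framework or a custom CUDA kernel ...

    # Memory obtained this way must be released explicitly.
    torch.cuda.memory.caching_allocator_delete(ptr)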