Upsampling and downsampling are slow if using the tensor coming from read_image

Recent releases of Torchvision, and the documentation that supports them, suggest that we can use io.read_image + transforms.ConvertImageDtype instead of the traditional _fn + transforms.ToTensor. However, I have found that there are two issues:

1. io.read_image + transforms.ConvertImageDtype does not actually return the same tensor values as PIL + transforms.ToTensor, even though the two are supposed to provide the same functionality.
2. While io.read_image + transforms.ConvertImageDtype is itself significantly faster than using PIL, combining it with the transforms.Resize operation - specifically when upsampling - makes the overall operation much slower than the PIL alternative. That is, applying transforms.Resize to the tensor generated by io.read_image + transforms.ConvertImageDtype is much slower than applying the same resize to the output of a PIL read + transforms.ToTensor.

To add onto point 2, the two sets of functions I mention return the same type of tensor: torch.float. I can't really understand why this happens, since both calls to Resize are on tensors of type torch.FloatTensor. Please refer to my post on the PyTorch Forum here for the full analysis.

Environment: Python platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23.