`torchvision` is currently building (line 321 in cac4e22) and testing (`vision/packaging/torchvision/meta.yaml`, line 13 in cac4e22) against `libjpeg`. `Pillow` has been building against `libjpeg-turbo` on Windows for some time now and, since `Pillow=9` (Jan 2022), on all platforms.
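For reference, which JPEG library a given `Pillow` install actually links against can be checked at runtime via the `PIL.features` module — a small sketch (the `"libjpeg_turbo"` feature flag only exists in Pillow 9.0+; on older versions `check_feature` raises `ValueError`):

```python
from PIL import features

# True if this Pillow build links against libjpeg-turbo
# (feature flag available since Pillow 9.0).
print(features.check_feature("libjpeg_turbo"))

# Version string of the underlying JPEG library.
print(features.version("jpg"))
```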
This has two downsides for us:

- We can't use `Pillow` as a reference for our own decoding and encoding ops.
- As the name implies, `libjpeg-turbo` is a faster implementation of the JPEG standard. Thus, our I/O ops are simply slower than using `Pillow`, which hinders adoption.
Recently, @NicolasHug led a push to also use `libjpeg-turbo`, but hit a few blockers:

- Our workflows use the `defaults` channel from `conda`. Unfortunately, on `defaults`, `libjpeg-turbo` is only available for Windows and macOS.
- Adding `conda-forge` to the channels for Linux leads to crazy environment solve times (10+ minutes), which ultimately time out the CI. In general, this change should be possible if `conda-forge` has a lower priority than `defaults`.
- Depending on the experimental `libmamba` solver indeed speeds up the solve enough for the CI not to time out (it is still a little slower than before). Unfortunately, our CI setup does not work properly with it, since a CUDA 11.6 workflow still pulls a PyTorch version built against CUDA 11.3.
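The channel setup from the second blocker would look roughly like this in a `.condarc` — a sketch, assuming strict channel priority is acceptable for our workflows:

```yaml
# .condarc sketch: keep defaults first so conda-forge only fills gaps.
# With strict priority, each package is taken from the highest-priority
# channel that carries it, which keeps the solve space smaller.
channels:
  - defaults
  - conda-forge
channel_priority: strict
```

Whether strict priority alone brings the solve time down enough to avoid the CI timeout is exactly what would need to be measured.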
From here on, I currently see four options:

- Only build and test the Windows and macOS binaries against `libjpeg-turbo`. This would mean that arguably most of our users won't see the speed-up.
- Find a way to stop the CI from timing out when using `conda-forge` as an extra channel. This can probably be done through configuration or by emitting more output during the solve.
- Fix our CI setup to work with the `libmamba` solver.
- Package `libjpeg-turbo` for Linux ourselves. We already use the `pytorch` or `pytorch-nightly` channels. If it were available there, we wouldn't need to pull it from `conda-forge`. In Use libjpeg-turbo in CI instead of libjpeg #5941 (comment), @malfet only talks about testing against it, but maybe we can also build against it.
cc @seemethere