If I understand the code correctly, we are assuming that `train_tensors[0]` is always the feature tensors and `train_tensors[1]` is always the labels.
However, `Dataset` in torch assumes the following structure:
```python
In [1]: import torchvision.datasets

In [2]: train_tensors = torchvision.datasets.CIFAR10('cifar10/')

In [3]: train_tensors[0]
Out[3]: (<PIL.Image.Image image mode=RGB size=32x32 at 0x7F8B6DAD3370>, 6)

In [4]: train_tensors[1]
Out[4]: (<PIL.Image.Image image mode=RGB size=32x32 at 0x7F8B6DAD33A0>, 9)
```
It indicates that the `__getitem__` of `Dataset` (this actually applies to other `Dataset` subclasses as well) returns:

`train_tensors[i]` := the i-th instance in the given dataset

It does not matter unless we use torch utilities, but those utilities will cause many issues when we try to merge image tasks.
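To make the mismatch concrete, here is a minimal sketch (with made-up tensors; `PairDataset` is a hypothetical name) contrasting the two conventions: indexing a `(features, labels)` tuple by position versus indexing a torch `Dataset`, whose `__getitem__` returns one `(sample, label)` pair per index:

```python
import torch
from torch.utils.data import Dataset

X = torch.randn(100, 3)            # 100 samples, 3 features each
y = torch.randint(0, 2, (100,))    # 100 binary labels

# Convention assumed in the code under discussion:
# index 0 -> all features, index 1 -> all labels.
train_tensors = (X, y)

# Convention of torch's Dataset: index i -> the i-th (sample, label) pair.
class PairDataset(Dataset):
    def __init__(self, X, y):
        self.X, self.y = X, y

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i]

ds = PairDataset(X, y)
sample, label = ds[0]   # one instance, not the whole feature tensor
```

So `train_tensors[0]` is the full `(100, 3)` feature tensor under the first convention, but a single 3-element sample (plus its label) under the second.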
Hey, actually we do follow the structure of a `Dataset` defined in torch, see here. Specifically for image datasets, `FilePathDataset` also follows that structure, see here. I don't think we will have issues when we try to merge image tasks. Moreover, if we were going to have issues, we would have had them for tabular tasks as well, since we use a torch `DataLoader` and it works fine. However, we may have issues with other parts of the `ImageDataset` code, but we can look at that when we start working with images.
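A small sketch of the point about the `DataLoader` (with made-up tensors): a dataset whose `__getitem__` yields `(sample, label)` pairs, such as `TensorDataset`, is batched correctly by a torch `DataLoader`, which is why tuple-style datasets work fine for tabular tasks:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

X = torch.randn(8, 4)   # 8 samples, 4 features each
y = torch.arange(8)     # 8 labels

# TensorDataset follows the per-sample convention: ds[i] == (X[i], y[i]).
ds = TensorDataset(X, y)

# DataLoader collates per-sample pairs into batched tensors.
loader = DataLoader(ds, batch_size=4)
xb, yb = next(iter(loader))   # first batch of 4 samples and 4 labels
```

Here `xb` has shape `(4, 4)` and `yb` has shape `(4,)`, i.e. the loader reassembles batches from per-sample pairs without ever relying on `train_tensors[0]`/`train_tensors[1]` positional indexing.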
The continuation of issue #352.