5 Comments
Krishna Mohan:

Thanks 👍

brok:

I have a question. The hack you proposed is great, but we actually do need the on-the-fly transformations every single epoch if we are using a stochastic augmentation (e.g. RandAugment()) to expose the model to new alterations at every training step, which is the common approach everywhere since it significantly boosts the model's generalization ability.
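For reference, a minimal sketch of such a stochastic pipeline; the RandAugment/RandomHorizontalFlip combination and the uint8 stand-in image are illustrative assumptions, not the post's exact setup, and RandAugment requires a reasonably recent torchvision:

```python
import torch
from torchvision import transforms

# Stochastic augmentation: RandAugment re-draws the operation and its magnitude
# on every call, so the same image gets a different alteration each training step.
augment = transforms.Compose([
    transforms.RandAugment(),
    transforms.RandomHorizontalFlip(p=0.5),
])

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # stand-in image
out1 = augment(img)
out2 = augment(img)
print(torch.equal(out1, out2))  # almost always False: fresh random ops per call
```

Because the randomness lives inside the transform call itself, caching its output would freeze one particular alteration, which is exactly the concern raised here.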

Guna:

Nice blog, sir. Instead of using shuffle, it's better to use `sampler = RandomSampler(dataset, replacement=False, num_samples=len(dataset))`, which ensures every sample is used exactly once per epoch.
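A small usage sketch of this sampler with a DataLoader; the toy TensorDataset and batch size are placeholders for whatever the post uses, and `num_samples` is left at its default of `len(dataset)`:

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset

# Toy stand-in; 'dataset' would be the dataset from the post.
dataset = TensorDataset(torch.randn(100, 3, 32, 32), torch.randint(0, 10, (100,)))

# With replacement=False, RandomSampler yields a fresh permutation of all
# indices every epoch (num_samples defaults to len(dataset)), so each sample
# is used exactly once per epoch, equivalent to passing shuffle=True.
sampler = RandomSampler(dataset, replacement=False)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for images, labels in loader:
    pass  # one training step per batch would go here
```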

Yuvraj Dhepe:

Pretty awesome, thanks for pointing it out :)

Yuvraj Dhepe:

The only reason I can think of for keeping the transformations on the fly is augmentation: if the transformations are applied anew in every epoch, it can be good for robustness. The only worry one could have is whether the data that was transformed in the first epoch should be transformed again or not.

Maybe you will clarify this in the next blog; just thought to share my thoughts.
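One way to reconcile the two concerns is to cache the deterministic part of the pipeline once and keep only the stochastic augmentation on the fly, so each epoch augments the cached original rather than an already-augmented copy. A rough sketch with hypothetical class and variable names (the Resize target and RandAugment choice are illustrative assumptions, not the post's pipeline):

```python
from torch.utils.data import Dataset
from torchvision import transforms

class CachedThenAugmentedDataset(Dataset):
    """Hypothetical split of the pipeline: deterministic steps run once and are
    cached; the stochastic augmentation runs on the fly, always starting from
    the cached original, so epoch 1's output is never augmented a second time."""

    def __init__(self, raw_images, labels):
        # raw_images: list of uint8 image tensors; labels: class indices.
        deterministic = transforms.Resize((224, 224))   # same output every epoch
        self.cached = [deterministic(img) for img in raw_images]  # one-time cost
        self.labels = labels
        self.stochastic = transforms.RandAugment()       # re-drawn on every call

    def __len__(self):
        return len(self.cached)

    def __getitem__(self, idx):
        # Augment the cached original, not a previously augmented copy.
        return self.stochastic(self.cached[idx]), self.labels[idx]
```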
