I would like to ask how you fine-tune a reconstruction model on your own dataset. As far as I can tell, main_finetune.py only produces models for classification tasks.
Hello,
Thank you for the great work and the great repo.
I attempted to reproduce the pre-training of MAE-ViT-Large on a chest X-ray dataset of about 300k medical images. After 68 epochs the loss plateaued at around 0.0045 (trained without pixel-loss normalization), and the reconstructions fail to predict the masked regions correctly.
Could you suggest a reason why this is the case?
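For context, the pixel-loss normalization mentioned above (the `norm_pix_loss` option described in the MAE paper) changes the reconstruction target: each patch is normalized by its own mean and variance before the MSE is computed over the masked patches, so loss values with and without it are not directly comparable. A minimal NumPy sketch of that loss, with hypothetical shapes and values (not the repo's actual code):

```python
import numpy as np

def mae_loss(pred, target, mask, norm_pix_loss=False):
    """MSE over masked patches, optionally on per-patch-normalized targets.

    pred, target: (num_patches, patch_dim) flattened pixel values
    mask: (num_patches,), 1 = masked (to be reconstructed), 0 = visible
    """
    if norm_pix_loss:
        # Normalize each target patch by its own mean and variance,
        # as in the norm_pix_loss variant of the MAE objective.
        mean = target.mean(axis=-1, keepdims=True)
        var = target.var(axis=-1, keepdims=True)
        target = (target - mean) / np.sqrt(var + 1e-6)
    per_patch = ((pred - target) ** 2).mean(axis=-1)  # MSE per patch
    return (per_patch * mask).sum() / mask.sum()      # average over masked patches only

# Hypothetical data: 8 patches of 16 pixels, predictions with small noise.
rng = np.random.default_rng(0)
target = rng.normal(size=(8, 16))
pred = target + 0.1 * rng.normal(size=(8, 16))
mask = np.array([1, 1, 0, 0, 1, 0, 1, 1], dtype=float)

raw_loss = mae_loss(pred, target, mask)                        # raw-pixel MSE
norm_loss = mae_loss(pred, target, mask, norm_pix_loss=True)   # normalized-target MSE
```

Because the two variants measure error against different targets, a plateau value such as 0.0045 without normalization cannot be compared against runs (or reported numbers) that use `norm_pix_loss`.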