On an ACCV-2020 paper's claims pertaining to this repo #454
Unanswered · vinayprabhu asked this question in General · Replies: 1 comment, 1 reply
-
@vinayprabhu There are no models released to the public that were directly trained on JFT-300M, or even directly pre-trained on it and then fine-tuned (to my knowledge); Google would not allow that. However, the Noisy Student EfficientNets released by Google did use JFT-300M as an unlabeled dataset, as per the Noisy Student algorithm. See https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet#2-using-pretrained-efficientnet-checkpoints and https://arxiv.org/abs/1911.04252. I converted the weights from the official repository above, and they are available in this repository.
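For anyone landing here, a minimal sketch of loading one of those converted Noisy Student checkpoints through timm. The `tf_efficientnet_b1_ns` model name is an assumption based on the naming of the converted weights; verify with `timm.list_models('*_ns')` in your installed version:

```python
import timm
import torch

# EfficientNet-B1 with Noisy Student weights: trained on labeled ImageNet,
# with JFT-300M used only as *unlabeled* data (https://arxiv.org/abs/1911.04252).
# The model name is an assumption; verify with timm.list_models('*_ns').
model = timm.create_model('tf_efficientnet_b1_ns', pretrained=True)
model.eval()

# Sanity check: dummy forward pass at B1's default 240x240 resolution.
with torch.no_grad():
    out = model(torch.randn(1, 3, 240, 240))
print(out.shape)  # torch.Size([1, 1000])
```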
-
Hi Ross,
Thank you for your wonderful service to the community.
I came across a paper titled "Compensating for the Lack of Extra Training Data by Learning Extra Representation", which proposes "... a novel framework, Extra Representation (ExRep), to surmount the problem of not having access to the JFT-300M data by instead using ImageNet and the publicly available model that has been pre-trained on JFT-300M".
In Section 3, they make this specific claim: "Our choice of EfficientNet-B1 as the backbone is attributed to its relatively small number of parameters (7.8M), which enables efficient computation. The weights of the model pre-trained on JFT-300M in PyTorch are available here [3]", where [3] is https://github.com/rwightman/pytorch-image-models.
I wanted to check with you whether this claim is correct. To the best of my knowledge, your collection does not include any JFT-300M pre-trained models, correct? A quick way I tried to check this from the library itself is sketched below.
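A minimal sketch of that check, using timm's model registry; the wildcard patterns are my guesses based on timm's naming conventions, not something from the paper:

```python
import timm

# Noisy Student EfficientNets: ImageNet-labeled training that used
# JFT-300M only as *unlabeled* data. The '*_ns' pattern is a guess
# based on timm's naming conventions.
print(timm.list_models('*_ns'))

# Anything explicitly tagged with JFT; an empty list here suggests no
# directly JFT-300M-trained weights are hosted.
print(timm.list_models('*jft*'))

# Parameter count of the EfficientNet-B1 backbone the paper cites (~7.8M).
model = timm.create_model('efficientnet_b1')
print(sum(p.numel() for p in model.parameters()) / 1e6, 'M params')
```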
A reply would be most appreciated!
Kind regards,