For quite some time now, we have known about the benefits of transfer learning in Computer Vision (CV) applications. Nowadays, pre-trained Deep Convolutional Neural Networks (DCNNs) are the first go-to solution when learning a new task. These large models are trained on huge supervised corpora, such as ImageNet, and, most importantly, their features are known to adapt well to new problems. This is particularly interesting when annotated training data is scarce. In situations like this, we take the model's pre-trained weights, append a new classifier layer on top of them, and retrain the network. This is called transfer learning, and it is one of the most used techniques in CV. Aside from a few tricks when performing fine-tuning (if applicable), it has been shown many times that, when training for a new task, models initialized with pre-trained weights tend to learn faster and be more accurate than models trained from scratch with random initialization.
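To make that recipe concrete, here is a minimal sketch in PyTorch, assuming torchvision's ImageNet-pretrained ResNet-50; the ten-class head and the frozen backbone are illustrative choices, not details from the text above.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet (weights shipped with torchvision).
backbone = models.resnet50(pretrained=True)

# Optionally freeze the pre-trained weights so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classifier layer with one sized for the new task
# (num_classes = 10 is a placeholder for the target dataset).
num_classes = 10
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Only the parameters of the new head are passed to the optimizer.
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3, momentum=0.9)
```

Fine-tuning the whole network instead of just the head amounts to skipping the freezing loop and passing all parameters to the optimizer, usually with a smaller learning rate.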
Features
- Unsupervised Representation Learning
- Contrastive Learning
- A Simple Framework for Contrastive Learning of Visual Representations
- SimCLR has the advantage of not needing extra logic to mine negatives (see the loss sketch after this list)
- SimCLR is trained with batch sizes as large as 8192
- SimCLR uses ResNet-50 as the main ConvNet backbone
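The last three points are related: with a large batch, every other example already serves as a negative for the contrastive (NT-Xent) loss, so no mining step is required. Below is a minimal sketch of that loss in PyTorch; the function name, the temperature default, and the tensor shapes are assumptions for illustration, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z_i, z_j, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy) loss.

    z_i, z_j: [N, D] projections of two augmented views of the same N images.
    Every other example in the batch acts as a negative, so no explicit
    negative mining is needed.
    """
    n = z_i.size(0)
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)   # [2N, D], unit norm
    sim = torch.matmul(z, z.T) / temperature                # [2N, 2N] similarities

    # Mask out self-similarity on the diagonal.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))

    # For row i, the positive is its counterpart from the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)

    return F.cross_entropy(sim, targets)
```

Given two batches of projections `z_i` and `z_j` from two augmentations of the same images, `loss = nt_xent_loss(z_i, z_j)` can be minimized with any standard optimizer; the larger the batch, the more in-batch negatives each positive pair is contrasted against.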