VirTex: Learning Visual Representations from Textual Annotations
Karan Desai and Justin Johnson
University of Michigan
Abstract
The de facto approach to many vision tasks is to start from pretrained visual representations, typically learned via supervised training on ImageNet. Recent methods have explored unsupervised pretraining to scale to vast quantities of unlabeled images. In contrast, we aim to learn high-quality visual representations from fewer images. To this end, we revisit supervised pretraining and seek data-efficient alternatives to classification-based pretraining. We propose VirTex -- a pretraining approach using semantically dense captions to learn visual representations. We train convolutional networks from scratch on COCO Captions, and transfer them to downstream recognition tasks including image classification, object detection, and instance segmentation. On all tasks, VirTex yields features that match or exceed those learned on ImageNet -- supervised or unsupervised -- despite using up to ten times fewer images.
CVPR 2021. Paper available at: arxiv.org/abs/2006.06666.
Code available at: github.com/kdexd/virtex.
Get the pretrained ResNet-50 visual backbone from our best-performing VirTex model in one line, without any installation!
import torch
# That's it! This one line only requires PyTorch.
model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
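As a quick sanity check, the snippet below extracts pooled image features with the loaded backbone. It is a minimal sketch, assuming the hub entry returns a torchvision-style ResNet-50 (so it exposes an avgpool layer) and that inputs follow standard ImageNet preprocessing; the image path is a placeholder.

import torch
from PIL import Image
from torchvision import transforms

# Load the pretrained VirTex visual backbone (same line as above).
model = torch.hub.load("kdexd/virtex", "resnet50", pretrained=True)
model.eval()

# Standard ImageNet preprocessing (an assumption; adjust to your pipeline).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Grab 2048-dimensional pooled features via a forward hook on the average
# pooling layer (assumes a torchvision-style ResNet-50 is returned).
features = {}
model.avgpool.register_forward_hook(
    lambda module, inputs, output: features.update(pool=torch.flatten(output, 1))
)

# "example.jpg" is a placeholder path; replace it with your own image.
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    model(image)

print(features["pool"].shape)  # torch.Size([1, 2048])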
More details are available in the VirTex Model Zoo. Next, dive deeper into our code with the User Guide and API Reference!
User Guide
- How to set up this codebase?
- VirTex Model Zoo
- How to train your VirTex model?
- How to evaluate on downstream tasks? (a minimal linear-probe sketch follows this list)
- PASCAL VOC 2007 Linear Classification
- ImageNet Linear Classification
- Instance Segmentation (and Object Detection) on COCO
- Instance Segmentation on LVIS
- Object Detection on PASCAL VOC 2007+12
- iNaturalist 2018 Fine-Grained Classification
- Image Captioning on COCO Captions val2017
- Running Image Captioning Inference on Arbitrary Images
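The linear classification evaluations listed above all follow the same recipe: freeze the pretrained VirTex backbone, extract features once, and train only a linear classifier on top. The snippet below is a minimal sketch of that recipe rather than the repository's evaluation code: the feature and label tensors are placeholders, the hyperparameters are illustrative, and the single-label cross-entropy objective is a simplification (the VOC07 protocol, for instance, typically trains per-class SVMs on frozen features and reports mAP).

import torch
import torch.nn as nn

# Placeholders standing in for features extracted once with the frozen VirTex
# backbone and for their ground-truth labels; in a real run these come from
# the downstream training set.
num_images, feature_dim, num_classes = 1000, 2048, 20  # e.g. 20 PASCAL VOC classes
features = torch.randn(num_images, feature_dim)
labels = torch.randint(0, num_classes, (num_images,))

# Only this linear classifier is trained; the backbone stays frozen.
classifier = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    logits = classifier(features)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()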
API Reference
Citation
If you find this code useful, please consider citing:
@inproceedings{desai2021virtex,
    title={{VirTex: Learning Visual Representations from Textual Annotations}},
    author={Karan Desai and Justin Johnson},
    booktitle={CVPR},
    year={2021}
}
Acknowledgments
We thank Harsh Agrawal, Mohamed El Banani, Richard Higgins, Nilesh Kulkarni and Chris Rockwell for helpful discussions and feedback on the paper. We thank Ishan Misra for discussions regarding PIRL evaluation protocol; Saining Xie for discussions about replicating iNaturalist evaluation as MoCo; Ross Girshick and Yuxin Wu for help with Detectron2 model zoo; Georgia Gkioxari for suggesting the Instance Segmentation pretraining task ablation; and Stefan Lee for suggestions on figure aesthetics. We thank Jia Deng for access to extra GPUs during project development; and UMich ARC-TS team for support with GPU cluster management. Finally, we thank all the Starbucks outlets in Ann Arbor for many hours of free WiFi. This work was partially supported by the Toyota Research Institute (TRI). However, note that this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity.