CLIP by OpenAI

Introduction

Nearly all state-of-the-art visual perception algorithms rely on the same formula: 

(1) pretrain a convolutional network on a large, manually annotated image classification dataset

(2) finetune the network on a smaller, task-specific dataset.

This technique has been widely used for several years and has led to impressive improvements on numerous tasks. 

State-of-the-art visual perception models for a wide range of tasks rely on supervised pretraining, and ImageNet classification is the de facto pretraining task for these models. Yet ImageNet is now nearly ten years old and, by modern standards, small. Even so, relatively little is known about the behavior of pretraining on datasets that are multiple orders of magnitude larger.

Even so, standard computer vision models still struggle to generalize to unseen test cases, which raises questions about the entire deep-learning approach to computer vision.

Background

CLIP (Contrastive Language–Image Pre-training) deviates from the standard practice of fine-tuning a pretrained model by taking the path of zero-shot learning. As described in the previous blog on DALL-E, zero-shot learning is the ability of a model to perform tasks that it was not explicitly trained to perform.

In 2016, Li et al. [1] demonstrated that, using natural language-based predictions, their model achieved about 11.4% zero-shot accuracy on the ImageNet dataset. They fine-tuned a 34-layer deep residual network that had been pretrained on ImageNet, using thirty million English comments from Flickr as the supervised training data, and trained the model to output n-grams for a given image.

However, 11.4% accuracy is far from the current state of the art, i.e., 88.4% accuracy (Xie et al., 2020). It is even below the 50% accuracy of classic computer vision approaches (Deng et al., 2012). This suggests that naively using raw text as a weak supervision signal does not, by itself, yield competitive results.

On the other hand, Mahajan et al. (2018) showed that predicting ImageNet-related hashtags on Instagram images is an effective pre-training task. When fine-tuned on ImageNet, these pre-trained models increased accuracy by over 5% and improved the overall state of the art at the time. There is evidently a spectrum between training on finely annotated images, which are scarce, and training on the practically unlimited supply of raw text paired with images.

The authors of CLIP created a new dataset of 400 million (image, text) training pairs and trained a simplified version of the ConVIRT model, i.e., the CLIP model, on it from scratch. Much like the GPT models, CLIP acquired capabilities such as geo-localization, OCR, action recognition, and much more during pre-training.

CLIP’s core idea

The core idea of the CLIP paper is to learn visual representations from a massive corpus of natural language data. The paper shows that a simple pre-training task is sufficient to achieve competitive zero-shot performance.

The objective of the CLIP model can be understood as follows:

Given an image, predict which text snippet, out of a set of 32,768 randomly sampled snippets, was actually paired with it in the dataset. For example, given the task of recognizing a number in an image, the model should assign a higher score to the paired caption "the number is one" than to competing candidates such as "the number is two", "the number is three", and so on.

To achieve this, the model has to learn the extensive connections between visual concepts and the words used to describe them. This is the intuition behind training on a massive corpus of natural language paired with images.

Training objective

State-of-the-art computer vision systems use enormous amounts of computational resources. Mahajan et al. (2018) required 19 GPU years to train their ResNeXt101-32x48d and Xie et al. (2020) required 33 TPUv3 core-years to train their Noisy Student EfficientNet-L2.

Initially, the authors jointly trained an image CNN and a text transformer from scratch to predict the caption of an image. However, this approach turned out to be highly compute-intensive. A 63-million-parameter transformer language model, which already uses twice the compute of its ResNet-50 image encoder, learned to recognize ImageNet classes three times slower than a much simpler baseline that predicts a bag-of-words encoding of the same text.

On further inspection, this approach was found to be inefficient because of what the transformer was expected to predict. It had to reproduce the exact text of the hashtags/comments accompanying each image, a very hard task given the wide variety of descriptions that can accompany any image, rather than simply learning which text belongs with which image and letting the model focus on the important visual information.

To overcome this, a contrastive objective was adopted, which improved the efficiency of the CLIP model by a further 4x. Concretely, say we are given a batch of N (image, text) training pairs. The CLIP model consists of a text encoder and an image encoder that map textual and visual information into a shared multimodal embedding space. The model is trained to maximize the cosine similarity between the embeddings of the N pairs that actually belong together, while minimizing the similarity of the N^2 - N pairings that do not occur together.

This will make more sense once you go through the Python snippet below.
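The following is a minimal PyTorch sketch of this symmetric contrastive loss, mirroring the pseudocode given in the CLIP paper; the function name clip_contrastive_loss and the tensor shapes are illustrative rather than taken from the official implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, logit_scale):
    """Symmetric contrastive loss over a batch of N aligned (image, text) pairs.

    image_features, text_features: [N, d] embeddings from the two encoders.
    logit_scale: scalar tensor holding the learned (log) temperature.
    """
    # L2-normalize so that dot products become cosine similarities
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # N x N matrix of scaled pairwise cosine similarities
    logits = logit_scale.exp() * image_features @ text_features.t()

    # The i-th image belongs with the i-th text, so the correct "class"
    # for every row and every column is the diagonal entry
    targets = torch.arange(logits.shape[0], device=logits.device)
    loss_images = F.cross_entropy(logits, targets)      # match each image to its text
    loss_texts = F.cross_entropy(logits.t(), targets)   # match each text to its image
    return (loss_images + loss_texts) / 2
```

In the actual model, logit_scale is a learned parameter initialized to the equivalent of a temperature of 0.07.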

As you can see, contrastive pre-training maximizes the cosine similarity of the embeddings along the diagonal of the N × N similarity matrix, since those entries correspond to the true (image, text) pairs.

In the second figure, the CLIP model can be seen in action at inference time: it correctly predicts "dog" because the caption containing the word dog has the highest similarity with the image's visual features.
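As a usage example, this is roughly how such a zero-shot prediction looks with the openai/CLIP Python package; the image path dog.jpg and the candidate captions are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate captions; the label set and image path are placeholders
captions = ["a photo of a dog", "a photo of a cat", "a photo of a bird"]
image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    # Scaled cosine similarities between the image and every caption
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(captions[probs.argmax().item()])  # expected: "a photo of a dog"
```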

Model Architecture

Here, the authors used two different backbones for the image encoder, a modified ResNet-50 and a Vision Transformer (ViT), and a Transformer as the backbone for the text encoder.

The largest ResNet model, RN50x64, took 18 days to train on 592 V100 GPUs while the largest Vision Transformer took 12 days on 256 V100 GPUs. 
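For reference, the checkpoints released for both backbone families can be listed with the openai/CLIP Python package:

```python
import clip

# Names of the released checkpoints, covering both backbone families
# (ResNet variants such as "RN50" and Vision Transformers such as "ViT-B/32")
print(clip.available_models())
```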

Let us understand the code for the CLIP model function by function to gain a better insight into the model architecture. 

The model is instantiated and all necessary attributes are assigned by the constructor call. By passing the vision_layers argument as a tuple or list, we select the ResNet architecture as the backbone of the visual representation encoder; in any other case, the model instantiates the Vision Transformer as the backbone. embed_dim defines the dimensionality of the joint embedding space, and the width and layer parameters specify the width and number of layers of the respective backbone networks.
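The sketch below is an abridged version of the constructor, simplified from model.py in the openai/CLIP repository; ModifiedResNet, VisionTransformer, and Transformer are the backbone classes defined elsewhere in that file, and some arguments are trimmed for readability.

```python
from typing import Tuple, Union
import numpy as np
import torch
from torch import nn

class CLIP(nn.Module):
    def __init__(self,
                 embed_dim: int,          # size of the joint embedding space
                 # vision
                 image_resolution: int,
                 vision_layers: Union[Tuple[int, int, int, int], int],
                 vision_width: int,
                 vision_patch_size: int,
                 # text
                 context_length: int,
                 vocab_size: int,
                 transformer_width: int,
                 transformer_heads: int,
                 transformer_layers: int):
        super().__init__()
        self.context_length = context_length

        if isinstance(vision_layers, (tuple, list)):
            # A tuple/list of stage depths selects the modified ResNet backbone
            self.visual = ModifiedResNet(layers=vision_layers,
                                         output_dim=embed_dim,
                                         heads=vision_width * 32 // 64,
                                         input_resolution=image_resolution,
                                         width=vision_width)
        else:
            # A single integer selects the Vision Transformer backbone
            self.visual = VisionTransformer(input_resolution=image_resolution,
                                            patch_size=vision_patch_size,
                                            width=vision_width,
                                            layers=vision_layers,
                                            heads=vision_width // 64,
                                            output_dim=embed_dim)

        # Text encoder: a Transformer over BPE tokens (the repository also passes
        # a causal attention mask here, omitted in this sketch)
        self.transformer = Transformer(width=transformer_width,
                                       layers=transformer_layers,
                                       heads=transformer_heads)
        self.token_embedding = nn.Embedding(vocab_size, transformer_width)
        self.positional_embedding = nn.Parameter(torch.empty(context_length, transformer_width))
        self.ln_final = nn.LayerNorm(transformer_width)

        # Projection into the joint embedding space and the learned temperature
        self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
        self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))

        self.initialize_parameters()
```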

In this function, we simply initialize the parameters of the backbone networks. Note that no pretrained weights are assigned to the backbones; both encoders are trained from scratch.
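A trimmed sketch of what this initialization looks like (the ResNet-specific initialization in the repository is omitted here):

```python
    def initialize_parameters(self):
        # Plain Gaussian initialization of the text-side parameters; nothing here
        # loads pretrained weights
        nn.init.normal_(self.token_embedding.weight, std=0.02)
        nn.init.normal_(self.positional_embedding, std=0.01)
        # The text projection is scaled by the transformer width
        nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
```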

Running a forward pass on the image encoder
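Continuing the sketch, the image-side forward pass is a single call, since the chosen backbone already projects its output into the joint embedding space (the dtype casts used in the repository are dropped here):

```python
    def encode_image(self, image):
        # The visual backbone (ModifiedResNet or VisionTransformer) maps the image
        # directly to an embed_dim-dimensional feature vector
        return self.visual(image)
```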

Running a forward pass on the text encoder
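The text-side forward pass, again simplified (the repository additionally permutes to a sequence-first layout and applies a causal attention mask):

```python
    def encode_text(self, text):
        x = self.token_embedding(text)      # [batch, context_length, transformer_width]
        x = x + self.positional_embedding
        x = self.transformer(x)
        x = self.ln_final(x)
        # Take the features at the end-of-text token (the highest token id in CLIP's
        # BPE vocabulary) and project them into the joint embedding space
        x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
        return x
```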

The forward pass of the CLIP model runs the image and text encoders, L2-normalizes the resulting embeddings, and then computes the scaled pairwise cosine similarities, which are returned as logits.
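The corresponding forward method, closely following the repository:

```python
    def forward(self, image, text):
        image_features = self.encode_image(image)
        text_features = self.encode_text(text)

        # Normalize so that dot products become cosine similarities
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)

        # Scaled pairwise cosine similarities, returned as logits
        logit_scale = self.logit_scale.exp()
        logits_per_image = logit_scale * image_features @ text_features.t()
        logits_per_text = logits_per_image.t()
        return logits_per_image, logits_per_text
```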

Conclusion

CLIP is highly effective at learning visual representations from the freely available, massive corpus of text data paired with images. When giant neural networks are trained on this much data, zero-shot capabilities tend to emerge; the model was able to recognize classes that were never part of any task-specific training set. By combining a contrastive objective with the Vision Transformer, OpenAI has developed a highly robust and compute-efficient model.

Furthermore, in quantitative experiments spanning 30 different datasets, covering tasks such as OCR, geo-localization, and action recognition, the authors found that CLIP is significantly more flexible than the current state of the art. The best CLIP model outperformed the best ImageNet model on 20 of the 26 datasets tested by the team.

On the other hand, CLIP also has its limitations. It struggles with slightly more complex tasks such as counting the number of objects in an image, estimating how far an object is from the camera (it has little sense of depth), and distinguishing between similar objects. And even though it shows strong zero-shot OCR performance, it classifies the handwritten digits of MNIST at only 88% accuracy.

Finally, we can conclude that CLIP is groundbreaking work in that it reduces the effort of collecting well-annotated image datasets for image classification. Since it does not require task-specific training data, it can keep being fed massive amounts of raw text paired with images and gradually improve at an ever wider range of tasks.