SwAV

Content

  1. Why SwAV?

  2. SwAV Loss

  3. SwAV Computing Codes Online

  4. Sinkhorn-Knopp Algorithm

  5. Why SwAV Works

Why SwAV?

About

  1. SwAV (Swapping Assignments between multiple Views of the same image) is introduced in Unsupervised Learning of Visual Features by Contrasting Cluster Assignments ([CMM+20]).

  2. SwAV achieves 75.3% top-1 accuracy on ImageNet with ResNet-50 for linear models trained on frozen features, only 1.2% below the supervised method (76.5%), closing the gap between unsupervised and supervised representation learning of visual features.

  3. SwAV is an unsupervised contrastive learning method that simultaneously clusters the data while enforcing consistency between the cluster assignments produced for different augmentations (or “views”) of the same image. It uses a “swapped” prediction mechanism: the code of one view is predicted from the representation of another view.

  4. SwAV can be trained with large and small batches and can scale to unlimited amounts of data. SwAV also does not require a large memory bank or a special momentum network, and is therefore more memory efficient than previous contrastive methods.

  5. SwAV also introduces a new augmentation, “multi-crop”, that increases the number of views with no computational or memory overhead.

Motive

  1. Recent contrastive learning methods for unsupervised visual representation learning use:

    1. A contrastive loss that compares pairs of images to push away semantically different images while pulling together semantically similar images

    2. Image augmentations that define invariances encoded in the features.

  2. Instance discrimination (contrasting instances of image views) is not practical for all pairwise comparisons on a large dataset. Clustering-based methods thus approximate the task by discriminating between clusters of images with similar features instead of between individual images.

  3. However, current clustering-based methods are computationally inefficient.

  4. Therefore, SwAV proposes a scalable, online clustering-based method, and a “swapped” prediction problem to learn visual features.

Reference: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (Caron et al., NeurIPS 2020)

SwAV Loss

SwAV can be interpreted as contrasting multiple image views by comparing their cluster assignments instead of their features. SwAV does this by computing a code from one view and predicting that code from the other view.

  1. Given an image $x$, augment it and obtain the image features $\mathbf{z}_t, \mathbf{z}_s$ of the two views.

  2. Match the image features $\mathbf{z}_t, \mathbf{z}_s$ to the prototypes $\{\mathbf{c}_1, \dots, \mathbf{c}_K\}$ to compute codes $\mathbf{q}_t, \mathbf{q}_s$.

  3. Set up the “swapped” prediction problem with the loss function $L(\mathbf{z}_t, \mathbf{z}_s) = \ell(\mathbf{z}_t, \mathbf{q}_s) + \ell(\mathbf{z}_s, \mathbf{q}_t)$, where $\ell(\mathbf{z}_t, \mathbf{q}_s) = -\sum_k \mathbf{q}_s^{(k)} \log \mathbf{p}_t^{(k)}$ measures the fit between features $\mathbf{z}$ and a code $\mathbf{q}$, with $\mathbf{p}_t^{(k)} = \frac{\exp\left(\frac{1}{\tau} \mathbf{z}_t^\top \mathbf{c}_k\right)}{\sum_{k'} \exp\left(\frac{1}{\tau} \mathbf{z}_t^\top \mathbf{c}_{k'}\right)}$. (A NumPy sketch of this loss follows the list.)

  4. Taking this loss over all images and pairs of data augmentations leads to: $$-\frac{1}{N} \sum_{n=1}^{N} \sum_{s, t \sim \mathcal{T}} \left[ \frac{1}{\tau} \mathbf{z}_{nt}^\top \mathbf{C} \mathbf{q}_{ns} + \frac{1}{\tau} \mathbf{z}_{ns}^\top \mathbf{C} \mathbf{q}_{nt} - \log \sum_{k=1}^{K} \exp\left(\frac{\mathbf{z}_{nt}^\top \mathbf{c}_k}{\tau}\right) - \log \sum_{k=1}^{K} \exp\left(\frac{\mathbf{z}_{ns}^\top \mathbf{c}_k}{\tau}\right) \right]$$

  5. This loss is minimized with respect to the prototypes $\mathbf{C}$ and the parameters $\theta$ of the image encoder $f_\theta$.
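
The following is a minimal NumPy sketch of the per-pair swapped loss $L(\mathbf{z}_t, \mathbf{z}_s)$. The helper names (`prototype_probs`, `swapped_loss`) and the toy numbers are made up for illustration, and the codes $\mathbf{q}$ used here are simple stand-ins; in SwAV the codes come from the Sinkhorn-Knopp step described below.

import numpy as np

def prototype_probs(z, C, temperature=0.1):
    """Softmax of z^T c_k / temperature over the K prototypes (the p^(k) above)."""
    logits = z @ C / temperature              # (K,) similarity to each prototype
    logits -= logits.max()                    # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def swapped_loss(z_t, z_s, q_t, q_s, C, temperature=0.1):
    """L(z_t, z_s) = l(z_t, q_s) + l(z_s, q_t), each term a cross-entropy
    between a code q and the prototype prediction p."""
    l_t = -np.sum(q_s * np.log(prototype_probs(z_t, C, temperature)))
    l_s = -np.sum(q_t * np.log(prototype_probs(z_s, C, temperature)))
    return l_t + l_s

# Toy example: D = 4 feature dimensions, K = 3 prototypes.
rng = np.random.default_rng(0)
C = rng.normal(size=(4, 3))                        # columns are the prototypes c_k
z_t, z_s = rng.normal(size=4), rng.normal(size=4)  # features of the two views
q_t = prototype_probs(z_t, C, temperature=0.5)     # stand-in codes for illustration
q_s = prototype_probs(z_s, C, temperature=0.5)
print(swapped_loss(z_t, z_s, q_t, q_s, C))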

SwAV Computing Codes Online

SwAV assigns instances to the prototypes $\mathbf{C}$ and computes codes using the prototypes $\mathbf{C}$ such that all instances are equally partitioned among the prototypes.

  1. Given image features $\mathbf{Z} = [\mathbf{z}_1, \dots, \mathbf{z}_B]$, map them to the prototypes $\mathbf{C} = [\mathbf{c}_1, \dots, \mathbf{c}_K]$, where the mapping (codes) is denoted by $\mathbf{Q} = [\mathbf{q}_1, \dots, \mathbf{q}_B]$.

  2. $\mathbf{Q}$ is optimized to maximize the similarity between features and prototypes via $\max_{\mathbf{Q} \in \mathcal{Q}} \operatorname{Tr}(\mathbf{Q}^\top \mathbf{C}^\top \mathbf{Z}) + \varepsilon H(\mathbf{Q})$, where $H(\mathbf{Q}) = -\sum_{ij} \mathbf{Q}_{ij} \log \mathbf{Q}_{ij}$ is the entropy function. (A small NumPy sketch of this objective follows the list.)

  3. For example, with two prototypes and three feature vectors, $$\mathbf{C}^\top \mathbf{Z} = \begin{bmatrix} \mathbf{c}_1^\top \\ \mathbf{c}_2^\top \end{bmatrix} \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2 & \mathbf{z}_3 \end{bmatrix} = \begin{bmatrix} \mathbf{c}_1^\top \mathbf{z}_1 & \mathbf{c}_1^\top \mathbf{z}_2 & \mathbf{c}_1^\top \mathbf{z}_3 \\ \mathbf{c}_2^\top \mathbf{z}_1 & \mathbf{c}_2^\top \mathbf{z}_2 & \mathbf{c}_2^\top \mathbf{z}_3 \end{bmatrix}$$

  4. $$\operatorname{Tr}\left( \begin{bmatrix} q_{11} & q_{21} \\ q_{12} & q_{22} \\ q_{13} & q_{23} \end{bmatrix} \left[\mathbf{C}^\top \mathbf{Z}\right] \right) = q_{11} \mathbf{c}_1^\top \mathbf{z}_1 + q_{21} \mathbf{c}_2^\top \mathbf{z}_1 + q_{12} \mathbf{c}_1^\top \mathbf{z}_2 + q_{22} \mathbf{c}_2^\top \mathbf{z}_2 + q_{13} \mathbf{c}_1^\top \mathbf{z}_3 + q_{23} \mathbf{c}_2^\top \mathbf{z}_3$$

  5. The entries of $\mathbf{Q}$ are continuous values between 0 and 1: an entry is close to 1 when a representation $\mathbf{z}$ is close to its prototype vector, and close to 0 otherwise. This is because optimizing $\mathbf{Q}$ to maximize the trace $\operatorname{Tr}(\mathbf{Q}^\top \mathbf{C}^\top \mathbf{Z})$ places the largest values of $q$ on the dot products where $\mathbf{c}$ and $\mathbf{z}$ are most aligned.

  6. However, the maximization of the values of $q$ is regularized by $H(\mathbf{Q}) = -\sum_{ij} \mathbf{Q}_{ij} \log \mathbf{Q}_{ij}$. The closer $\mathbf{Q}_{ij}$ is to 0, the larger $-\log \mathbf{Q}_{ij}$ becomes, and $-\mathbf{Q}_{ij} \log \mathbf{Q}_{ij}$ is maximized at an intermediate value between 0 and 1. A higher entropy weight therefore gives a more homogeneous distribution of $\mathbf{Q}$.

  7. Q is optimized using the Sinkhorn-Knopp algorithm.
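
A small NumPy sketch of the objective $\operatorname{Tr}(\mathbf{Q}^\top \mathbf{C}^\top \mathbf{Z}) + \varepsilon H(\mathbf{Q})$, mirroring the two-prototype, three-feature example above. The values are made up, and the uniform $\mathbf{Q}$ is only a placeholder; SwAV finds $\mathbf{Q}$ over the transportation polytope with the Sinkhorn-Knopp algorithm (next section).

import numpy as np

def swav_objective(Q, C, Z, eps=0.05):
    """Tr(Q^T C^T Z) + eps * H(Q), the quantity maximised over Q."""
    similarity = np.trace(Q.T @ (C.T @ Z))   # sum_kb Q_kb * (c_k^T z_b)
    entropy = -np.sum(Q * np.log(Q))         # H(Q)
    return similarity + eps * entropy

# Toy example: D = 4 feature dimensions, K = 2 prototypes, B = 3 features.
rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 3))                  # columns are z_1, z_2, z_3
C = rng.normal(size=(4, 2))                  # columns are c_1, c_2
Q = np.full((2, 3), 1.0 / 6)                 # uniform (maximum-entropy) codes
print(swav_objective(Q, C, Z))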

Reference: SwAV Loss Deep Dive by Ananya Harsh Jha

Sinkhorn-Knopp Algorithm

The goal of optimal transport is to transform one probability distribution into another with minimal cost.

  1. For example, let us allocate desserts to people according to preferences while constraining portion sizes.

  2. Let $r = (3, 3, 3, 4, 2, 2, 2, 1)$ be the portion size of dessert each of the $n = 8$ people can eat ($n$-dimensional).

  3. Let $c = (4, 2, 6, 4, 4)$ be the amount of each of the $m = 5$ desserts available ($m$-dimensional).

  4. Let $U(r, c) = \{P \in \mathbb{R}_{>0}^{n \times m} \mid P \mathbf{1}_m = r,\ P^\top \mathbf{1}_n = c\}$ be the set of positive $n \times m$ matrices whose rows sum to $r$ and whose columns sum to $c$, i.e. the set of ways of allocating desserts to the people.

  5. Let M be the (n×m) cost (negative preference) matrix.

  6. The optimal transport problem is formally posed as $d_M(r, c) = \min_{P \in U(r, c)} \sum_{i,j} P_{ij} M_{ij}$, the optimal transport between $r$ and $c$.

  7. The Sinkhorn distance is $d_M^\lambda(r, c) = \min_{P \in U(r, c)} \sum_{i,j} P_{ij} M_{ij} - \frac{1}{\lambda} h(P)$, where $h(P) = -\sum_{i,j} P_{ij} \log P_{ij}$ is the information entropy of $P$, which acts as a regularizer.

  8. The Sinkhorn-Knopp algorithm is an efficient method to obtain the optimal distribution matrix $P^\lambda$ and the associated $d_M^\lambda(r, c)$, based on the fact that the elements of the optimal matrix have the form $(P^\lambda)_{ij} = \alpha_i \beta_j e^{-\lambda M_{ij}}$, where $\alpha_1, \dots, \alpha_n$ and $\beta_1, \dots, \beta_m$ are constants chosen so that the rows and columns sum to $r$ and $c$ respectively.

  9. The Sinkhorn-Knopp algorithm is basically:

    1. Initialise $P^\lambda = e^{-\lambda M}$.

    2. Repeat 3-4 until convergence:

    3. Scale the rows such that row sums match r.

    4. Scale the columns such that column sums match c.

Reference: Notes on Optimal Transport by Michiel Stock

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

r = np.array([3,3,3,4,2,2,2,1])  # amount each of the 8 people can eat
c = np.array([4,2,6,4,4])  # portions of the 5 desserts available
M = np.array([[2,2,1,0,0],
              [0,-2,-2,-2,2],
              [1,2,2,2,-1],
              [2,1,0,1,-1],
              [0.5,2,2,1,0],
              [0,1,1,1,-1],
              [-2,2,2,1,1],
              [2,1,2,1,-1]])  # (n x m) preferences matrix
M = -M  # converting preferences to cost
"""
Sinkhorn-Knopp Algorithm
"""
def sinkhorn_knopp(M, r, c, lam):
    n, m = M.shape
    P = np.exp(- lam * M)
    P /= P.sum()
    u = np.zeros(n)

    while np.max(np.abs(u - P.sum(1))) > 1e-8:  
        u = P.sum(1)
        P *= (r / u).reshape((-1,1))
        P *= (c / P.sum(0)).reshape((1,-1))
    return P
P = sinkhorn_knopp(M, r, c, lam=10)

pd.DataFrame(P).plot.bar(stacked=True)
plt.show()
[Figure: stacked bar chart of the optimal allocation P, one bar per person, stacked by dessert.]
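
As a quick sanity check, the row and column sums of the returned P should approximately recover the marginals r and c:

print(np.round(P.sum(axis=1), 6))  # row sums, should match r
print(np.round(P.sum(axis=0), 6))  # column sums, should match c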

Why SwAV Works

  1. The SwAV authors re-implemented and improved previous clustering-based models to compare with SwAV.

  2. DeepCluster-v2 obtains 75.2% top-1 accuracy on ImageNet versus 75.3% for SwAV.

  3. However, DeepCluster-v2 is not online, making it impractical for extremely large datasets, e.g. billion-scale training that sometimes uses only a single training epoch.

  4. SwAV, in contrast, works online and can therefore scale to unlimited amounts of data.

CMM+20

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.