@nprithviraj24
Created September 24, 2020 05:49
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Humpback Whale challenge -- RESULTS!\n",
"\n",
"Proposed solution to solve [this challenge](https://www.kaggle.com/c/humpback-whale-identification).\n",
"\n",
"Prithvi Raju [email](nprihviraj24@gmail.com)\n",
"\n",
"[Github Project](https://github.com/nprithviraj24/deep-learning/tree/master/few-shot-learning) <br />"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### ArXiv links to Reference papers: <br />\n",
"\n",
"[In Defense of the Triplet Loss for Person Re-Identification](https://arxiv.org/abs/1703.07737) <br /> \n",
"\n",
"[FaceNet](https://arxiv.org/abs/1503.03832)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<strong>Objective<strong>: identify individual whales in images. \n",
" \n",
"Constituents of dataset: \n",
"\n",
" - train.zip - a folder containing the training images\n",
" - train.csv - maps the training Image to the appropriate whale Id. Whales that are not predicted to have a label identified in the training data should be labeled as new_whale.\n",
" - test.zip - a folder containing the test images to predict the whale Id\n",
" - sample_submission.csv - a sample submission file in the correct format\n",
" \n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluating the data\n",
"\n",
"Before building the model, lets understand the data first. "
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Number of training samples: (9850,)\n",
" Number of unique whales with pictures more than one: new_whale 810\n",
"w_1287fbc 34\n",
"w_98baff9 27\n",
"w_7554f44 26\n",
"w_1eafe46 23\n",
" ... \n",
"w_80c692d 2\n",
"w_eb44149 2\n",
"w_73cbacd 2\n",
"w_17a3581 2\n",
"w_0466071 2\n",
"Name: Id, Length: 2031, dtype: int64\n"
]
}
],
"source": [
"import pandas as pd\n",
"file = pd.read_csv('train.csv')\n",
"ids = file.Id\n",
"print(\" Number of training samples: \", ids.shape)\n",
"uni = file.Id.value_counts()\n",
"gt1 = uni[uni>1]\n",
"print(\" Number of unique whales with pictures more than one: \", gt1)\n"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of images in train folder: 9850\n"
]
},
{
"data": {
"text/plain": [
"Image 000466c4.jpg\n",
"Id w_1287fbc\n",
"Name: 1, dtype: object"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"path, dirs, files = next(os.walk(\"train/train\"))\n",
"file_count = len(files)\n",
"print(\"Number of images in train folder: \", file_count)\n",
"\n",
"file.loc[1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can conclude that there are 4251 classes for only 9850 images. Most of the \"class\" have only one training image."
]
},
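{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check of the class distribution (a minimal sketch, re-reading ```train.csv``` exactly as in the earlier cell): it counts the distinct whale Ids and how many of them occur only once."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick check of the class distribution in train.csv (path as used above)\n",
"import pandas as pd\n",
"\n",
"df = pd.read_csv('train.csv')\n",
"counts = df.Id.value_counts()\n",
"print('Total images       :', len(df))\n",
"print('Distinct whale Ids :', counts.shape[0])\n",
"print('Ids with one image :', (counts == 1).sum())\n",
"print('Ids with >1 image  :', (counts > 1).sum())"
]
},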
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Few-shot learning using CNN block from Siamese network and Batch hard strategy for optimization\n",
"\n",
"<br />\n",
"\n",
"<h2> Pipeline to create a maching network<h2>\n",
"\n",
"> Data preprocessing and augmentation \n",
"\n",
" I will be using preprocessing steps done in popular notebooks. I will try to reason why certain preprocessing steps are crucial.\n",
"\n",
"> Matching Network\n",
" \n",
"A method used to represent discrete variables in data manifold $ \\mathbb{R} $ as continuous vectors.\n",
" \n",
">> Build an Encoder Network \n",
"\n",
">> Generate \"image\" embeddings \n",
"\n",
">> Pairwise distance between query samples and support sets.\n",
"\n",
">> Calculating predictions by taking weighted average of the support set labels with the normalised distance.\n",
"\n",
"\n",
"> Batch hard strategy for addressing loss functions \n",
"\n",
" By using Online triplet mining.\n",
"\n",
"\n"
]
},
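{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a minimal sketch of the matching-network prediction step outlined above: pairwise (cosine) similarity between query and support embeddings, normalised into weights, then a weighted average of the support-set labels. Names such as ```encoder```, ```support_x``` and ```matching_predict``` are placeholders for illustration, not functions from ```train.py```."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch of matching-network style prediction (placeholder names)\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"def matching_predict(encoder, support_x, support_y, query_x, n_classes):\n",
"    # Encode support and query images into the embedding space\n",
"    s_emb = F.normalize(encoder(support_x), dim=1)     # (S, D)\n",
"    q_emb = F.normalize(encoder(query_x), dim=1)       # (Q, D)\n",
"    # Pairwise cosine similarity between query and support embeddings\n",
"    sims = q_emb @ s_emb.t()                           # (Q, S)\n",
"    # Normalise the similarities into attention weights over the support set\n",
"    attn = F.softmax(sims, dim=1)                      # (Q, S)\n",
"    # Weighted average of one-hot support labels gives class probabilities\n",
"    one_hot = F.one_hot(support_y, n_classes).float()  # (S, C)\n",
"    return attn @ one_hot                              # (Q, C)"
]
},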
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Building the model, I incorporate the ResNet block, wonderfully explained in this [blog](http://teleported.in/posts/decoding-resnet-architecture/) post. \n",
"\n",
"Why ResNet?\n",
"\n",
"The rule of thumb suggests that ResNet has has worked in this case.\n",
"\n",
"`` The idea is to form a subblock with a 1x1 convolution reducing the number of features, a 3x3 convolution and another 1x1 convolution to restore the number of features to the original. The output of these convolutions is then added to the original tensor (bypass connection). I use 4 such subblocks by block, plus a single 1x1 convolution to increase the feature count after each pooling layer. ``\n",
"\n",
"\n",
"\n",
"### PyTorch model creation\n",
"\n",
"The branch model is composed of 6 blocks, each block processing maps with smaller and smaller resolution,, with intermediate pooling layers.\n",
"\n",
" Block 1 - 384x384\n",
" Block 2 - 96x96\n",
" Block 3 - 48x48\n",
" Block 4 - 24x24\n",
" Block 5 - 12x12\n",
" Block 6 - 6x6\n",
"\n",
"> Block 1 has a single convolution layer with stride 2 followed by 2x2 max pooling. Because of the high resolution, it uses a lot of memory, so a minimum of work is done here to save memory for subsequent blocks.\n",
"\n",
"> Block 2 has two 3x3 convolutions similar to VGG. These convolutions are less memory intensive then the subsequent ResNet blocks, and are used to save memory. Note that after this, the tensor has dimension 96x96x64, the same volume as the initial 384x384x1 image, thus we can assume no significant information has been lost.\n",
"\n",
"> Blocks 3 to 6 perform ResNet like convolution. I suggest reading the original paper, but the idea is to form a subblock with a 1x1 convolution reducing the number of features, a 3x3 convolution and another 1x1 convolution to restore the number of features to the original. The output of these convolutions is then added to the original tensor (bypass connection). I use 4 such subblocks by block, plus a single 1x1 convolution to increase the feature count after each pooling layer.\n",
"\n",
"> The final step of the branch model is a global max pooling, which makes the model robust to fluke not being always well centered.\n",
"\n",
"\n",
"I'm acknowledging this [notebook](https://www.kaggle.com/martinpiotte/whale-recognition-model-with-score-0-78563) because I incorporate few preprocessing techniques to make it work. When it comes to data preprocessing, I look what rule of thumb suggests because they make life so much easier. "
]
},
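{
"cell_type": "markdown",
"metadata": {},
"source": [
"A rough PyTorch sketch of the subblock quoted above (1x1 reduce, 3x3, 1x1 restore, plus a bypass connection). The channel counts and the ReLU placement here are illustrative assumptions, not the original notebook's model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough sketch of one ResNet-style bottleneck subblock with a bypass connection\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"class BottleneckSubblock(nn.Module):\n",
"    def __init__(self, channels, reduced):\n",
"        super().__init__()\n",
"        self.body = nn.Sequential(\n",
"            nn.Conv2d(channels, reduced, kernel_size=1),             # 1x1: reduce features\n",
"            nn.ReLU(inplace=True),\n",
"            nn.Conv2d(reduced, reduced, kernel_size=3, padding=1),   # 3x3\n",
"            nn.ReLU(inplace=True),\n",
"            nn.Conv2d(reduced, channels, kernel_size=1),             # 1x1: restore features\n",
"        )\n",
"        self.relu = nn.ReLU(inplace=True)\n",
"\n",
"    def forward(self, x):\n",
"        # Bypass connection: add the subblock output to the original tensor\n",
"        return self.relu(x + self.body(x))"
]
},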
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Other interesting methods:\n",
"\n",
"<br />\n",
"\n",
"#### Generic One shot siamese network.\n",
"[Refer this](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=2ahUKEwjq1YPx57_mAhWd63MBHX7HAEAQFjAAegQIARAC&url=https%3A%2F%2Fwww.cs.cmu.edu%2F~rsalakhu%2Fpapers%2Foneshot1.pdf&usg=AOvVaw0gKET0McCdIoco9UX2KcsE)\n",
"- Learning a vector representation of a complex input, like an image, is an example of dimensionality reduction. \n",
"- Taking <strong> Contrastive loss </strong> where Distance between two embeddings of similar class are optimized by bringing it closer. \n"
]
},
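{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the contrastive loss mentioned above, assuming a pair of embeddings and a binary same-class label: same-class pairs are pulled together, different-class pairs are pushed apart by a margin."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of contrastive loss over a pair of embeddings\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"def contrastive_loss(emb1, emb2, same_class, margin=1.0):\n",
"    # same_class: tensor of 1s (same whale) and 0s (different whales)\n",
"    d = F.pairwise_distance(emb1, emb2)\n",
"    # Same-class pairs: penalise distance; different-class pairs: penalise being within the margin\n",
"    loss = same_class * d.pow(2) + (1 - same_class) * F.relu(margin - d).pow(2)\n",
"    return loss.mean()"
]
},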
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Defining the Encoder Network Architecture \n",
"\n",
"I've tried to keep everything succinct and detailed. \n",
"\n",
"An encoder CNN architecture. Instead of an MLP, which uses linear, fully-connected layers, we can instead use:\n",
"* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as stack of filtered images.\n",
"* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.\n",
"* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.\n",
"* Batch Normalization layer: The motivation behind it is purely statistical: it has been shown that normalized data, i.e., data with zero mean and unit variance, allows networks to converge much faster. So we want to normalize our mini-batch data, but, after applying a convolution, our data may not still have a zero mean and unit variance anymore. So we apply this batch normalization after each convolutional layer.\n",
"\n",
"### What about selecting the right kernel size?\n",
"We always prefer to use smaller filters, like 3×3 or 5×5 or 7×7, but which ones of theses works the best? \n",
"\n",
"<br />\n",
"\n",
"\n",
"#### Now it is seldom used in practice to create your own encoder network uniquely from scratch because there are so many architecture that will do the job. And these architectures are implemented in different frameworks.\n",
"\n",
"Example: \n",
"\n",
"ResNet18, VGG-16 etc. Slight modification to these networks will do the job. I used ResNet variations with a dense layer connected at the end.\n",
"\n",
"#### A Sample encoder architecture from famous notebook in the same challenge is given below\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"import torchvision\n",
"import torch.utils.data as utils\n",
"from torchvision import datasets\n",
"import torchvision.transforms as transforms\n",
"from torch.optim import lr_scheduler\n",
"import os\n",
"from PIL import Image\n",
"import torch\n",
"from torch.autograd import Variable\n",
"import PIL.ImageOps \n",
"import torch.nn as nn\n",
"from torch import optim\n",
"import torch.nn.functional as F\n",
"from torch.utils.data import DataLoader,Dataset\n",
"from torch.autograd import Variable\n",
"import matplotlib.pyplot as plt\n",
"import torchvision.utils\n",
"\n",
"class SiameseNetwork(nn.Module):\n",
" def __init__(self):\n",
" super(SiameseNetwork, self).__init__()\n",
" \n",
" # Setting up the Sequential of CNN Layers\n",
" self.cnn1 = nn.Sequential(\n",
" \n",
" nn.Conv2d(1, 96, kernel_size=11,stride=1),\n",
" nn.ReLU(inplace=True),\n",
" nn.LocalResponseNorm(5,alpha=0.0001,beta=0.75,k=2),\n",
" nn.MaxPool2d(3, stride=2),\n",
" \n",
" nn.Conv2d(96, 256, kernel_size=5,stride=1,padding=2),\n",
" nn.ReLU(inplace=True),\n",
" nn.LocalResponseNorm(5,alpha=0.0001,beta=0.75,k=2),\n",
" nn.MaxPool2d(3, stride=2),\n",
" nn.Dropout2d(p=0.3),\n",
"\n",
" nn.Conv2d(256,384 , kernel_size=3,stride=1,padding=1),\n",
" nn.ReLU(inplace=True),\n",
" nn.Conv2d(384,256 , kernel_size=3,stride=1,padding=1),\n",
" nn.ReLU(inplace=True),\n",
" nn.MaxPool2d(3, stride=2),\n",
" nn.Dropout2d(p=0.3),\n",
"\n",
" )\n",
" \n",
" # Defining the fully connected layers\n",
" self.fc1 = nn.Sequential(\n",
" nn.Linear(30976, 1024),\n",
" nn.ReLU(inplace=True),\n",
" nn.Dropout2d(p=0.5),\n",
" \n",
" nn.Linear(1024, 128),\n",
" nn.ReLU(inplace=True),\n",
" \n",
" nn.Linear(128,2))\n",
" \n",
" def forward_once(self, x):\n",
" # Forward pass \n",
" output = self.cnn1(x)\n",
" output = output.view(output.size()[0], -1)\n",
" output = self.fc1(output)\n",
" return output\n",
"\n",
" def forward(self, input1, input2):\n",
" # forward pass of input as embedding\n",
" output1 = self.forward_once(input1)\n",
" return output1\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Please Note: This notebook only briefs about how I've build the model which is yet to be tested. Theoretically, this model should solve the problem optimally. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Approach to the solution:\n",
" \n",
"[You can find my solution is this paper as well.](https://github.com/nprithviraj24/deep-learning/blob/master/few-shot-learning/Few-shot-learning-with-online-triplet-mining.pdf)\n",
" \n",
" \n",
"To train our model, we need to first make it learn which class does a one belong to. \n",
" \n",
" - Initially we take an image (class/label Z), call it as anchor image. After encoding, we represent this image somewhere in Euclidean space with D dimensions, let's assume the location is A.\n",
" - We take the another image of same class Z, call it as positive. We represent this image somewhere in the same Euclidean space with D dimensions, say B.\n",
" - A different third image is picked with different class, say Y, and call it as negative, represent in same space at point C. The below picture captures Anchor, positie and negative beautifully.\n",
"\n",
"Now our objective is to train the model such that <strong>same</strong> class images should be close that different class images. In short, let's consider $d$ as function of distance (generally, $L_2$ because it gives the squared value), then\n",
"\n",
"<center>$d(anchor, negative) > d(anchor, postive)$</center>\n",
"and it should be at least by a margin. \n",
"\n",
" \n",
" \n",
"![Obama](obama.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Why Triplet loss?\n",
"\n",
"[Reference](https://www.youtube.com/watch?v=d2XB5-tuCWU)\n",
"\n",
"Rather than calculating loss based on two examples ( contrastive loss ), triplet loss involves an anchor example and one positive or matching example (same class) and one negative or non-matching example (differing class).\n",
"\n",
"The loss function penalizes the model such that the distance between the matching examples is reduced and the distance between the non-matching examples is increased. Explanation: \n",
"For some distance on the embedding space d, the loss of a triplet (a,p,n) is:\n",
"<center> $L=max(d(a,p)−d(a,n)+margin,0)$ </center>\n",
"\n",
"We minimize this loss, which pushes $d(a,p)$ to 0 and $d(a,n)$ to be greater than $d(a,p)+margin$. As soon as n becomes an “easy negative”, the loss becomes zero.\n",
"\n"
]
},
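{
"cell_type": "markdown",
"metadata": {},
"source": [
"The formula above translates almost line by line into PyTorch. The sketch below uses squared L2 distance and is only illustrative; PyTorch also ships ```nn.TripletMarginLoss```, which implements the same idea with (non-squared) L2 distance by default."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of the triplet loss formula using squared L2 distance\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"def triplet_loss(anchor, positive, negative, margin=0.2):\n",
"    d_ap = (anchor - positive).pow(2).sum(dim=1)   # d(a, p)\n",
"    d_an = (anchor - negative).pow(2).sum(dim=1)   # d(a, n)\n",
"    # max(d(a,p) - d(a,n) + margin, 0), averaged over the batch\n",
"    return F.relu(d_ap - d_an + margin).mean()"
]
},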
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### So how do I build such a loss function optimally?\n",
"\n",
"#### Keyword: Optimally.\n",
"\n",
"Let me first acknowledge this [paper](https://arxiv.org/abs/1703.07737). \n",
"\n",
" Images are now referred as embeddings.\n",
" \n",
"Before calculating the loss, we need sample only the relevant triplets (i.e anchor, positive and negative). To explain it much better, let's categorize the triplets in three different categories:\n",
"- Easy negative: The one where $d(negative, anchor) >> d(positive, anchor)$, if this is the case, $L$ will be zero (recall from previous cell). So implicitly, there wont be gradient that will be propagated backwards.\n",
"- Hard negative: The case where $d(negative, anchor) < d(positive, anchor.)$. This means that network performed poorly, and there will be a significant gradient calculated to modify the weights (based on optimization).\n",
"- Semi-hard: tiplets where the negative is not closer to the anchor than the positive, but which still have positive loss: $d(a,p)<d(a,n)<d(a,p)+margin$\n",
"\n",
"<br />\n",
"\n",
"##### Offline triplet mining\n",
"\n",
"- The first way to produce triplets is to find them offline, at the beginning of each epoch for instance. We compute all the embeddings on the training set, and then only select hard or semi-hard triplets. We can then train one epoch on these triplets.\n",
"- Concretely, we would produce a list of triplets (i,j,k). We would then create batches of these triplets of size B, which means we will have to compute 3B embeddings to get the B triplets, compute the loss of these B triplets and then backpropagate into the network.\n",
"- Overall this technique is not very efficient since we need to do a full pass on the training set to generate triplets. It also requires to update the offline mined triplets regularly.\n",
"\n",
"\n",
"##### Online triplet mining\n",
"\n",
"The idea here is to compute useful triplets on the fly, for each batch of inputs. Given a batch of B examples (for instance B images of faces), we compute the B embeddings and we then can find a maximum of B3 triplets. Of course, most of these triplets are not valid (i.e. they don’t have 2 positives and 1 negative).\n",
"\n",
"Suppose that you have a batch of whale flukes as input of size B=PK, composed of P different flukes with K images each. A typical value is K=4\n",
" \n",
". The two strategies are:\n",
"\n",
" batch all: \n",
" - select all the valid triplets, and average the loss on the hard and semi-hard triplets.\n",
" - a crucial point here is to not take into account the easy triplets (those with loss 0), as averaging on them would make the overall loss very small this produces a total of PK(K−1)(PK−K) triplets (PK anchors, K−1 possible positives per anchor, PK−K possible negatives)\n",
"\n",
" batch hard: \n",
" - for each anchor, select the hardest positive (biggest distance d(a,p)) and the hardest negative among the batch this produces PK triplets \n",
" - the selected triplets are the hardest among the batch\n",
"\n",
"\n",
"### Important Note:\n",
"\n",
"As a machine learning practitioner, it is believed in community that one way of implementing an algorithm on a dataset will not always yield a similar result for different dataset. So naturally, we should always explore different options. \n",
"\n",
"<br />\n",
"\n",
"Since, I prefer to write in PyTorch, there's an equivalent code for implementing batch-hard-strategy <strong>criterion</strong>."
]
},
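{
"cell_type": "markdown",
"metadata": {},
"source": [
"A hedged PyTorch sketch of the batch-hard strategy described above, assuming a batch of embeddings with integer labels (P identities x K images each). This is an illustration of the technique, not the exact criterion used in ```train.py```."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of batch-hard triplet mining over a batch of embeddings\n",
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"def batch_hard_triplet_loss(embeddings, labels, margin=0.2):\n",
"    # Pairwise L2 distances between all embeddings in the batch: (B, B)\n",
"    dist = torch.cdist(embeddings, embeddings, p=2)\n",
"    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # (B, B) same-label mask\n",
"    # Hardest positive per anchor: largest distance among same-label pairs\n",
"    # (distance to self is 0, so it never dominates when another positive exists)\n",
"    hardest_pos = dist.masked_fill(~same, 0.0).max(dim=1).values\n",
"    # Hardest negative per anchor: smallest distance among different-label pairs\n",
"    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values\n",
"    return F.relu(hardest_pos - hardest_neg + margin).mean()"
]
},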
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## UPDATE!!\n",
"The cells below will explain the code and approach on how this specific model is built in PyTorch. \n",
"\n",
"#### Please Note: It is assumed that this notebook is read along with ```train.py``` file."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Loading and DataLoader\n",
"\n",
"WhaleDataset class: Extended from torch's Dataset class.\n",
"<strong>\\__init__()</strong> and <strong>\\__getitem()__</strong> : are to be defined.\n",
"\n",
"### Splitting\n",
"Training data is further split into Training and Validation.\n",
"Validation: Where I only considered the data which had more than 3 classes. I only chose one of them to be part of validation.\n",
"\n",
"### Encoder:\n",
"Converting a (224,224) image to 500 D/1000 D needs a very deep convolutional neural network. Problem with Deep CNNs are vanishing gradient. So I had to chose a robust model such as ResNet which is immune to such complications. I have included a dense layer at the end to map the feature values to embedding spaces with __D__ dimensions.\n",
"\n",
"### Defining the Criterion (Loss) : TripletLossCosine\n",
"CosDistance = 1 - CosineSimilarity.\n",
"The class gets anchor, posities and negatives instances and TripletLoss is calculated using cosine Distance (which is 1 - CosSimilarity)\n",
"\n",
" def forward(self, anchor, positive, negative):\n",
" dist_to_positive = 1 - F.cosine_similarity(anchor, positive)\n",
" dist_to_negative = 1 - F.cosine_similarity(anchor, negative)\n",
" loss = F.relu(dist_to_positive - dist_to_negative + self.MARGIN)\n",
" loss = loss.mean()\n",
" return loss\n",
" \n",
" ### Train the model. \n",
" \n",
" ``` Please Note: Selections of hard positives and hard negatives doesn't comply with what I proposed earlier. It is often found that if we chose hard positives and negatives, we might achieve faster convergence but the model is aloso prove to get stuck at local minima (if we are dealing with non-convex loss function. ```\n",
" - Negative of an anchor is chosen randomly from all the images in training set that doesnt share the same label as the anchor.\n",
" - Positive of an anchor is chosen randomly from images sharing same label as the anchor. If there are none, then anchor's positive instance is itself.\n",
" \n",
" ### Hyperparameters: \n",
" Due to limited usage, I could only train 15 images on 5 different models.\n",
" The results of different models and hyperparameters are discussed in next cell.\n",
" \n",
" ### Testing\n",
" First of all, we calculate all the cosine_similarities of each test image with each training image. The idea is to select the closest (i.e near k neighbours) neighbours.\n",
" \n",
" sklearn 's cosine similarity function is used to calculate cosine similarities. This yields a 2D kernel matrix with (n_samples_X, n_samples_Y)\n",
" \n",
" To test on an image, ```FindTopK()``` is used to get it's nearest classes."
]
},
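{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the testing step described above: cosine similarity between test and train embeddings, then the top-k nearest training classes per test image. ```find_top_k``` below is an illustrative stand-in for the ```FindTopK()``` in ```train.py```, with assumed inputs (embedding matrices and a label list)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of test-time retrieval with sklearn's cosine_similarity\n",
"import numpy as np\n",
"from sklearn.metrics.pairwise import cosine_similarity\n",
"\n",
"def find_top_k(test_emb, train_emb, train_labels, k=5):\n",
"    # Kernel matrix of shape (n_test, n_train)\n",
"    sims = cosine_similarity(test_emb, train_emb)\n",
"    # Indices of the k most similar training images for each test image\n",
"    top_idx = np.argsort(-sims, axis=1)[:, :k]\n",
"    # Map indices back to whale Ids: result has shape (n_test, k)\n",
"    return np.asarray(train_labels)[top_idx]"
]
},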
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### RESULTS\n",
"\n",
"Please find this [Tensorboard](https://tensorboard.dev/experiment/oDvUYRgeTjWOJhowpJs58w/#scalars) for results.\n",
"\n",
"##### In Scalars: Runs signify:\n",
"\n",
"```res18-ep_15-TIME_2019-11-29 15:35:11.639278 ```\n",
"\n",
"<strong>res18</strong> >> architecture used. <br />\n",
"<strong>ep_15</strong> >> number of epochs. <br />\n",
"<strong>TIME_2019-11-29 15:35:11.639278</strong> >> Program initiation TIMESTAMP.\n",
"\n",
"\n",
"NOTE: If I explicity mention ```D-1000``` then it indicates that images are projected to 1000 D Euclidean space. Otherwise, 500D Euclidean space."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}