Title of Competition | Title of Writeup | User | Writeup
---|---|---|---|
Tweet Sentiment Extraction | 4th place solution overview | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!
Sorry for the confusion; what I wrote was probably not clear enough. I mean these steps:
1) <code>token_logits</code> - take output of the model's head which was trained to predict binary target for each token
2) <code>token_probas = self.sigmoid(token_logits)</code> - calculate sigmoid of them to get probabilities
3) <code>score_per_token = sum(np.log(token_probas[start:end+1])) / (end + 1 - start)</code> - get log and average.
And here we have some <code>score_per_token</code> for candidate in consideration. The larger the score the better the candidate.</p>
<p>Here the point is that the model predicts target with two classes: 0 and 1. 0 means that the candidate is wrong and 1 means that this candidate is correct. So I take the probability of the candidate to be correct for the given pair of tweet and sentiment.
Of course, one can do it with just a binary target, predicting a single number and taking the sigmoid of the output. Nothing wrong with that.</p>
<p>Yes, it does make values negative. This is for convenience mostly.
In general, to obtain the probability of an event which consists of several other independent events, one has to multiply their probabilities. But if you work with logarithms of probabilities, then you work with sums of values, not their products. It helps you avoid getting vanishingly small products when some of the probabilities are small, and it also lets you compute the average with an arithmetic mean rather than a geometric one. This is by no means a rigorous explanation of why the log is a good idea here, just some of the considerations behind my decision to go with it. And it worked well, so I did not even try anything else.</p>
<h1>Main parts of the solution:</h1>
<p>• Pre-processing
• Predicting 3 (start, end) candidates and assigning scores to them
• Scoring them with external model and adding one more score to each candidate
• Blending scores for candidates from different models
• Selecting the best candidate
• Post-processing</p>
<h1>Pre-processing and post-processing.</h1>
<p>“Magic” part. I came up with an algorithm similar to those already described in great detail in other solutions, so I will explain it only briefly.</p>
<p>Pre-processing is the procedure that reduces the amount of ‘noise’ in the initial data, and the goal of post-processing is to bring it back. Both are based on counting the extra spaces in the part of a tweet before the selected text. An extra space is any space which is present in a tweet but not in ‘ ’.join(tweet.split()).</p>
<p>So, in pre-processing I shift the indices of the selected text to the right by the number of extra spaces, and in post-processing to the left. The nuance here is that the length of the selected text should be calculated with the single spaces around it. So in post-processing the target end_idx is usually start_idx + len(selected_text) + 2, but if selected_text ends with a dot, comma, question mark etc., only the space before the text should be taken into account and end_idx is start_idx + len(selected_text) + 1.</p>
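<p>To make the index arithmetic concrete, here is a minimal sketch of the extra-space counting and shifting described above (my own illustration, not the author's code; the punctuation-dependent end-index rule is left out):</p>
```python
def extra_spaces_before(text: str, char_idx: int) -> int:
    """Count spaces in text[:char_idx] that ' '.join(text.split()) would drop,
    i.e. leading spaces plus the 2nd, 3rd, ... space of every run of spaces."""
    n_extra, prev_is_space = 0, True  # treat position -1 as a space so leading spaces count
    for ch in text[:char_idx]:
        if ch == " ":
            if prev_is_space:
                n_extra += 1
            prev_is_space = True
        else:
            prev_is_space = False
    return n_extra


def preprocess_start(text: str, start_idx: int) -> int:
    # pre-processing: shift the labelled start index to the right by the number of extra spaces
    return start_idx + extra_spaces_before(text, start_idx)


def postprocess_start(text: str, start_idx: int) -> int:
    # post-processing: shift the predicted start index back to the left by the same count
    return start_idx - extra_spaces_before(text, start_idx)
```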
<h1>Predicting 3 (start, end) candidates</h1>
<h3><em>Architecture</em></h3>
<p>The model in this case is a transformer. I used BERT, RoBERTa and ELECTRA.
The input for the model is the following:
BERT or ELECTRA: <code>[CLS] [POSITIVE] tweet [SEP]</code>
RoBERTa: <code><s> [POSITIVE] tweet </s></code>
‘[POSITIVE]’ can also be ‘[NEUTRAL]’ and ‘[NEGATIVE]’, these are added sentiment tokens. </p>
<p>Embeddings for them are initialized with the embeddings of corresponding words ‘positive’, ‘neutral’ and ‘negative’. At the early stage I also tried to put [SEP] between sentiment token and tweet, but it worked slightly worse. Did not experiment with this setup later.</p>
<p>As target each model gets indices of start and end tokens of selected text.
The model has four heads:
1) QA dense head (just a linear layer without any dropout) for predicting start and end tokens. Takes token representation as the concatenation of the corresponding hidden states from the last two layers of the transformer. Tried here to take weighted sum of hidden states from all layers with learnable weights, but it performed a bit worse.</p>
<p>Loss is computed with KL divergence to add label smoothing: the true target token is given probability 0.9 and its two neighbors (left and right) each take 0.05. If the true target token is at the beginning of the sentence and we are calculating the loss for start logits, the true token still gets 0.9, but the two following tokens take 0.06 and 0.04. The analogous thing is done if the true end token is the last one: its probability is 0.9, but the two previous tokens get 0.06 and 0.04.</p>
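<p>For illustration, a small sketch of how such a smoothed target distribution and KL loss could be written (assuming a PyTorch setup; the function names are mine, not the author's):</p>
```python
import torch
import torch.nn.functional as F

def smoothed_target(true_idx: int, seq_len: int) -> torch.Tensor:
    """0.9 on the true token and 0.05 on each neighbor; at sequence boundaries the
    missing neighbor's mass is moved to the next two tokens as 0.06 / 0.04."""
    target = torch.zeros(seq_len)
    target[true_idx] = 0.9
    if true_idx == 0:                        # no left neighbor
        target[1], target[2] = 0.06, 0.04
    elif true_idx == seq_len - 1:            # no right neighbor
        target[true_idx - 1], target[true_idx - 2] = 0.06, 0.04
    else:
        target[true_idx - 1] = target[true_idx + 1] = 0.05
    return target

def kl_span_loss(logits: torch.Tensor, true_idx: int) -> torch.Tensor:
    """KL divergence between the smoothed target and the predicted distribution
    for one sequence of start (or end) logits."""
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, smoothed_target(true_idx, logits.size(-1)), reduction="sum")
```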
<p>2) Linear layer to predict a binary target for each token: whether it should be in the selected text or not. Takes hidden states from the last layer. Experimented with other layers a lot, but it did not improve the performance. The loss is binary cross-entropy.</p>
<p>3) Linear layer to predict a sentiment of each token. Also uses only the last layer of a transformer. Predicts 3 classes – neutral, positive and negative. Tokens from selected text are labeled as having the same sentiment as the tweet, while all other tokens are assigned neutral class. The loss here is the usual cross-entropy for each token separately.</p>
<p>4) Two linear layers with ReLU in between to predict the sentiment of the whole tweet. Concatenates mean and max pooling over all tokens in a tweet skipping cls and sentiment tokens. Then concatenates such representations from the last two layers of a transformer and passes it through the multi-sample dropout. Also utilizes the momentum exchange (arxiv 2002.11102) before calculating the loss with cross-entropy.</p>
<h3><em>Training phase</em></h3>
<p>During training, the total loss is calculated as the weighted sum of losses from all four heads. Training is performed on 8 folds with the AdamW optimizer, using SWA over a get_cosine_with_hard_restarts_schedule_with_warmup scheduler for 10 epochs. SWA snapshots were taken at the end of each epoch, even though these steps did not coincide with the steps of minimal learning rate for the combination of parameters I used (num_warmup_steps, num_cycles). For some reason (maybe it increases diversity between snapshots?) this worked better than taking snapshots at the end of each learning-rate cycle. </p>
<p>I tried to implement self-distillation from arXiv 2002.10345, which looks very interesting and promises to increase training stability, but it only made the performance significantly worse. Maybe I did something wrong in the implementation.</p>
<h3><em>Inference phase</em></h3>
<p>1) At the inference time, the first head is used to create a set of (start, end) candidates. First of all, each pair of (start, end) indices where end >= start is assigned a logit as a sum of individual start and end logits. All cases where end < start are given -999 logits. Then softmax is applied across all pairs to obtain probabilities for candidates and top 3 of them are selected to be used for the further processing. Tried other numbers of candidates, but 3 worked best. Let’s call the probability of a candidate from this head ‘<em>qa_prob</em>’.</p>
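<p>A rough sketch of this candidate construction (my own rewrite of the description, not the author's code):</p>
```python
import numpy as np

def top_k_candidates(start_logits: np.ndarray, end_logits: np.ndarray, k: int = 3):
    """Score every (start, end) pair with end >= start, softmax over all pairs,
    and return the k most probable candidates as (start, end, qa_prob)."""
    seq_len = len(start_logits)
    pair_logits = start_logits[:, None] + end_logits[None, :]  # sum of individual logits
    pair_logits[np.tril_indices(seq_len, k=-1)] = -999.0       # forbid end < start
    flat = pair_logits.ravel()
    probs = np.exp(flat - flat.max())
    probs /= probs.sum()                                       # softmax over all pairs
    best = np.argsort(probs)[::-1][:k]
    return [(int(i) // seq_len, int(i) % seq_len, float(probs[i])) for i in best]
```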
<p>2) The output of the second head is the set of logits: one for each token. To obtain a score for each of the selected (start, end) candidates I took the sigmoid from the tokens and calculated the average log of the resultant token probabilities across candidate tokens. Let’s call the output number as ‘<em>score_per_token</em>’.</p>
<p>3) The output of the third head is used in a very similar way to the previous. The only difference is that instead of sigmoid the softmax is taken over each token logits (there are 3 of them here – by the number of sentiments) and the proba corresponding to the sentiment of the tweet is selected. Then the same averaging operation as for previous head is applied to obtain a score for candidates. Let’s call it ‘<em>sentiment_per_token</em>’.
So at the end of this stage, at inference time, we have 3 (start, end) candidates with 3 scores assigned to each of them.</p>
<h1>External scorer</h1>
<h3><em>Architecture</em></h3>
<p>Used ELECTRA with the following input:
<code>[CLS] ([POSITIVE]|[NEUTRAL]|[NEGATIVE]) tweet [SEP] selected_text_candidate [SEP]</code></p>
<p>Single head (linear->tanh->dropout->linear) on top of the transformer is fed with the concatenation of the cls token hidden states from the last two layers to predict if the current candidate for selected text is correct or not. Loss is computed with cross-entropy after application of momentum exchange.</p>
<p>Tried to add a head for predicting a jaccard for a candidate along with or instead of classification head, but it made results worse.</p>
<h3><em>Training phase</em></h3>
<p>The dataset for training is built from all tweets, each with the three candidates from the previous model; a candidate with the true selected_text is also added if it is not already present among the candidates. Trained it for 3 epochs with AdamW and SWA.</p>
<h3><em>Inference phase</em></h3>
<p>3 candidates for each tweet are scored with this model. It outputs two logits which are softmaxed, and then the log of the class-1 probability is taken as the score for the candidate. Will call it ‘<em>external_score</em>’ in the following.
So after this step we have 3 candidates and each of them has 4 scores.</p>
<h1>Ensembling different transformers</h1>
<p>BERT, RoBERTa and ELECTRA are actually ensembles of 8 (by the number of folds) models for which usual logits averaging is implemented. For BERT I used 4 bert-base-cased and 4 bert-large-cased models. For RoBERTa – 5 roberta-base-squad2 and 3 roberta-large. For ELECTRA – 6 electra-base-discriminator and 2 electra-large-discriminator.
External scorer is the combination of 4 electra-base-discriminator.</p>
<p>Each of three models – BERT, RoBERTa and ELECTRA – outputs 3 candidates for a given tweet. If there is an intersection between these sets of candidates, then only this intersection is considered. If intersection is empty then the union of BERT and ELECTRA candidates worked best.</p>
<p>The final score for each candidate is the weighted sum of <em>qa_prob</em>, <em>score_per_token</em>, <em>sentiment_per_token</em> and <em>external_score</em> inside the model type (BERT, RoBERTa or ELECTRA), and then the weighted sum of these sums (here the models themselves are weighted). The final prediction is the candidate with the largest score, which then goes through post-processing. Also, in cases where two candidates had very close scores, I tried predicting their concatenation, but it did not really bring much.</p>
<p>The solution appeared to be quite sensitive to the weighting coefficients, which is a disadvantage, because they cannot be reliably selected in cross-validation.</p>
<p>The eventual CV score for a wide range of weights was around 0.732. Individual performance was best for ELECTRA, and RoBERTa scored better than BERT.</p>
<p>Thanks for reading!</p> |
Tweet Sentiment Extraction | 5th place solution & magic | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, All
First of all, congrats winners! We were a little bit behind you guys. We enjoyed this competition very much. Due to the magic, this competition was a little bit different from standard ML competitions. It made us feel like we were solving some kind of puzzle, although that was also fun for us.</p>
<h1>The Magic</h1>
<p>I guess that this is just a bug introduced when they created this task. Here is a representative example.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F436401%2F1bcbc6606aa7a3e112698c5ddfac12ad%2FScreen%20Shot%202020-06-16%20at%205.37.10%20PM.png?generation=1592354249239961&alt=media" alt=""></p>
<p>The given original annotation is “onna”, but that is clearly too weird. The true annotation should be “miss” (this is a negative sentence). We think that the host applied a slice obtained on the normalized text (without consecutive spaces) to the original text, which is full of extra spaces, emojis, or emoticons. Thus, this competition pipeline should be as follows.</p>
<ul>
<li>Recover true annotation from the buggy annotation (pre-processing).</li>
<li>Train model with true annotation.</li>
<li>Predict the right annotation.</li>
<li>Project back the right annotation to the buggy annotation (post-processing).</li>
</ul>
<p>We call this pre-processing and post-processing the magic. After we found it, our score jumped from 0.713 to 0.721. Maybe the 4 steps can also be done with an end-to-end model, as some people claimed that they did not use any post-processing. From 0.721 to 0.731, we improved individual models, ensembled models, and refined the pre- and post-processing.</p>
<h1>Model</h1>
<ul>
<li>We use RoBERTa and BERTweet.</li>
<li>Ensembled all 5 members’ models at the char level.</li>
<li>We do not do special things in our model training.</li>
<li>We only use train.csv. (no sentiment140 and complete-tweet-sentiment-extraction-data)</li>
</ul>
<h1>Post-process improvement</h1>
<p>Assuming the (token-level) model is perfect, we maximize the Jaccard score achievable by the pre- and post-processing alone. Here is an example.
<a href="https://www.kaggle.com/tkm2261/pre-postprosessing-guc">https://www.kaggle.com/tkm2261/pre-postprosessing-guc</a></p>
<p><code>
&gt;&gt;&gt;&gt; FOLD Jaccard all = 0.9798682269792468
&gt;&gt;&gt;&gt; FOLD Jaccard neutral = 0.9961648726550028
&gt;&gt;&gt;&gt; FOLD Jaccard positive = 0.969571077575057
&gt;&gt;&gt;&gt; FOLD Jaccard negative = 0.96793968688976
</code></p>
<p>Under the perfect-model assumption, this result can be interpreted as: we could achieve 0.9798682269792468 if the model were perfect. Then we apply this post-processing to our model predictions. This pipeline worked pretty well. Any member could test their own post-processing idea and share it with the team if it improved the score.</p>
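<p>A minimal sketch of how such a "perfect model" check can be run (my own illustration; <code>preprocess</code> and <code>postprocess</code> stand in for the team's functions, which are in the linked notebook):</p>
```python
def jaccard(str1: str, str2: str) -> float:
    """Word-level Jaccard similarity, as in the competition metric."""
    a, b = set(str1.lower().split()), set(str2.lower().split())
    if not a and not b:
        return 0.5
    c = a & b
    return len(c) / (len(a) + len(b) - len(c))

def oracle_score(df, preprocess, postprocess) -> float:
    """Upper bound: recover the 'true' annotation, project it back to the noisy
    form, and measure Jaccard against the original selected_text."""
    scores = []
    for text, selected in zip(df["text"], df["selected_text"]):
        clean_target = preprocess(text, selected)        # recovered true annotation
        reconstructed = postprocess(text, clean_target)  # projected back to the buggy form
        scores.append(jaccard(reconstructed, selected))
    return sum(scores) / len(scores)
```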
<p>In summary, to our knowledge, this competition was one of reproducing embedded human errors. When I found the magic, I was also a little disappointed. But maybe noticing such bugs in data should be one skill of a data scientist. We should not just apply models but dive into the data carefully.</p>
<p>If you have any questions, plz feel free to post it in this thread.</p>
<p>Thank you.</p> |
Tweet Sentiment Extraction | 60th place (silver medal) solution (+ GitHub) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey everyone, I am very excited: this is my first competition silver medal. Like many others I have been here from the very beginning, I am very exhausted, and after the Jigsaw competition I will take a few months away from NLP competitions 😄 .
I have made almost 300 experiments (here is <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/tree/master/Model%20backlog">the list</a>) and there is a bunch of extra stuff in this <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction">Git repository</a>. Again, I would like to encourage others to do something similar: it helps other people learn, and it also helps me a lot when solving similar problems in the future.</p>
<p>I will talk about my best model that achieved 0.720 at the private leaderboard but wasn't the one chosen 😓.</p>
<h2>Quick summary</h2>
<ul>
<li>Model: 5-Fold <code>RoBERTa base</code>. <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Model%20backlog/Train/260-Tweet-Train-5Fold-roBERTa%20mask%20and%20span%20OneCycle2.ipynb">training notebook here</a></li>
<li>Tasks: Predicted start/end indexes and mask span <code>(details below)</code></li>
<li>Dataset: Removed about 30 noisy samples, and samples with Jaccard score between <code>text</code> and <code>selected_text</code> == 0, found that they were very noisy. <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Datasets/tweet-dataset-5fold-roberta-64-clean.ipynb">dataset creation here</a></li>
<li>Post-process: Removed words that were not present in <code>text</code> field. <code>(details below)</code> <a href="https://github.com/dimitreOliveira/Tweet-Sentiment-Extraction/blob/master/Model%20backlog/Inference/260-tweet-inference-5fold-roberta-mask-and-span-on.ipynb">inference notebook here</a></li>
<li>Framework: <code>Tensorflow</code> only GPUs</li>
</ul>
This is a metrics table that I used to compare my models (this one is from the 0.720 model).
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1182060%2Ffebe1288f70dc51136a094407ad6bfd3%2FScreenshot%20from%202020-06-16%2021-37-24.png?generation=1592354294790540&alt=media" alt=""></p>
<h2>Detailed summary</h2>
<h3>Model & training</h3>
<ul>
<li>5-Fold <code>RoBERTa base</code></li>
<li>Public leaderboard: <code>0.715</code></li>
<li>Private leaderboard: <code>0.720</code></li>
<li>Sequence length: <code>64</code></li>
<li>Batch size: <code>32</code></li>
<li>Epochs: <code>2</code></li>
<li>Learning rate: <code>1e-4</code></li>
<li>Training schedule: One cycle (from 0 to 1e-4 then to 1e-6)</li>
<li>Target: Start/end indexes and masked span (token level)</li>
<li>Used the pooled <code>11th layer</code></li>
<li>Dense layer without <code>bias</code></li>
<li>Label smoothing: <code>0.2</code> (for all tasks)</li>
<li>Losses: <code>CategoricalCrossentropy</code> for start/end and <code>BinaryCrossentropy</code> for mask (used logits)</li>
</ul>
<p>I got pretty good results using a one-cycle schedule for training; the first time I tried one cycle it got me 0.718 on the public leaderboard. My model had 3 outputs, one for each span index (start and end) and an extra output for the masked span. This means that if we had a sequence of size 6, the start span is index 2 and the end span is index 4, the masked span would be <code>0 0 1 1 1 0</code>, makes sense? I thought this would give the model a more general feeling about the target.</p>
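<p>A quick sketch of how those three targets can be built for one example (illustrative only; the variable names are mine):</p>
```python
import numpy as np

def build_targets(seq_len: int, start_idx: int, end_idx: int):
    """One-hot start/end targets plus a binary span mask; e.g. for seq_len=6,
    start_idx=2, end_idx=4 the mask target is [0, 0, 1, 1, 1, 0]."""
    start_target = np.zeros(seq_len, dtype=np.float32)
    end_target = np.zeros(seq_len, dtype=np.float32)
    mask_target = np.zeros(seq_len, dtype=np.float32)
    start_target[start_idx] = 1.0
    end_target[end_idx] = 1.0
    mask_target[start_idx:end_idx + 1] = 1.0
    return start_target, end_target, mask_target
```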
<h3>Dataset</h3>
<ul>
<li>Nothing fancy here: dropped <code>null</code> samples, removed about 30 samples that I thought were very noisy, and, after some study, found that samples with a very low Jaccard score between <code>text</code> and <code>selected_text</code> were noisier, so I removed those too.</li>
<li>I would like to say here that at the beginning I made a lot of mistakes with the tokenizers, so for every dataset that I created I have added some tests, check it out!</li>
<li>5-Fold split using <code>sklearn StratifiedKFold</code> and <code>sentiment</code> as the target, like many others.</li>
<li>To save time for each dataset I was creating a bunch of outputs.</li>
<li>I have used <code>ByteLevelBPETokenizer</code> as the tokenizer for RoBERTa</li>
<li>Lowercased text</li>
</ul>
<h3>Inference</h3>
<ul>
<li>Summed all 5 model's logits and then used <code>argmax</code> to get start/end indexes.</li>
<li>The post-process was very trivial, I just removed words that were not present in the <code>text</code> field, this helped to remove word-pieces that the tokenizer created and got predicted.</li>
<li>Empty predictions were filled with the whole <code>text</code> field.</li>
</ul>
<p><strong>This is how my model looked like:</strong></p>
```
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')

base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name='base_model')
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})

h11 = hidden_states[-2]  # second-to-last hidden states (the "11th layer" mentioned above)

logits = layers.Dense(3, use_bias=False, name='qa_outputs')(h11)

start_logits, end_logits, mask_logits = tf.split(logits, 3, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1, name='y_start')
end_logits = tf.squeeze(end_logits, axis=-1, name='y_end')
mask_logits = tf.squeeze(mask_logits, axis=-1, name='y_mask')

model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits, mask_logits])
```
<p>Thanks <a href="/kurianbenoy">@kurianbenoy</a> , good luck with the competitions!</p>
<p>Hey <a href="https://www.kaggle.com/diansheng" target="_blank">@diansheng</a> , It gave a small boost when used as an auxiliary target, was not something too relevant but helped.</p> |
Tweet Sentiment Extraction | 69th solution(silver medal) - Noise modeling | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is my first competition and I learned a lot from many amazing Kagglers' discussions and kernels.
Without them, I could not have placed this high in my first competition. So many thanks to the people who shared their kernels, ideas and opinions. This time, I could not share as much as I got from them. I will definitely try to become a Kaggler just like them and contribute to the community.</p>
<p>It turns out my private score benefited from the selection of a high-performance starter kernel. Huge appreciation to sazuma, but at the same time I am a bit disappointed, because most of my private-dataset performance didn't come from my modeling trials but from his work. However, I think my modeling approach deserves to be shared, and I believe there was a chance to improve beyond the current score if it had been properly re-implemented with some tweaks. I couldn't seize that chance because of some mistakes found in my implementation and a lack of remaining time. </p>
<h1>1. Text Extraction Model</h1>
<h3>1. Roberta based <code>start_idx</code>, <code>end_idx</code> extraction 10-fold model + Pseudo Label</h3>
<ul>
<li>Adopted from sazuma's starter kernel <a href="https://www.kaggle.com/shoheiazuma/tweet-sentiment-roberta-pytorch">Tweet Sentiment RoBERTa PyTorch</a></li>
<li>The public score of the 10-fold model is 0.714, boosted to 0.716 by pseudo labels using the <a href="https://www.kaggle.com/kazanova/sentiment140">Sentiment140 dataset</a> - it turns out the 10-fold model alone got 0.717 and the pseudo labels degraded it to 0.716.
The number of pseudo labels is 200000, which is about 9-10 times the size of one fold's training set. The submission with 1500000 pseudo labels keeps the same Jaccard score on the private dataset. It is likely that pseudo labeling has no advantage for this competition.</li>
</ul>
<h1>2. Noise Model</h1>
<p>As discussed in <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/154415">The magic</a>, noise is my next modeling target for this competition. Some random weird noise such as "onna" (gonna) and "e nice" (nice) appears in the dataset.
(It has now been revealed that it is not random; <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159254">Some call it magic</a> explains how it was generated. This is not confirmed by the competition host, but it seems reasonable, and the lesson learned is the importance of deep-diving into the data.)</p>
<p>To make my model output the noised <code>selected_text</code>s, post processing is applied as following procedure:</p>
<ol>
<li>Train the model with cleansed <code>selected_text</code> labels</li>
<li>Predict for the test samples</li>
<li>"<em>Selectively"</em> inject the noise to the predictions</li>
</ol>
<p>For 1, we need cleansed text. The cleansing is done with simple preprocessing: completing the partial words of the noisy <code>selected_text</code> using the space-separated words of <code>text</code> (sketched in code below). Also, add three more columns: <code>shifted</code>, <code>front_shifts</code>, <code>back_shifts</code>.
For example,
- "onna" cleansed to "gonna"
+1 char-level shift at the front -> <code>front_shifts</code>=1, <code>back_shifts</code>=0, <code>shifted</code>=1
- "o sad" cleansed to "soo sad."
+2 char-level shifts at the front and +1 char-level shift at the back -> <code>front_shifts</code>=2, <code>back_shifts</code>=1, <code>shifted</code>=1
- "happy" no cleansing needed, no shifts needed -> <code>shifted</code>=0</p>
<p>For 3, I designed two RoBERTa-based models.</p>
<h2>a. Noise label row classifier</h2>
<ul>
<li>Decides which rows to inject noise into</li>
<li>Trained with <code>text</code> as input, <code>shifted</code> as label</li>
<li>10-fold CV gives 88% accuracy</li>
</ul>
<h2>b. Noise resurrector</h2>
<ul>
<li>Predicts how many character-level shifts occurred to make the cleansed words</li>
<li>Trained with <code>text</code> and <code>selected_text</code> as input, <code>front_shifts</code> and <code>back_shifts</code> as labels</li>
<li>10-fold CV gives 0.5 full-matching accuracy; Jaccard improves by 0.2 after noise resurrection compared to before (the cleansed prediction)</li>
<li>L1 loss significantly boosted its performance (as mentioned below, maybe this is because of some outliers due to my buggy preprocessing code)</li>
</ul>
<p>Ideally, if the model trained with cleansed data predicts well, I could boost my total Jaccard by +0.02, since about 10% of the train data has <code>shifted</code>=1. However, it didn't go that far: only +0.001.
Possible reasons are:
- The model Jaccard is around 0.717, so the noise is often injected into a prediction that was already wrong in the first place.
- My preprocessing code is naive. In the example "o sad", the answer could be "sad." but my preprocessing code makes it "soo sad." because it just fills in the incomplete words.
- Lack of hyperparameter tuning. I wish I had had more time. </p>
<h1>3. 10-folded ensemble</h1>
<ul>
<li>Majority vote ensemble works better for me than logits averaging.</li>
<li>Ensembled the pseudo-labeled model's output set from 1, the output set of a model trained on cleansed <code>selected_text</code>, and two post-processed output sets, one for each of them.</li>
<li>Public LB is 0.712 but private LB is 0.718 (lesson learned: never trust the public LB 100%).
<em>It was kind of a risky bet to choose this ensemble model, but I made it, haha. If I had chosen another, my score would be ~0.716</em> 🤣</li>
</ul>
<h1>4. Additional techniques</h1>
<ul>
<li>Reinitialization of the pretrained model's final layers.
Described here: <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/157806">How to reinitialize specific layer weights?</a></li>
<li>Additional tokens added to the RoBERTa-base pretrained model.
Tested on the public LB, it gives +0.001.</li>
</ul>
<p><strong>Hope you enjoyed the competition and thank you for reading. Any feedbacks, questions will be welcomed!</strong></p> |
Tweet Sentiment Extraction | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>If sentiment isn't neutral, all consecutive punctuation will be split with probability of 0.5. For example:
'...' --> '. . .'</p>
<p>'???' --> '? ? ?'</p>
<p>'!!!' --> '! ! !'</p>
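<p>A possible implementation of this augmentation (my own sketch, not the team's code; here the 0.5 probability is applied per tweet):</p>
```python
import random
import re

def split_punctuation(text: str, sentiment: str, p: float = 0.5) -> str:
    """With probability p, and only for non-neutral tweets, split every run of
    repeated punctuation into single characters: '...' -> '. . .'."""
    if sentiment == "neutral" or random.random() >= p:
        return text
    return re.sub(r"([.!?])\1+", lambda m: " ".join(m.group(0)), text)

# split_punctuation("so cool!!!", "positive", p=1.0) -> "so cool! ! !"
```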
<p>Actually we found the magic first, then recalled this, which makes us feel more confident. </p>
<p>Thanks. Spent an hour or so beforehand. </p>
<p>Thanks </p>
<p><a href="https://www.kaggle.com/naivelamb/roberta-base-ensemble?scriptVersionId=36485398">here</a> is the submission notebook. </p>
<p>Version 8 = 0.730 (4 RoBERTa-base)
Version 11 = 0.731 (3 RoBERTa-base + 1 RoBERTa-large)</p>
<p>First of all, we want to thank kaggle for hosting the competition. Thanks to my teammates <a href="/murphy89">@murphy89</a> <a href="/wuyhbb">@wuyhbb</a> for their hard work. Thanks to <a href="/abhishek">@abhishek</a> for providing a very solid baseline. This is my first NLP gold medal and I am extremely happy! </p>
<h1>TLDR</h1>
<p>Use model to predict the <strong>ground truth</strong> start/end indices, use post processing to capture <strong>noise</strong>. </p>
<h1>Models</h1>
<p>Our models are RoBERTa-base with customized headers. We have two model structures:
1. Model-1
- Concat([last 2 hidden_layers from BERT]) -> Conv1D -> Linear
- End position depends on start (taken from <a href="https://www.kaggle.com/c/tensorflow2-question-answering/discussion/127545">here</a>), which looks like,
```python
# x_start (head) and x_end (tail) are the outputs after Conv1D
logit_start = linear(x_start)
logit_end = linear(torch.cat([x_start, x_end], dim=1))
```
2. Model-2
- Concat([last 3 hidden_layers from BERT]) -> Conv1D -> Linear
- Auxiliary tasks for whether the prediction is whole text (classification) and whether the token is in selected text (segmentation). </p>
<h1>Preprocessing & Tokenization</h1>
<p>We used two methods to preprocess the text and selected_text:
* Method-1</p>
<p>1) Clean text & selected_text: <code>" ".join(text.split())</code>. </p>
<p>2) Split punctuations if sentiment is not "neutral": '...' ==> '. . .' (50% if sentiment != 'neutral'). </p>
<p>3) Correct wrong offsets for "�" (this is not needed if you are using tokenizers 0.7.0). </p>
<p>4) A token is considered as a target if <code>len(selected_text) &gt; 4</code> and <code>char_target_pct of the token &gt;= 0.5</code>. </p>
<p>5) Max_len = 192.</p>
<ul>
<li>Method-2</li>
</ul>
<p>1) Clean label based on postprocessing method (will discuss later). </p>
<p>2) Use raw text & selected_text, tokenize at word level, use "Ġ" for all spaces. </p>
<p>3) Dynamics padding. </p>
<p>We used two patterns to build the training samples for both methods: [sentiment][text] and
[sentiment][raw sentiment][text], where raw sentiment comes from the original full dataset. We didn't convert text to lower case during preprocessing, instead, we use it as an augmentation. Method-1 is used for Model-1, while Method-2 is used for Model-2. </p>
<h1>Training related</h1>
<ul>
<li>Batch size: 32</li>
<li>Optimizer AdamW, weight decay 0.01</li>
<li>Scheduler: cosine scheduler with warmup</li>
<li>Loss: CrossEntropy with label smoothing</li>
</ul>
<p>We finetuned Model-1 (head only) for another 3 epochs using a linear scheduler with warmup (lr=1e-5). Model-2 is trained with the Fast Gradient Method (FGM). Our best single model on the public LB: [sentiment][raw sentiment][text] + Method-1 + Model-1, trained in a 10-fold setup (CV 0.7177, LB 0.719).</p>
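<p>FGM is not described further in the post; for reference, here is the common way it is implemented in this kind of setup (a generic sketch, not necessarily the team's exact code):</p>
```python
import torch

class FGM:
    """Fast Gradient Method: add an eps-sized perturbation along the gradient
    direction to the word embeddings, do a second forward/backward pass, restore."""
    def __init__(self, model, eps: float = 1.0):
        self.model, self.eps, self.backup = model, eps, {}

    def attack(self, emb_name: str = "word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self, emb_name: str = "word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# per batch: loss.backward(); fgm.attack(); adv_loss.backward(); fgm.restore(); optimizer.step()
```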
<h1>Postprocessing</h1>
<p>There are two parts to the postprocessing: setting "neutral" predictions to the full text (all CV scores are after this treatment) and processing the noise. The first part is straightforward, so we will focus on the second part here. Many people noticed during the competition that there are lots of mysterious selected_texts. For example:
<code>
[text] : [ hey mia! totally adore your music. when will your cd be out?]
[selected_text] : [y adore]
</code>
Take Method-1 as an example: it throws [y] away, leaving [ adore] in the target span. Therefore, the trained model is good at predicting [ adore] as the result (jaccard(decode_target, pred) = 1), but due to the noise [y], the final jaccard is only 0.5. In fact, for a model with a validation jaccard score around 0.717, the jaccard score between the prediction and the decoded target is around 0.724. Hence, if we can somehow add back the noise, we can boost model performance. For the above example, one may naturally think the original label is [ adore ], so the given label can be achieved by shifting the span to the left by one position. Later, we realized where these shifts come from, if we compare the text and the clean text:</p>
<p><code>
[text] : [ hey mia! totally adore your music. when will your cd be out?]
[clean text] : [hey mia! totally adore your music. when will your cd be out?]
</code>
You will realize there are 3 extra spaces in the text: one leading and two in between. Ignoring the leading space and the one after the selected_text span, there is one extra space left, which is exactly the number of positions to shift.
<code>
[text_no_leading]: [hey mia! totally adore your music. when will your cd be out?]
[selected text] : [y adore]
[clean text] : [hey mia! totally adore your music. when will your cd be out?]
[selected text] : [ adore ]
</code>
We think the stroy behind is that the label provider only provided the indices of targets to Kaggle ( <a href="/philculliton">@philculliton</a> mentioned it <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/140847#802761">here</a>). There might be some miss communication between Kaggle and the service provider regarding which texts they are using (Kaggle uses raw text, labeler uses clean text). </p>
<p>Based on this finding, we designed rules to post-process model predictions and clean labels. As a result, both models could predict the <strong>ground truth</strong> start index accurately, while the starting noise is handled by post-processing. The post-processing rules are:</p>
```python
def _post_shift_new(text, pred):
    """
    text: raw text
    pred: prediction based on raw text, no leading space
    """
    clean_text = " ".join(text.split())
    start_clean = clean_text.find(" ".join(pred.split()))

    start = text.find(pred)
    end = start + len(pred)
    extra_space = start - start_clean
    if start > extra_space and extra_space > 0:
        if extra_space == 1:
            if text[start-1] in [',', '.', '?', '!'] and text[start-2] != ' ':
                start -= 1
        elif extra_space == 2:
            start -= extra_space
            if text[end-1] in [',', '.', '!', '?', '*']:
                end -= 1
        else:
            end -= (extra_space - 2)
            start -= extra_space
        pred = text[start:end]
    # handle single quotation mark
    if pred.count("'") == 1:
        if pred[0] == "'":
            if text.find(pred) + len(pred) < len(text) and text[text.find(pred) + len(pred)] == "'":
                pred += "'"
        else:
            if text.find(pred) - 1 >= 0 and text[text.find(pred) - 1] == "'":
                pred = "'" + pred
    return pred
```
<p>We applied post-processing to all the non-neutral samples; across all the modified samples it has a winning rate (jac_post > jac_no_post) of ~60% on the validation set. Our best single model achieves CV 0.7264, LB 0.728 after post-processing, so processing the noise brings a ~0.009 boost in both CV and LB. The winning rate and the aligned boost in both CV & LB made us feel comfortable applying it. </p>
<h1>Ensemble</h1>
<p>To ensemble results from different models, we convert the token probability to char-level probability, ensemble the probabilities from different models and finally find the start/end index. By ensembling 4 models (2 patterns X 2 models) and applying post-processing, we achieved CV 0.7299, LB 0.723, which scores 0.730 on private leaderboard. </p>
<h1>Other stuff</h1>
<ul>
<li>Actually we didn't select our best solution, which is an ensemble of 3 RoBERTa-base and a RoBERTa-large, it has a CV 0.730669, LB 0.721 and private LB 0.731. </li>
<li>I checked all our submissions with private leaderboard score, CV aligns perfectly with private leaderboard. Never forget the golden rule: <strong>Trust your CV</strong> </li>
</ul>
<p>Assign the probability of a token to all the characters of the token, average between different models, then search span on the char-level array. </p>
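<p>A rough sketch of that token-to-character conversion (my own illustration; <code>offsets</code> is assumed to hold each token's (start, end) character offsets in the raw text):</p>
```python
import numpy as np

def token_to_char_probs(token_probs, offsets, text_len):
    """Spread each token's probability onto every character it covers."""
    char_probs = np.zeros(text_len)
    for prob, (lo, hi) in zip(token_probs, offsets):
        char_probs[lo:hi] = prob
    return char_probs

def ensemble_char_probs(char_prob_arrays):
    """Average char-level probability arrays coming from models with different tokenizers."""
    return np.mean(char_prob_arrays, axis=0)
```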
<p>The public 728 is a single RoBERTa base, while the best private (selected) is an ensemble of 4 RoBERTa base with higher CV. </p>
<p>The unselected best private is an ensemble of 3 RoBERTa base and 1 RoBERTa large model (exclude the best public model). </p> |
Tweet Sentiment Extraction | A Noob's Story : Silver in very First Competition (My Magical Experience) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>When this competition first started on March 24th , I had never worked on NLP problem before , not even the basic ones using count-vectorizer and tf-idf . This competition was unique and it pulled me towards itself , at that time I was just getting started with Kaggle and my plan was to just write kernels and never have I ever dreamed of even participating in a Kaggle competition this early in my Kaggle Journey. </p>
<p>I wrote a simple EDA notebook and dived right in, and from there my learning curve began. I got overwhelming support from the community and my notebook became the most upvoted for the competition. This gave me the confidence to start thinking of joining the competition.</p>
<p>I learned everything I currently know about NLP during the course of this competition, from lovely kernels written by Grandmasters and Masters, from the discussion forums, and from other useful resources.
One thing led to another, and after three months I have done three different NLP projects involving different kinds of transformer-based models, became a Kaggle Notebooks Master, and today, most precious of all, got a silver medal in my first competition, finishing in the top 2 percent (44th place). I am just very happy right now and this medal means a lot to me; it's like a dream coming true. I have come a long way and I hope this learning curve continues.</p>
<p>From March 24th I have worked 9 hours/day on Kaggle and machine learning in general, and I can now see that if you are willing to learn, miracles do happen. People have been claiming to find magic in this competition; well, I have found something else entirely, a magic for my life.</p>
<p>A very heartily congratulations to all those who survived the shakeups and won a medal , I don't know whether the shakeup is fair or not as a lot of people have got lucky and few unlucky, but that's how life is .</p>
<p>I would also want to thank few people without which this could not have been possible
* <a href="/abhishek">@abhishek</a> for the wonderful tutorials, this is a result of your public notebooks
* <a href="/cdeotte">@cdeotte</a> for wonderful insights and kernels
* <a href="/adityaecdrid">@adityaecdrid</a> for wonderful thread of things which can help improve score
* <a href="/cpmpml">@cpmpml</a> for posting the failure thread , it motivated me a lot , looking forward to learn more from you and congratulations</p>
<p>Along the way I found the best teammate ever, <a href="/rftexas">@rftexas</a>, who is now like a brother to me. Thanks for everything, I have learned a lot from you. And also @mehdi, who was a valuable addition to the team</p>
<p>As this noob continues to learn, I will try to keep writing new and informative kernels. I thank the community for being kind and letting noobs like me flourish. Also, I am just sharing what I feel; if you downvote, please let me know what I did wrong</p>
<p>You too buddy </p>
<p>Yes sure , I can , I will soon share it on linkedIn</p>
<p>Thank you</p>
<p>Thanks for kind words <a href="/chrisden">@chrisden</a> , perhaps we can talk on LinkedIn about the team up</p>
<p>I am glad it did thanks a lot</p>
<p>Thanks <a href="/neomatrix369">@neomatrix369</a>, I learned a lot of things during this competition and I tried a lot of things; unfortunately they didn't work, because the real task was to predict the noise rather than the text, and the public models were already good at the latter. </p>
<p>Thanks <a href="/keithamundsen">@keithamundsen</a> 😉 </p> |
Tweet Sentiment Extraction | Quick 1st place solution overview before the night | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>For 2nd level models we use the original input.
For 1st level models we use <code>' '.join(x.split())</code> </p>
<p>Thanks, and once again congratulations to your team. I found your solution really interesting as well !</p>
<h3>Update :</h3>
<ul>
<li>Training notebook for our 2nd level models : <a href="https://www.kaggle.com/theoviel/character-level-model-magic/">https://www.kaggle.com/theoviel/character-level-model-magic/</a></li>
<li>More detailed write-up : <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159477">https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159477</a></li>
</ul>
<p>This is a short thread to rapidly present a solution, we will work on a more detailed one tomorrow. </p>
<p>Huge thanks to my teammates <a href="/cl2ev1">@cl2ev1</a>, <a href="/aruchomu">@aruchomu</a> and <a href="/wochidadonggua">@wochidadonggua</a> for the great work, we definitely wouldn't have gone this far if we were not together. </p>
<p>Our whole solution can be illustrated in the following pipeline.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2062758%2F9848f1052e0108f6257fa56f1233a9d1%2Fpipe.png?generation=1592353191974814&alt=media" alt=""></p>
<p>The idea is to use transformers to extract token-level start and end probabilities. Using the offsets, we can retrieve the processed probabilities for the input text.</p>
<p>We then feed these probabilities to a character level model. </p>
<p>The tricky part is to concatenate everything correctly, such as explained <a href="https://www.kaggle.com/c/tweet-sentiment-extraction/discussion/159254">here</a> </p>
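<p>A simplified sketch of what such a 2nd-level character model can look like (my own illustration of the pipeline above, assuming PyTorch; the team's actual architectures are in the linked training notebook):</p>
```python
import torch
import torch.nn as nn

class CharLevelModel(nn.Module):
    """Takes per-character features (start/end probabilities from the transformers,
    aligned via offsets, plus a char embedding) and predicts char-level start/end logits."""
    def __init__(self, n_chars: int, n_models: int, emb_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb_dim)
        # 2 * n_models features per char: start & end probability from each 1st-level model
        self.rnn = nn.LSTM(emb_dim + 2 * n_models, hidden,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # start logit, end logit per character

    def forward(self, char_ids, probs):
        # char_ids: (batch, length) long, probs: (batch, length, 2 * n_models) float
        x = torch.cat([self.char_emb(char_ids), probs], dim=-1)
        x, _ = self.rnn(x)
        start_logits, end_logits = self.head(x).unbind(dim=-1)
        return start_logits, end_logits
```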
<p>And then... TADAM !
No post-processing. Just modeling. </p>
<p>We selected two models that scored public LB 0.734 / CV 0.736+. They each use 4 different character-level models, with a big variety of transformers. Final private scores are 0.735 and 0.736 :)</p>
<p>Thanks for reading ! </p>
<p>Thanks, means a lot :) </p>
<p>Thanks ! </p>
<p>Thanks a lot ! Congratz on the silver medal as well </p>
<p>Thanks !
The key was to correctly align characters with probabilities, otherwise it doesn't really work </p>
<p>Thanks !</p>
<p>I'm not sure I completely get your idea, but I would consider trying to translate the predicted label to the original noisy label, it makes sense.</p>
<p>Here's the training code for 2nd level models :
<a href="https://www.kaggle.com/theoviel/character-level-model-magic">https://www.kaggle.com/theoviel/character-level-model-magic</a></p>
<p>The rest is a bit less interesting and requires more cleaning :) </p>
<p>I agree, I have the feeling that 0.715 scoring models were already better than the labels at the task. The rest was just modeling the noise in the data, everybody found a different way to do so. Ours was a bit fancy, because we didn't spot the pattern initially aha
Anyways, congratz on competition GM !</p> |
Tweet Sentiment Extraction | Quick 3rd place solution (no pp) + source code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My solution on github: <a href="https://github.com/suicao/tweet-extraction/">https://github.com/suicao/tweet-extraction/</a>
Will update later when <a href="/christofhenkel">@christofhenkel</a> wakes up, his old body needs that sleep.</p>
<p>The main ingredients for our solution are:
- Character level model with GRU head (Dieter)
- Normal model with beamsearch-like decoder (mine)
- Diversity rules
- RoBERTa base + RoBERTa large + BART large.</p>
<p>We knew what the magic was but couldn't find a reliable post-processing method for it. Soon we looked into the predictions and realized that the models did a decent job of overfitting the noise anyway, and focused on that direction. In the end, each of our methods scored ~0.728 on the public LB and could've been in the gold zone.</p>
<p>For my part:
- I used fuzzywuzzy to fuzzy-match the labels in case the label was split in half.
- For modeling, I copied XLNet's decoder head for question answering to RoBERTa. Basically you predict the start index and get the <em>k</em> hidden states at the <em>top-k</em> start indices. For each of these hidden states, concat it to the token hidden states to compute conditioned end logits and predict the corresponding <em>top-k</em> end indices (see the sketch after this list).
- The best <em>k</em> is 3, for whatever reason, which resulted in 3x3 start-end pairs. I ranked them by taking the product of the two probs.
- Nothing too special regarding training pipeline. I used the fastai style freeze-unfreeze scheme since the head is quite complicated.
- Everything with RoBERTa byte-level BPE works best. BERTweet was quite a pain to work with (it also used og BERT's subword BPE for some reason), we didn't include it in the final ensemble.
- Probably for the reason above, XLNet's performance was quite subpar, a shame since it's my favorite.</p>
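<p>A hedged sketch of what such an XLNet-style conditioned decoder head looks like (my own reading of the description above, not the author's code from the repository):</p>
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BeamSearchQAHead(nn.Module):
    """End prediction is conditioned on the chosen start: for each top-k start,
    its hidden state is concatenated to every position before scoring ends."""
    def __init__(self, hidden: int, k: int = 3):
        super().__init__()
        self.k = k
        self.start_head = nn.Linear(hidden, 1)
        self.end_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, hidden_states):                                    # (batch, len, hidden)
        batch = hidden_states.size(0)
        start_logits = self.start_head(hidden_states).squeeze(-1)        # (batch, len)
        start_probs, start_idx = F.softmax(start_logits, -1).topk(self.k)
        candidates = []
        for i in range(self.k):
            h_start = hidden_states[torch.arange(batch), start_idx[:, i]]     # (batch, hidden)
            h_start = h_start.unsqueeze(1).expand_as(hidden_states)           # broadcast over positions
            end_logits = self.end_head(torch.cat([hidden_states, h_start], -1)).squeeze(-1)
            end_probs, end_idx = F.softmax(end_logits, -1).topk(self.k)
            for j in range(self.k):                                           # k x k (start, end) pairs
                score = start_probs[:, i] * end_probs[:, j]                   # rank by product of probs
                candidates.append((start_idx[:, i], end_idx[:, j], score))
        return candidates
```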
<p>Updated: added source code.</p>
<p>That was Dieter's work but afaik we just threw the data in and the model was able to pick it up.</p> |
Tweet Sentiment Extraction | Worst place writeup (15th solution) with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>You found the magic and I didn't. Maybe we should have teamed again ;)</p>
<p>When I added the CNN I got 0.002 upside I think. </p>
<p>Wow, never thought I'd finish one rank short of gold. I was 49th on the last morning of the competition and had stagnated for weeks on the LB. Fortunately, I decided to train my best models on unprocessed data the night before, and the first fold scores showed promise when I went to bed. Still, getting to successful submissions on Kaggle took some frantic work during the last day. And scoring for my best submission ended literally during the last minute of the competition. I documented the setbacks and issues I had on the last day elsewhere, so I won't repeat them here. Let's say I consider myself lucky with the end result.</p>
<p>I'll describe my solution in this post, then what I tried and that did not help much.</p>
<p>Before diving into details, I want to thank Kaggle for this challenging competition. It mixes a rather classical NLP competition with a puzzle component. The puzzle (aka magic) comes from some hidden preprocessing of text before labeling. Without this the competition would probably have been a bit boring.</p>
<p><strong>Don't overfit</strong></p>
<p>Early experiments clearly convinced me that overfitting would be the main issue here. The danger showed in the high variability of cross-validation scores when you change how folds are defined. <a href="/cdeotte">@cdeotte</a> had clearly documented this in a post here as well. One way out is to augment data (via additional external data, or via pseudo labeling, or using some data augmentation techniques). None of the ones I tried really helped (see below). Another, complementary option is to bag many runs; by the law of large numbers the variance of scores decreases. I therefore decided to settle on training models using 7 different sets of 5 folds. Folds are stratified by sentiment and by the proportion of cases where the selected text equals the full text. From that I compute 2 metrics: the average of the 7 CV scores, and the CV score when I average all predictions. In the latter case, each sample prediction is the blend of 7 out-of-fold predictions. An improvement of these two metrics translates to a LB improvement with a noise of 0.002 at most. I only submitted if there was a significant improvement in at least one of the metrics, and only kept the modification if LB was improved too.</p>
<p><strong>Efficient code</strong></p>
<p>To make this practical I optimized the code. Training roberta-base on 4/5 of the train data takes less than 2 minutes per epoch on a V100 GPU. The key was to adapt the batch data size: basically, I truncate the input length to the longest unpadded input in the batch. My inferencing script shows how to do it: <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365</a></p>
<p>Inferencing is also very efficient, it takes about 14 seconds to predict the roberta-base model on public test data using Kaggle P100 GPU.</p>
<p>The issue though is that CV and LB dropped compared to a fixed size. I got back to the same level of performance by masking all logits for padding. This made predictions insensitive to padding length (a sketch of both ideas follows below).</p>
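<p>A minimal sketch of the two ideas combined, dynamic truncation plus padding-logit masking (my own illustration; shapes and names are assumptions, not the author's code):</p>
```python
import torch

def trim_batch(input_ids, attention_mask):
    """Truncate a padded batch to the length of its longest unpadded sequence."""
    max_len = int(attention_mask.sum(dim=1).max())
    return input_ids[:, :max_len], attention_mask[:, :max_len]

def mask_pad_logits(start_logits, end_logits, attention_mask):
    """Set logits at padding positions to a very negative value, so predictions
    do not depend on how much padding a particular batch happens to have."""
    neg = torch.finfo(start_logits.dtype).min
    start_logits = start_logits.masked_fill(attention_mask == 0, neg)
    end_logits = end_logits.masked_fill(attention_mask == 0, neg)
    return start_logits, end_logits
```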
<p>A key component was the use of huggingfaces tokenizers library. I started from <a href="/abhishek">@abhishek</a> code for that.</p>
<p><strong>The model</strong></p>
<p>I ended up with a simple CNN head on top of roberta-base and roberta-large models. There is one CNN head for start logits and one CNN head for end logits. The head for start logits is a convolution over the last hidden state of a token and the last hidden state of the previous token in the input sequence. The goal is to detect the transition from a token not linked to the sentiment to a token linked to the sentiment. Similarly, end logits are convolutions over a token and its following token.</p>
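<p>A possible PyTorch sketch of such a pairwise convolutional head (my own illustration of the description, not the actual code):</p>
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseConvHead(nn.Module):
    """Start logits from a width-2 convolution over (previous token, token);
    end logits from a width-2 convolution over (token, next token)."""
    def __init__(self, hidden: int):
        super().__init__()
        self.start_conv = nn.Conv1d(hidden, 1, kernel_size=2)
        self.end_conv = nn.Conv1d(hidden, 1, kernel_size=2)

    def forward(self, hidden_states):                  # (batch, len, hidden)
        x = hidden_states.transpose(1, 2)              # Conv1d expects (batch, hidden, len)
        # pad one step on the left so position i sees tokens (i-1, i)
        start_logits = self.start_conv(F.pad(x, (1, 0))).squeeze(1)
        # pad one step on the right so position i sees tokens (i, i+1)
        end_logits = self.end_conv(F.pad(x, (0, 1))).squeeze(1)
        return start_logits, end_logits
```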
<p>I experimented with more complex heads but they led to overfitting.</p>
<p>I also tried many huggingfaces models, and in the end only kept roberta-base and roberta-large-squad2. </p>
<p><strong>Training settings</strong></p>
<p>I tried various optimizers and schedulers but ended up keeping <a href="/abhishek">@abhishek</a>'s settings, except I ran it for 5 epochs. Adding 2 epochs gave me a 0.002 LB and CV uplift. In short, the settings are: huggingfaces AdamW optimizer with a learning rate of 3e-5, and a linear schedule without warmup. I used early stopping to only keep the best checkpoint.</p>
<p>Roberta large overfits easily on this dataset. I had runs with CV score of 0.1 or below. I tried various regularization (see below) but in the end used a very simple way: if jaccard score of first epoch is less than 0.695 then restart with a different pytorch seed on the same fold. This led to a CV average of 0.7195, a blend cv of 0.726, and a public LB of 0.726, the last day evening...</p>
<p>I trained roberta-base with my 7x5 folds, but also trained it on the full data with 7 different pytorch seeds. For these I kept the checkpoints from epochs 3, 4, and 5, given that the best checkpoints when running cross-validation were always in these epochs. Averaging their weights as in SWA was on my todo list; I think it is a bit better than averaging the predictions.</p>
<p>I used smoothed cross-entropy loss. It seemed a bit better and more stable than plain cross-entropy.</p>
<p><strong>Data prep</strong></p>
<p>I should have spent more time experimenting with variants earlier. I stuck to <a href="/abhishek">@abhishek</a>'s join/split preprocessing till the last day. This looked like a great idea but it prevented models from using patterns of repeated spaces. As we all know now, this was the basis of the "magic" post-processing.</p>
<p>When I switched to raw data CV and Lb improved by 0.007...</p>
<p>The only preprocessing I have in the end is to replace '�' with a quote, and to replace tokens for repeated punctuation with repeated tokens for single punctuation. You can see it in <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">the inferencing script</a> as well. </p>
<p>For the input I followed the SQuAD logic: concatenate a question with its answer. The answer is the input text. The question is of the form "What X sentiment?" where X is the input sentiment: "positive", "negative", or "neutral". This was better than just having X as in public notebooks. I think, but I can't prove it, that using a question made models fine-tuned on SQuAD2 a bit more effective.</p>
<p><strong>Ensembling</strong></p>
<p>Predictions are ensembled very simply: logits for a given token are averaged, then I look for the best sum of start logits and end logits with the constraint that start is before end.</p>
<p>I tried using softmax before addition, result is very close. I don't know which is best therefore I stick to the one that runs faster.</p>
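<p>For reference, a small sketch of this averaging plus constrained span search (my own illustration; the actual code is in the linked scoring kernel):</p>
```python
import numpy as np

def ensemble_and_decode(start_logits_list, end_logits_list):
    """Average per-token logits across models, then pick the (start, end) pair
    with the highest summed logit under the constraint start <= end."""
    start = np.mean(start_logits_list, axis=0)
    end = np.mean(end_logits_list, axis=0)
    best_score, best_span = -np.inf, (0, 0)
    for i in range(len(start)):
        j = i + int(np.argmax(end[i:]))      # best end at or after this start
        if start[i] + end[j] > best_score:
            best_score, best_span = start[i] + end[j], (i, j)
    return best_span
```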
<p><strong>Uploading</strong></p>
<p>I trained my models locally, hence I had to upload trained weights to Kaggle. I saved model weights in half precision, i.e. saved after <code>model.half()</code>. It halves upload time. Predictions on test data were the same as when using the original weights, except for one row only. I also used public datasets for my weights so that I don't hit the 20GB overall limit. Sharing weights is not risky as long as other Kaggle users don't have your model implementation. Edit: <a href="/christofhenkel">@christofhenkel</a> pointed out that weights can reveal quite a lot about the model architecture. I'll be more careful next time.</p>
<p><strong>Results</strong></p>
<p>Last day submissions results:</p>
<p>0.721 public, 0.725 private: 7x5 folds roberta-base, </p>
<p>0.724 public, 0.724 private: 7x5 folds roberta-base + 7x3 full train roberta-base</p>
<p>0.726 public, ?? private: 7x5 folds roberta-large-squad2 (offline prediction)</p>
<p>0.723 public, 0.725 private: 4x5 folds roberta-large-squad2 </p>
<p>0.727 public, 0.726 private: 4x5 folds roberta-large-squad2 + 7x3 full train roberta-base (*)</p>
<p>I also have some late submission to see what I could get with more uploaded weights. </p>
<p>0.727 public, 0.726 private: 4x5 folds roberta-large-squad2 + 7x3 full train roberta-base + 7x5 folds roberta-base + (*)</p>
<p>0.725 public, 0.727 private: 6x5 folds roberta-large-squad2 + 7x3 full train roberta-base (*)</p>
<p>The ones with (*) are obtained using my <a href="https://www.kaggle.com/cpmpml/ensemble?scriptVersionId=36550365">public scoring kernel</a>.</p>
<p><strong>Failures</strong></p>
<p>Here is a list of what I tried that did not improve both CV and LB. Note that these were tested only on preprocessed data. Maybe some would work on raw data. Also, some of the items below did help with earlier, weaker, models.</p>
<ul>
<li>other models than roberta-base and roberta-large-squad2. This includes bert, bert twitter, electra, xlm-roberta-base, roberta-base-squad2, DeepPavlov roberta. The issue is I had to write code to blend logits at the char level, given these models don't use the same tokenizer.</li>
<li>auxiliary targets. I tried sentiment, and full. The last one is 1 when selected text equals text. I even tried to use the full logit as an attention to modulate start and end logits.</li>
<li>Pseudo labelling: on valid data, on public test data, on all test data. It improved when my LB was 0.717 or below, but didn't improve anymore with better models.</li>
<li>hard sample mining.</li>
<li>token or sentence mixup</li>
<li>token swaps</li>
<li>token masking</li>
<li>mixout. Actually this worked: it was the only way I could get roberta-large to never overfit. But restarting is 0.002 better.</li>
<li>loss clipping</li>
<li>more complex heads</li>
<li>adversarial embeddings</li>
<li>noising embeddings</li>
<li>fixing target shift in input data (opposite of magic post processing)</li>
<li>my version of span loss</li>
<li>using attention layers. Idea was to see if tokens that attended the sentiment token the most would not be the selected ones. This is a bit similar to the unsupervised learning approach described by <a href="/cdeotte">@cdeotte</a> </li>
<li>using translated text back and forth. I used the MarianMT models from Hugging Face for translation.</li>
<li>focal loss, jaccard loss, binary cross entropy</li>
</ul>
<p><strong>Takeaway</strong></p>
<p>I said it many times in the past but here it was literally true: one can improve till the very last minute. I failed a lot, but I learned a lot. And in the end enough things worked to get a result I am happy with.</p>
<p>Cherry on the cake: I miss gold here but moved back to top discussion rank :D</p>
<p><a href="/naivelamb">@naivelamb</a> Indeed, maybe we could have been in prize zone together. Congrats for your solution. </p>
<p>But when you asked I had a LB of 0.717 and I didn't feel I could contribute.</p>
<p>I have now a full writeup.</p>
<p>As I wrote, with weaker models it worked for me too. My best 0.717 public LB sub was with pseudo labeling. But when I moved higher up it was no longer the case.</p>
<blockquote>
<p>calling him "uncle"</p>
</blockquote>
<p>This is a sign of respect. I am honored and I appreciate that.</p>
<p>Pseudo labelling did work when I had weaker models. I tried it on the last day with raw data and CV wasn't as good as without pseudo labelling.</p>
<p>Thanks. I was lucky in a way, I could have not tried raw data at all.</p>
<p>Thank you.</p>
<p>Thanks. I had spent too much time to let it go ;)</p>
<p>Also, seeing that you hadn't found the magic but still had a great score was inspiring to me, as I couldn't find the magic either.</p>
Tweet Sentiment Extraction | [48th place] what worked and what did not work | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was a surprise for me too, it's the first time PL gave me a boost. I added a few more details above on how PL worked for me</p>
<p>To be fair it did not work 100% of the time, an example: </p>
<pre><code>text = " Sorry I didn`t go Jesus wasn`t feeling it. And you know I can`t see it with out my best bud. Well that movie anyways "
selected_text = "Sorry"
original_prediction = "Sorry"
fixed_prediction = "Sorry I"
</code></pre>
<p>In addition, in other cases the "shift" was correct, but the original prediction had a better Jaccard score: </p>
<pre><code>text = " well us Brits have to wait a few more days for it! I thought it was all gonna realized at once! I guess it`s worth the wait!"
selected_text = "! I guess it`s worth the wait!"
original_prediction = "worth"
fixed_prediction = "s worth"
</code></pre>
<p>but yeah, on average +0.004 on both CV and LB</p>
<blockquote>
<p>there might be at least two different labeling processes in place</p>
</blockquote>
<p>that could explain a lot indeed.</p>
<blockquote>
<p>I think my model picks it up already </p>
</blockquote>
<p>As a matter of fact, modelling this feature of the data was more effective than post-processing, judging by the top teams' solutions. This post-processing is a "poor man's" approach.</p>
<p>Solution is built on top of the amazing kernel from Abhishek: <br>
<a href="https://www.kaggle.com/abhishek/roberta-on-steroids-pytorch-tpu-training">https://www.kaggle.com/abhishek/roberta-on-steroids-pytorch-tpu-training</a></p>
<p>What worked: <br>
- for neutral: selected_text = text
- increased batch size (64)
- label smoothing
- modified CE loss to penalize distance from true span (see this kernel: <a href="https://www.kaggle.com/laevatein/tweat-the-loss-function-a-bit">https://www.kaggle.com/laevatein/tweat-the-loss-function-a-bit</a>)
- post-processing to account for offset induced by empty space, a.k.a. the magic, see below => +0.004 LB
- pseudo-labelling (surprise!) => +0.002. I produced pseudo-labelled samples from the original dataset by predicting the sentiment and selected text using only the samples in the training split for each fold iteration, in order to avoid leakage in the validation split. I took the most confident 4000 pos/neg and 2000 neutral, based on sum of start/end logits. For neutral pseudo-samples selected_text = text. <br>
- ensembling: average logits of 3 seeds 5-fold models + 3 seeds models running on the entire training dataset (adding 1 epoch) => +0.001 LB</p>
<p>What did not work: well, all the rest, in particular: <br>
- augmentation (synonym, word swapping, character replacement, ...)
- pos/neg upsampling
- fancy heads on top of transformers
- ...</p>
<pre><code>def fix_offset(x):
    txt = x['text']
    if txt == '':
        return ''
    pred = x['selected_text']
    new_txt = ' '.join(txt.strip().split())
    start = new_txt.find(pred)
    end = start + len(pred)
    to_end = len(new_txt) - end
    return txt[start: len(txt) - to_end]
</code></pre>
<p>It's a semi-supervised learning technique which consists of using the model itself to label new data, which is then added to the training dataset. Typically only the most confident predictions are retained. </p>
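<p>A minimal sketch of the confidence-based selection described earlier, assuming a dataframe of predictions on unlabeled tweets with the best start/end logits per row (all column names are illustrative):</p>
<pre><code>import pandas as pd

def select_pseudo_labels(pred_df, n_polar=4000, n_neutral=2000):
    """Keep only the most confident predictions, ranked by the sum of the
    best start and end logits, separately for polar and neutral tweets."""
    pred_df = pred_df.copy()
    pred_df["confidence"] = pred_df["start_logit"] + pred_df["end_logit"]

    polar = pred_df[pred_df["sentiment"] != "neutral"].nlargest(n_polar, "confidence")
    neutral = pred_df[pred_df["sentiment"] == "neutral"].nlargest(n_neutral, "confidence")
    # for neutral pseudo-samples the whole text is used as the selected text
    neutral = neutral.assign(selected_text=neutral["text"])

    return pd.concat([polar, neutral]).drop(columns="confidence")
</code></pre>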
Tweet Sentiment Extraction | bronze + magic = gold ~ 8th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Tweet Sentiment Extraction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to Kaggle for holding this competition and congratulations to all the winners!</p>
<p>Well, I noticed this competition right after it started but didn't really start working on it until the last month. But one month actually proved to be too long for me because I quickly ran out of ideas in 10 days. And what made it worse and frustrating is that the LB was seriously unstable. I was really on the verge of giving up until someone posted on the forum and said that there's a post-processing magic. And lucky for me, after grinding really hard for one day, I found the magic and perfected it in 2 days.
But then, for the last 2 weeks, I still kind of gave up and started playing PC & video games instead, because I projected that by the end of this competition the gold zone would be flooded with 0.73. And also, there would be a severe shake-down and I just didn't have any great and robust ideas. To really secure a gold spot you'd basically need to get to public 0.74, which I didn't think was possible for me.
Well, in the end, the result turned out to be not too bad for me. I got my first ever solo gold. If I had tried too hard and overfit the LB too much, who knows what would have happened. :)</p>
<p>Magic
I'll use the following sample to illustrate the magic:
's awesome' = original answer = answer_
' He`s awesome... Have you worked with him before? He`s a good friend.' = original text = text_
'He`s awesome... Have you worked with him before? He`s a good friend.' = no extra blank = text
'awesome' = true answer = answer
The theory is very simple actually: the shift (noise) happens when, in the original text, there are extra blanks before the answer's end.
Suppose answer = text[s:e]; then answer_ = text_[s:e+2].
Code is also very simple actually:
<pre><code>def post_processing(answer, text, text_):
    # note: find_(a, b) behaves like a.find(b) for strings (clarified in a reply below)
    if len(text_) > 0:
        answer = answer.strip().lower()
        text = text.lower()
        p = find_(text_, " ")
        q = find_(text, answer)
        if text_[0] == " " or 0 <= p <= q + len(answer) - 2:
            answer = text_[q:q+len(answer)+2]
    return answer
</code></pre>
This is at least 99% true, I think. There are some exceptions of course, because they're originally wrong.</p>
<p>Models
I basically just ensemble several models with minor differences in architecture and cleaning. Sorry to disappoint you. :)
model 1: Similar to those high-scoring public kernels. The only difference is that I remove all the noise and errors in the answer. Public LB: 0.716
model 2: Same as model 1 except that I remove the CNN head. Public LB: 0.714
model 3: Same as model 2 except that I complete the noise. For example, in the above sample, I'll change the answer to 'He`s awesome'. Public LB: 0.714
model 4: Do pre-processing by reversing the post-processing magic, then follow the same procedure as model 1: Public LB: 0.719
ensemble: model1+model2+model3+4.5*model4: Public LB: 0.723
+ post-processing: Public LB: 0.731 Private LB: 0.729
But if we take out everything that's related to the magic, that is, remove the post-processing and model 4, how much can I get?
Well, I can tell you: model1+model2+model3: Public LB: 0.718. And don't forget about the 0.002 shake-down (I consider this lucky!). So, seriously, bronze is really an optimistic estimate.</p>
<p>I have to put the pre-processing code in this picture, because there are some special symbols in it that make the whole comment bugged.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F516737%2F1f699a8a36a419617eee2f0dbd9cafdb%2F.png?generation=1592387324189595&alt=media" alt=""></p>
<p>Yes, if x,y are both string, same as x.find(y).</p> |
Two Sigma Financial Modeling Challenge | #4 Public, #13 Private Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Two Sigma Financial Modeling Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>1)- CV tests</p>
<p>2)- feature selection algorithms</p>
<p>3)- Script time run limit</p>
<p>4)- It's a target transformation. It helps decrease correlation with other models. Improving the blend.</p>
<p>5)- It's the inverse transform of target</p>
<p>6)- Either other models didn't improve CV results, or there were script time limitations</p>
<p>7)- absolutely no!</p>
<p>It helps generalize well, since it trains online, but it's not that much. We have other solutions without that approach that would have placed us at #1 if we had selected them :-D</p>
<p>Our solution (Giba and Xpeuler) is a basic blend of 7 models:</p>
<ol>
<li>Ridge A - Selected Features trained on SignedExp( y+1 ). Some cleaning on features and filter on target instances.</li>
<li>Ridge B - Selected features and some cleaning on features and filter on target instances.</li>
<li>Ridge C - Selected features and some cleaning on features and filter on target instances. </li>
<li>Extra Trees - Selected features. 222 trees</li>
<li>XGB - Selected Features and tuned hyperparameters on "all" trainset.</li>
<li>Ridge online rolling fit - Trained every 100 steps at submission time. Features: some lags of [technical20-technical30]. Target used: lag1 of [technical20-technical30] </li>
<li>Variance by Step(day) - Simple variance calculated over all 'Id' per day </li>
</ol>
<p>The final predictions are a weighted average of those 7 models.</p>
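<p>As an illustration of model 6, an online rolling refit inside the submission loop could look roughly like this (a hedged sketch; the actual features, buffer handling and refit schedule in the kernel may differ):</p>
<pre><code>import numpy as np
from sklearn.linear_model import Ridge

class RollingRidge:
    """Refit a Ridge model every `refit_every` timestamps on the data seen so far."""
    def __init__(self, refit_every=100, alpha=1.0):
        self.refit_every = refit_every
        self.model = Ridge(alpha=alpha)
        self.X_buf, self.y_buf = [], []
        self.steps, self.fitted = 0, False

    def update(self, X_step, y_step):
        self.X_buf.append(X_step)
        self.y_buf.append(y_step)
        self.steps += 1
        if self.steps % self.refit_every == 0:
            self.model.fit(np.vstack(self.X_buf), np.concatenate(self.y_buf))
            self.fitted = True

    def predict(self, X_step):
        if not self.fitted:
            return np.zeros(len(X_step))
        return self.model.predict(X_step)
</code></pre>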
<p>Cross-validation for model performance and feature selection was done using several approaches:</p>
<ul>
<li><p>2 folds: timestamp > 906 and timestamp <= 906</p></li>
<li><p>5 kfolds</p></li>
<li><p>rolling fit for ts> 906</p></li>
</ul>
<p>Our solution is available at Kernels. Run time is about 35 minutes. <br>
<a href="https://www.kaggle.com/titericz/two-sigma-financial-modeling/team-rocket-13">https://www.kaggle.com/titericz/two-sigma-financial-modeling/team-rocket-13</a> </p>
<p>If you press the "RUN" button you will get a very good overfitted score. That's because XGB was trained on all the data.</p>
<p>Sorry for the lack of comments.</p>
<p>Take care, there are some tricks in the script that you could call "black magic" ;-P</p>
<p>Have fun...as we had!</p>
<p>obs. Don't forget to UP vote ;-D</p> |
Two Sigma Financial Modeling Challenge | E2, Public LB #15, Private LB #8 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Two Sigma Financial Modeling Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Apologies for late reply Roberto, with a new-born baby at home and demanding daily job my schedule is pretty hectic this year so far. </p>
<p>By characteristic, I mean the performance evolution of the strategy by itself and also its contribution to the overall ensemble. We were calculating rsqrd at every time-point and tracking the evolution of this through time. I believe targeting a stable rsqrd characteristic for the full validation period (and there you could have gotten creative in how to define this period) contributed significantly to our stable score across the public and private leaderboard scorings.</p>
<p>Hi ebdd0304e745872f59bc5597a2f20fb9f7a5a8a704098a5aedbb0bcefc24d11b,</p>
<p>• We approached here from a combination of bottom-up and top-down approaches. i.e. We tracked the overall score of individual features, pairs, triplets and also looked at final score and feat importances while starting with a much broader set of features (based on the first part) looking to the effect of adding and removing features. We didn't track the error slope analytically but we were looking at model error at multiple points rather than just final score. </p>
<p>• It was a time based split. In more traditional scenarios where we have a lot more information about the underlying data and problem at hand I tend to use only walk-forward (time ordered oos) type of validations but for this competition we didn't use data strictly from past but preserved the time order. </p>
<p>Thanks.</p>
<p>Thanks for the nice words Roberto. In timeseries literature differencing is being used quite often to treat non-stationarity, that aside, as you pointed out it was also useful to reconstruct important bit of information in some of the features. We haven't approached at it as a lowpass filter to reconstruct other features though, interesting idea yet it has been a while since I think in DSP terms despite being formally trained as an electronics engineer for my B.S. . :) For us the most important feedback was the rate of learning increasing significantly once the difference terms introduced. </p>
<p>We have played with cross feature interactions but it obviously extended the feature space too much that we didn't feel comfortable to include them at the end. Any linear combination supposed to be mostly captured by the boosted trees we were training, for nonlinear terms we only used difference on self quadratic terms for a few features. Selection was done primarily with cross-validation but we also took into account the characteristic (evolution of the error / objective) introduced by the feature, rather than sticking only to the highest final r-score.</p>
<p>Hi Grizzly_A, we actually tried that, too. i.e. train two separate models based on the y-variance, this can either be cross-sectional or time-clustered for separate id's or even for full universe. We haven't checked if the output was a reflection of pure trend or mean-reversion though, which I would be surprised to see for the relatively complex model we have utilized. </p>
<p>Of course, a prime financial time series stylized fact is volatility clustering and it was possible to estimate the previous timestamp volatility relatively good using t20 & t30 as you suggested. Therefore the idea of training separate models for calm-volatile periods is possible and makes some intuitive sense, however with our implementation we observed significantly worse public LB scores compared to our base model which was trained with full data. It gave quite good results for private LB later, though. So, briefly, I believe there lies some value in training models separately for distinct volatility characteristics periods but it is not immediately straightforward and needs some investigation. </p>
<p>Coming to your main question, if it is possible to capture mean reversion or trend by a pure ML model. I have doubts on this one unless you reformalize your problem into that. It would certainly capture some part of it, if trend or mr had strong enough footprints in the data, but it is extremely unlikely that it will only capture just that with a real-life dataset. One trick might be to define your model explicitly to capture either trend or mean-reversion as I mentioned above, i.e transform y's into a new feature which scores some trend/mr metric and then train your supervised model for that feature. However, our main goal with this part of the ensemble was to come up with an extremely simple model, almost a non-parametric one, to contribute some orthogonal information to the first overfitted model. If strong enough trend or mr exist consistently, the first stronger learner supposed to capture those, too. But it is expected to blend it with many other things, some real information and some noise, whereas this 'dumb' (but yet with positive expectancy) model was just sticking this one simple thing and diversifying the composite model when the non-stationarity was hurting it the most. </p>
<p>This was actually our first Kaggle competition and therefore we are not really disappointed to miss the prize by the smallest margin. I should also add that we spent the majority of the competition time learning the basics of the new Kaggle environment and especially setting up ours, and only concentrated heavily on solving the problem at hand in the last few weeks. That being said, we both previously spent some healthy amounts of time on machine learning during our PhD studies and later in our professional lives. One of us also spent the majority of the last decade developing and trading quantitative strategies. Therefore we were probably not the most naive and uninitiated when it comes to solving a data science problem in the financial time series domain. </p>
<p>So how we attacked it? Looking at the characteristics of the y's to be forecasted, it was clear that some heavy normalization was in place from the beginning. After playing with a few commonly employed methods to normalize time-series returns, like using rolling volatility (aka standard deviations) or cross-sectional normalizations, we came to the conclusion that it was a mixture and not straightforward to reverse-engineer. Adding to this the opaque nature of other indicators, i.e. we didn't know what those are, if they were also normalized, frequencies they were operating etc., the more traditional approaches were out of the window. And like quite a few other successful participants did, we also decided to start with a generic learner therefore. We selected XGB, due to its robustness in terms of feature handling and the wide model complexity range it can target out of the box. The efficient and fast implementation that yielded to test even larger models quickly was certainly another point which helped us to decide to start here instead of some more universal function approximators like neural networks for example. </p>
<p>One of the intuitions we had early was to train a relatively strong model and overfit to the training and validation data to some degree, with the hope of almost guaranteeing not to miss any genuinely useful feature. As the signal to noise ratio in financial time series is famously low, every bit of useful information counts. We also thought that a strong learner would have a higher chance to capture a potentially leaking (it is hard to fully normalize things if you are doing normalizations across different dimensions at the same time as we hypothesized) return based information. The nature of financial data makes it hard to learn a massively wrongly biased model once enough data and regularization employed (typically you may not learn anything useful though) and thus we want to control at least the model variance risk, which we believe at the end proven itself useful by providing stable scores by public and private leaderboards for our submissions.</p>
<p>After some feature engineering via adding first and second order differences as well as some interaction terms and cross sectionally normalized forms of input features, we played with various cross-validation scenarios and hand-picked a largish set of features that gave us first good score in public LB by a single XGB model.</p>
<p>As the next step, since we already had an overfitted model we want to extend the ensemble with a very simple heuristic based approach. Here domain knowledge what to expect from financial time series certainly helped. At the end it was a model which was switching between some weak form of trend and mean reversion depending on the return dispersion at each timestamp. </p>
<p>As the final addition, we came up with a model which bets only rarely to capture inflection points from market extremes. By forecasting a reversal after some extreme risk normalized performance of the complete universe, this sub-strategy aimed to deliver a positively skewed return profile. One can think this strategy as an option bought for the cheap. That model helped in training and public leaderboard scoring significantly but contributed only marginally in the final private set. </p>
<p>Those 3 models had different strengths and weaknesses and their ensemble proven to be a quite successful strategy with a stable public/private LB characteristic. The complete script took around 30 minutes to run, by majority of the computation time being spent on XGB model. We are not planning to make our code public at this moment but we hope to be able to contribute to the discussion of this very interesting and fun competition by this writing. </p>
<p>Before finishing, we want to congratulate the Kaggle team for joining forces with Google Cloud. We believe that this will help to grow their reach and eventually machine learning / data science community will benefit the most. And finally, many thanks to the team at Two-Sigma who put together that exciting challenge, we greatly enjoyed this code based competition and looking forward for similar fun in the future. </p> |
Two Sigma Financial Modeling Challenge | Solution Sharing | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Two Sigma Financial Modeling Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations Dr. Knope, NimaShahbazi & mchahhou, The Wolf of Wall Street, rnrq....</p>
<p>As the preliminary standings are available, I thought I would start a solution sharing discussion.
There was a huge shakeup, which was expected, so it will be interesting to know the solutions of the leaders, how they decided on their final submissions and, most importantly, what their CV strategies were.</p>
Two Sigma: Using News to Predict Stock Movements | 175th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Two Sigma: Using News to Predict Stock Movements
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello all, </p>
<p>I was debating whether to post this solution or not; I used a simple model on market data. </p>
<p>Kernel Link : <a href="https://www.kaggle.com/suchith0312/multiple-time-based-validation-try-xgb/data?scriptVersionId=9111086">https://www.kaggle.com/suchith0312/multiple-time-based-validation-try-xgb/data?scriptVersionId=9111086</a></p>
<p>Ps : I think that I was lucky !!</p> |
U.S. Patent Phrase to Phrase Matching | #24 solution (24 again) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes. I train deberta-v2-xxlarge using a very small learning rate (3e-6 or 4e-6) with 500 warm-up steps. In total I train 3 epochs with batch size 24 on an RTX A5000 GPU. CV is not so good, 0.81+; public LB 0.8234.</p>
<p>Congratulations on winning first place! Compared to a simple average blend, stacking gave ~+0.0012 (CV/LB) and +0.0013 (PB), but I didn't test the blend on my best CV.</p>
<p>Congratulations! Very impressive solution that put magic to the gold zone!</p>
<p>Thank you! And congratulations for getting your first silver!</p>
<p>Thank you!</p>
<blockquote>
<p>did you use any tricks to perform inference faster? Did your sub take the full 9 hours?</p>
</blockquote>
<p>I use dynamic padding for most of my models at inference time, and I lowercase all the text; deberta-v3-large takes 15 minutes when scoring. My sub takes 3 hours.</p>
<blockquote>
<p>did you have any adversarial methods(AWP, FGM,etc) in your ensemble?</p>
</blockquote>
<p>I didn't use them.</p>
<p>Thanks kaggle and hosts for host this interesting competition. Congradulations to all the winners, especially to my first teammate <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> (NBME competition) for getting his first NLP gold!</p>
<p>My solution is a stacking of 120 models (20 * 6). They are deberta-v1/2/3-base/large/xlarge/xxlarge, funnel-large/xlarge, bert-for-patents, electra-large and cocolm-large, trained with different loss functions (MSE, BCE), OR with anchor and target reversed, OR with or without dynamic padding, AND with different pooling layers. In the second-level model, I add some static features including jaccard_distance, edit_distance, whether the target is duplicated, the anchor/target length difference, the same-words ratio and the context (context in the second level helps a lot). The stacking model uses a single CatBoostRegressor, with Optuna to find the best hyper-parameters. </p>
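<p>A minimal sketch of such a second-level model, assuming an out-of-fold prediction matrix from the first-level models plus the static features mentioned above (the feature computations and names are my own simplification, not the exact ones used):</p>
<pre><code>import pandas as pd
from catboost import CatBoostRegressor

def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def build_stack_features(df, oof_preds):
    # df: rows with anchor/target/context; oof_preds: (n_rows, n_models) first-level predictions
    feats = pd.DataFrame(oof_preds, columns=[f"model_{i}" for i in range(oof_preds.shape[1])])
    feats["jaccard"] = [jaccard(a, t) for a, t in zip(df["anchor"], df["target"])]
    feats["len_diff"] = (df["anchor"].str.len() - df["target"].str.len()).abs().values
    feats["target_duplicated"] = df.duplicated("target", keep=False).astype(int).values
    feats["context"] = df["context"].astype("category").cat.codes.values
    return feats

def fit_stacker(train_df, oof_matrix):
    X = build_stack_features(train_df, oof_matrix)
    stacker = CatBoostRegressor(loss_function="RMSE", verbose=0)
    stacker.fit(X, train_df["score"])
    return stacker
</code></pre>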
<p>My single best model is deberta-v3-large with public LB 0.8352. Models are trained using different settings: differential learning rates, random re-initialization of the top layers, warm-up ratio, mask augmentation, …</p>
<p>Although my name is "trust luck", I trusted CV (0.8557) in this competition, and survived on the private LB.</p>
U.S. Patent Phrase to Phrase Matching | 10th place Solution : Single model public lb 0.8562, private lb 0.8717 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> Thank you for comment! I'm sorry these parts were not my work.</p>
<p>So I will only tell you about the referenced link.</p>
<ul>
<li><p>KD (you are also Japanese):</p>
<ul>
<li><a href="http://codecrafthouse.jp/p/2018/01/knowledge-distillation/" target="_blank">http://codecrafthouse.jp/p/2018/01/knowledge-distillation/</a></li>
<li>It is also written in the 2nd solution.</li></ul></li>
<li><p>SVR : </p>
<ul>
<li><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-boost-17-8" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-boost-17-8</a></li></ul></li>
<li><p>mixout : </p>
<ul>
<li>published on public notebooks on this competition.</li>
<li>The mixout algorithm is easy to understand here (you are also Japanese):<br>
<a href="https://www.ai-shift.co.jp/techblog/2170" target="_blank">https://www.ai-shift.co.jp/techblog/2170</a></li></ul></li>
</ul>
<p>Thank you.</p>
<p><a href="https://www.kaggle.com/nitishraj" target="_blank">@nitishraj</a> Thank you for comment ! You can see in another comment here. </p>
<p>AWP = Adversarial Weight Perturbation<br>
<a href="https://arxiv.org/abs/2004.05884" target="_blank">https://arxiv.org/abs/2004.05884</a></p>
<p>notebooks :<br>
<a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a><br>
<a href="https://www.kaggle.com/code/junkoda/fast-awp/notebook" target="_blank">https://www.kaggle.com/code/junkoda/fast-awp/notebook</a></p>
<p><a href="https://www.kaggle.com/abdulkadirguner" target="_blank">@abdulkadirguner</a> Thank you very much. Yes just using the AutoModelForSequenceClassification.</p>
<p>First of all, I would like to thank kaggle and the staff for hosting such an interesting competition.<br>
Also, I really appreciate my teammates, <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a>, <a href="https://www.kaggle.com/ynishizono" target="_blank">@ynishizono</a>, <a href="https://www.kaggle.com/muhammad4hmed" target="_blank">@muhammad4hmed</a> Congratulations to become the competition master and <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> Congratulations to 2nd gold medal !</p>
<h1>1. Summary (Our Magic and got single model public LB : 0.8562, private : 0.8717)</h1>
<p>Our magic was to group the target words per "anchor + context" and attach them to the end of each sentence. Maybe it's easier to understand by looking at the code, so I'll share it.</p>
<pre><code>train['group'] = train['context'] + " " + train['anchor']
allres = {}
for text in tqdm(train["group"].unique()):
tmpdf = train[train["group"]==text].reset_index(drop=True)
texts = ",".join(tmpdf["target"])
allres[text] = texts
train["target_gp"] = train["group"].map(allres)
train["input"] = train.anchor + " " + tokenizer.sep_token + " " + train.target + " " + tokenizer.sep_token + " " + train.title + " " + tokenizer.sep_token + " " + train.target_gp
</code></pre>
<p>For example, we get a sentence like this as input, and train on it.</p>
<pre><code>abatement [SEP] abatement of pollution [SEP] HUMAN NECESSITIES. FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL [SEP] abatement of pollution,act of abating,active catalyst,eliminating process,forest region,greenhouse gases,increased rate,measurement level,minimising sounds,mixing core materials,multi pollution abatement device,noise reduction,pollution abatement,pollution abatement incinerator,pollution certificate,rent abatement,sorbent material,source items pollution abatement technology,stone abutments,tax abatement,water bodies
</code></pre>
<p>By doing so, we thought that we could not only train on one sentence, but also train while considering the correlation of the target words for each "anchor + context" through attention. Moreover, in order to avoid leakage, a GroupKFold on "anchor + context" was adopted. As a result, this magic boosted our models (in the best case, public LB 0.8418 → 0.8562) two days before closing.</p>
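<p>A minimal sketch of that split, reusing the <code>group</code> column from the snippet above (the number of folds is a parameter; we used 5/10/15 depending on the model):</p>
<pre><code>from sklearn.model_selection import GroupKFold

def assign_folds(train, n_splits=5):
    # the same "context + anchor" group never appears in both train and validation
    gkf = GroupKFold(n_splits=n_splits)
    train["fold"] = -1
    for fold, (_, valid_idx) in enumerate(gkf.split(train, groups=train["group"])):
        train.loc[train.index[valid_idx], "fold"] = fold
    return train
</code></pre>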
<p>This idea was decisive for getting into the gold medal zone. (With only this single model we could get the gold.)</p>
<p>------------------Details below---------------------------------------</p>
<h1>2. Preprocess and cross validation</h1>
<p>The preprocessing and cross-validation were proposed by <a href="https://www.kaggle.com/harshit92" target="_blank">@harshit92</a>. Basically, we used lower case, used [sep] instead of [SEP], and removed ";", "," and "." like this.</p>
<pre><code>train['input'] = train['anchor'] + ' [SEP] ' + train['target'] + ' [SEP] ' + train['context_text']
train['input'] = train['input'].apply(lambda x: x.lower())
train['input'] = train['input'].apply(lambda x: x.replace(';','').replace(',','').replace('.',''))
</code></pre>
<p>And he found a boost by changing 5-fold to 10-fold CV, as in the NBME 1st place solution<br>
(public LB: 5-fold 0.8395 → 10-fold 0.8435). These were strong tools for us.</p>
<h1>3. Model making</h1>
<h2>3.1 How to catch the problem</h2>
<p>We did not just solve the single-target regression; to make it more diverse, we also framed the problem as follows.</p>
<h3>3.1.1 BCE with binning</h3>
<p>The score value was replaced as follows, and the sigmoid was calculated for each prediction and averaged.</p>
<pre><code>0:[0,0,0,0], 0.25:[1,0,0,0], 0.5:[1,1,0,0], 0.75:[1,1,1,0],1:[1,1,1,1]
output = sigmoid in each prediction and averaged
For example, prediction = [0.1, 0.3, 0.2, 0.4], output = (0.1 + 0.3 + 0.2 + 0.4) /4
</code></pre>
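<p>A small sketch of this encoding/decoding, assuming the model head outputs 4 logits per example and is trained with <code>nn.BCEWithLogitsLoss</code>:</p>
<pre><code>import torch

def encode_bce_bins(score):
    # 0 -> [0,0,0,0], 0.25 -> [1,0,0,0], 0.5 -> [1,1,0,0], 0.75 -> [1,1,1,0], 1.0 -> [1,1,1,1]
    k = int(round(score * 4))
    return torch.tensor([1.0] * k + [0.0] * (4 - k))

def decode_bce_bins(logits):
    # average the per-bin sigmoids to map the 4 logits back to a score in [0, 1]
    return torch.sigmoid(logits).mean(dim=-1)
</code></pre>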
<h3>3.1.2 Softmax with binning</h3>
<p>The score was replaced as follows, and the softmax was calculated for each prediction and combined by a convolution (a weighted sum) with the bin values.</p>
<pre><code>0:[0, 0, 0, 0, 0], 0.25:[0,1,0,0,0], 0.5:[0,0,1,0,0], 0.75:[0,0,0,1,0],1:[0,0,0,0,1]
output = softmax in each prediction and convolution operation
For example, prediction = [0.1, 0.3, 0.2, 0.4, 0], output = 0*0.1 + 0.25*0.3 + 0.5*0.2 + 0.75*0.4 +1.0*0
</code></pre>
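<p>And the analogous sketch for the 5-class softmax variant, where the final prediction is the expectation of the bin values under the predicted distribution:</p>
<pre><code>import torch

BIN_VALUES = torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])

def encode_softmax_bin(score):
    # class index 0..4 for scores 0, 0.25, 0.5, 0.75, 1.0 (used with CrossEntropyLoss)
    return int(round(score * 4))

def decode_softmax_bins(logits):
    # expected score: sum of bin values weighted by the softmax probabilities
    probs = torch.softmax(logits, dim=-1)
    return (probs * BIN_VALUES).sum(dim=-1)
</code></pre>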
<h2>3.2 AWP</h2>
<p>As with Feedback and NBME, we were able to improve our score with AWP.<br>
I got the code from the following Feedback <a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">code</a></p>
<p>This boosted my model from public LB 0.8394 to 0.8418.</p>
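<p>The linked notebooks contain the full implementation we used; the general AWP idea (perturb the weights in the gradient direction, run an extra adversarial forward/backward pass, then restore the weights) can be sketched as below. The hyper-parameters and the parameter filter are illustrative.</p>
<pre><code>import torch

class AWP:
    """Adversarial Weight Perturbation: attack the weights rather than the inputs."""
    def __init__(self, model, adv_lr=1e-4, adv_eps=1e-2, target_suffix="weight"):
        self.model, self.adv_lr, self.adv_eps = model, adv_lr, adv_eps
        self.target_suffix = target_suffix
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and param.grad is not None and self.target_suffix in name:
                self.backup[name] = param.data.clone()
                grad_norm = torch.norm(param.grad)
                if grad_norm != 0 and not torch.isnan(grad_norm):
                    # step in the gradient direction, scaled by the weight norm,
                    # and keep the perturbation inside a relative eps-ball
                    r = self.adv_lr * param.grad / grad_norm * torch.norm(param.data)
                    param.data.add_(r)
                    eps = self.adv_eps * torch.norm(self.backup[name])
                    param.data = torch.max(torch.min(param.data, self.backup[name] + eps),
                                           self.backup[name] - eps)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}
</code></pre>
<p>It is typically called between the normal backward pass and the optimizer step: <code>loss.backward()</code>, <code>awp.attack()</code>, an extra forward/backward pass on the same batch, <code>awp.restore()</code>, then <code>optimizer.step()</code>.</p>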
<h2>3.3 Other tips that worked well</h2>
<ul>
<li>Mixout by <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a> </li>
<li>Knowledge distillation(KD) by <a href="https://www.kaggle.com/ynishizono" target="_blank">@ynishizono</a> </li>
<li>text embedding with SVR </li>
<li>mix various loss (ex. MSE + Corr)</li>
<li>dynamic padding for some arches </li>
</ul>
<h2>3.4 Didn't work well</h2>
<ul>
<li>MLM</li>
<li>pseudo labeling (export all combination of anchor and target per context)</li>
<li>Random noise of [MASK]</li>
<li>change the order of input</li>
<li>post process of Gradiant Boost</li>
<li>adding per subsection title (some case is better.)</li>
<li>concat text embedding and SVR like PetFinder 1st solution</li>
</ul>
<h1>4. Ensemble</h1>
<p>For our ensemble, we found the Nelder-Mead blend coefficients from the OOF files. Candidates were automatically exported by <a href="https://www.kaggle.com/trushk" target="_blank">@trushk</a>'s code, which uses a mix of hill climbing and the Nelder-Mead algorithm. Finally, the models used were selected based on over 90 OOF files and adjusted manually.</p>
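<p>A minimal sketch of the Nelder-Mead step, assuming a matrix of OOF predictions (one column per model) and the true scores:</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

def find_blend_weights(oof_matrix, y_true):
    # oof_matrix: (n_samples, n_models); maximize the Pearson correlation of the blend
    n_models = oof_matrix.shape[1]

    def neg_corr(weights):
        return -pearsonr(y_true, oof_matrix @ weights)[0]

    res = minimize(neg_corr, x0=np.ones(n_models) / n_models, method="Nelder-Mead")
    return res.x
</code></pre>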
<p>Here is the list of models for the final submission. </p>
<table>
<thead>
<tr>
<th>model id</th>
<th>model</th>
<th>Feature</th>
<th>Task</th>
<th>Magic</th>
<th>kfold</th>
<th>cv</th>
<th>public LB</th>
<th>private LB</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-large</td>
<td>AWP</td>
<td>MSE</td>
<td>TRUE</td>
<td>15</td>
<td>0.8605</td>
<td>0.8562</td>
<td>0.8717</td>
<td>0.237</td>
</tr>
<tr>
<td>2</td>
<td>electra-large-discriminator</td>
<td></td>
<td>MSE</td>
<td>TRUE</td>
<td>15</td>
<td>0.8456</td>
<td>0.8406</td>
<td>0.8534</td>
<td>0.166</td>
</tr>
<tr>
<td>3</td>
<td>electra-large-discriminator</td>
<td></td>
<td>MSE</td>
<td></td>
<td>15</td>
<td>0.8381</td>
<td>0.8339</td>
<td>0.8486</td>
<td>0.049</td>
</tr>
<tr>
<td>4</td>
<td>bert-for-patents</td>
<td>KD + SVR</td>
<td>BCE binning</td>
<td></td>
<td>5</td>
<td>0.8339</td>
<td></td>
<td></td>
<td>0.087</td>
</tr>
<tr>
<td>5</td>
<td>deberta-v3-large</td>
<td>KD + SVR</td>
<td>MSE</td>
<td></td>
<td>5</td>
<td>0.8470</td>
<td></td>
<td></td>
<td>0.129</td>
</tr>
<tr>
<td>6</td>
<td>deberta-v3-large</td>
<td></td>
<td>BCE binning</td>
<td>TRUE</td>
<td>5</td>
<td>0.8471</td>
<td>0.8512</td>
<td>0.8664</td>
<td>0.067</td>
</tr>
<tr>
<td>7</td>
<td>deberta-v3-large</td>
<td>Mixout</td>
<td>Softmax binning</td>
<td>TRUE</td>
<td>5</td>
<td>0.8440</td>
<td>0.8506</td>
<td>0.8644</td>
<td>0.057</td>
</tr>
<tr>
<td>8</td>
<td>bert-for-patents</td>
<td>Mixout</td>
<td>Softmax binning</td>
<td>TRUE</td>
<td>5</td>
<td>0.8340</td>
<td></td>
<td></td>
<td>0.084</td>
</tr>
<tr>
<td>9</td>
<td>deberta-v3-large</td>
<td></td>
<td>BCE binning</td>
<td>TRUE</td>
<td>10</td>
<td>0.8463</td>
<td></td>
<td></td>
<td>0.092</td>
</tr>
<tr>
<td>10</td>
<td>deberta-v3-large</td>
<td></td>
<td>BCE binning</td>
<td></td>
<td>10</td>
<td>0.8335</td>
<td>0.8390</td>
<td>0.8579</td>
<td>0.073</td>
</tr>
</tbody>
</table>
<p>Our final CV is 0.8721, public LB is 0.8604, private LB is 0.8750 (11th).</p>
<p>For reference, this is the CV/LB relationship of all our models. The different colors indicate different team members. We discussed based on this. </p>
<p><img src="https://raw.githubusercontent.com/chumajin/pppm/main/cvlb.jpg" alt="lb"></p>
<h1>5. Acknowledgments</h1>
<p>We couldn't have gotten this score on our own. Thank you to everyone who shared past knowledge and code! We respect you. </p>
<p>And I think the reason we got the gold medal was largely the sharing and discussion of the daily results. Eventually it led to the magic. We are the best team! Thank you!!</p>
<p>From our team :</p>
<p><img src="https://raw.githubusercontent.com/chumajin/pppm/main/teampicture.jpg" alt="Our team"></p>
<p>Thank you !!</p>
<p>Thank you very much !!</p>
<p>Yes. Adversarial Weight Perturbation ! <br>
<a href="https://www.kaggle.com/killimi" target="_blank">@killimi</a> <a href="https://www.kaggle.com/shravankoninti" target="_blank">@shravankoninti</a> Thank you for reference link.</p>
<p><a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a> Thank you for comment and rooting ! Congrats to your solo gold and become competition master too !!</p>
<p>Thank you so much!!</p>
<p><a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> Thank you for your congratulations and explanations! fixed.<br>
I am glad to receive congratulations from you who I respect.<br>
Congratulations on your gold medal, too.</p>
<p><a href="https://www.kaggle.com/mingccc" target="_blank">@mingccc</a> Thank you for comment! Looking at the solutions of others, there were more variations. We only had two days from noticing the magic, but we were lucky to notice.</p>
<p><a href="https://www.kaggle.com/mpwolke" target="_blank">@mpwolke</a> Thank you ! I'm really happy to receive your congratulations. <br>
And I'm sorry to be late. Congrats to become grandmaster. This is the result of your efforts!!</p>
<p><a href="https://www.kaggle.com/xbustc" target="_blank">@xbustc</a> We did it ! I'm really happy to receive your congratulations. Thank you very much !!!</p>
<p><a href="https://www.kaggle.com/skraiii" target="_blank">@skraiii</a> Thank you for comment! And congratulation to you become new master too.<br>
This probably answers both of the two questions, it does the same thing as the notebook below.</p>
<p><a href="https://www.kaggle.com/code/cdeotte/rapids-svr-boost-17-8" target="_blank">https://www.kaggle.com/code/cdeotte/rapids-svr-boost-17-8</a></p>
<p>Step1: Train NN backbone and head<br>
Step2: Train RAPIDS SVR Head<br>
Step3: Merge step1 and step2 result</p> |
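<p>A minimal sketch of steps 2 and 3, assuming pooled transformer embeddings have already been extracted for the train and test rows (shown with scikit-learn's SVR; the referenced notebook uses the RAPIDS cuML equivalent):</p>
<pre><code>from sklearn.svm import SVR

def svr_head(train_emb, train_scores, test_emb, nn_test_preds, blend_weight=0.5):
    # step 2: fit an SVR on the frozen NN embeddings
    svr = SVR(C=1.0, kernel="rbf")
    svr.fit(train_emb, train_scores)
    svr_preds = svr.predict(test_emb)
    # step 3: merge the NN head and SVR head predictions
    return blend_weight * nn_test_preds + (1 - blend_weight) * svr_preds
</code></pre>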
U.S. Patent Phrase to Phrase Matching | 12th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Introduction</h2>
<p>In the patent matching dataset, contestants need to judge the similarity of two phrases, one being the anchor and the other the target, and output the similarity between the two under different semantics (contexts), in the range 0-1.</p>
<p>Our team id is xlyhq, public LB rank 13th, private LB rank 12th. Thank you very much <code>@heng zheng</code>, <code>@pythonlan</code>, <code>@leolu1998</code>, <code>@syzong</code>. The hard work and dedication of the four teammates finally, and luckily, got us the gold medal.</p>
<p>Since our core ideas are similar to those of the other top teams, here we mainly share our competition history and the specific results of related experiments, as well as some interesting attempts.</p>
<h2>Text processing</h2>
<p>The dataset mainly includes anchor, target and context fields, and additional text splicing information. During the competition, we mainly tried the following splicing attempts:</p>
<ul>
<li>v1: test['anchor'] + '[SEP]' + test['target'] + '[SEP]' + test['context_text']</li>
<li>v2: test['anchor'] + '[SEP]' + test['target'] + '[SEP]' + test['context'] + '[SEP]' + test['context_text'], equivalent to directly splicing in codes similar to A47</li>
<li>v3: test['text'] = test['anchor'] + '[SEP]' + test['target'] + '[SEP]' + test['context'] + '[SEP]' + test['context_text'], getting more text for splicing, which is equivalent to splicing in the subcategories under A47, such as A47B, A47C</li>
</ul>
<pre><code>context_mapping = {
"A": "Human Necessities",
"B": "Operations and Transport",
"C": "Chemistry and Metallurgy",
"D": "Textiles",
"E": "Fixed Constructions",
"F": "Mechanical Engineering",
"G": "Physics",
"H": "Electricity",
"Y": "Emerging Cross-Sectional Technologies",
}
titles = pd.read_csv('./input/cpc-codes/titles.csv')
def process(text):
return re.sub(u"\\(.*?\\)|\\{.*?}|\\[.*?]", "", text)
def get_context(cpc_code):
cpc_data = titles[(titles['code'].map(len) <= 4) & (titles['code'].str.contains(cpc_code))]
texts = cpc_data['title'].values.tolist()
texts = [process(text) for text in texts]
return ";".join([context_mapping[cpc_code[0]]] + texts)
def get_cpc_texts():
cpc_texts = dict()
for code in tqdm(train['context'].unique()):
cpc_texts[code] = get_context(code)
return cpc_texts
cpc_texts = get_cpc_texts()
</code></pre>
<p>This splicing method can improve the score a lot, but the text becomes longer and the maximum length has to be set to 300, resulting in slower training.</p>
<ul>
<li>v4: The core splicing method: test['text'] = test['text'] + '[SEP]' + test['target_info']</li>
</ul>
<pre><code># concat target info
test['text'] = test['anchor'] + '[SEP]' + test['target'] + '[SEP]' + test['context_text']
target_info = test.groupby(['anchor', 'context'])['target'].agg(list).reset_index()
target_info['target'] = target_info['target'].apply(lambda x: list(set(x)))
target_info['target_info'] = target_info['target'].apply(lambda x: ', '.join(x))
target_info['target_info'].apply(lambda x: len(x.split(', '))).describe()
del target_info['target']
test=test.merge(target_info,on=['anchor','context'],how='left')
test['text'] = test['text'] + '[SEP]' + test['target_info']
test.head()
</code></pre>
<p>This splicing method can greatly improve the CV and LB scores of the model. By comparing the two different splicing methods v3 and v4, we can see that selecting higher-quality text for splicing improves the model: the v3 method carries a lot of redundant information, while the v4 method carries a lot of critical information at the entity level.</p>
<p>We were very lucky to find the “magic trick”, which other gold medal zone teams refer to in their solutions, in the final days of the competition.</p>
<h2>CV Split</h2>
<p>During the competition, we tried different data partitioning methods, including:</p>
<ul>
<li><code>StratifiedGroupKFold</code>: this split has a smaller gap between the CV and LB scores, and the score is slightly better</li>
<li><code>StratifiedKFold</code>: offline CV is relatively high</li>
<li>Other <code>KFold</code> and <code>GroupKFold</code> variants do not work well</li>
</ul>
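<p>A minimal sketch of the first split; which columns to stratify and group on is our assumption here (binned score for stratification, anchor as the group):</p>
<pre><code>from sklearn.model_selection import StratifiedGroupKFold

def assign_folds(train, n_splits=5, seed=42):
    sgkf = StratifiedGroupKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    strata = (train["score"] * 4).astype(int)   # assumed: stratify on the 5 score levels
    groups = train["anchor"]                    # assumed: group by anchor
    train["fold"] = -1
    for fold, (_, valid_idx) in enumerate(sgkf.split(train, strata, groups)):
        train.loc[train.index[valid_idx], "fold"] = fold
    return train
</code></pre>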
<h2>Loss function</h2>
<p>The main loss functions that can be referred to are:</p>
<ul>
<li>BCE: nn.BCEWithLogitsLoss(reduction="mean")</li>
<li>MSE: nn.MSELoss()</li>
<li>Mixture Loss: MseCorrloss</li>
</ul>
<pre><code>class CorrLoss(nn.Module):
"""
use 1 - correlational coefficience between the output of the network and the target as the loss
input (o, t):
o: Variable of size (batch_size, 1) output of the network
t: Variable of size (batch_size, 1) target value
output (corr):
corr: Variable of size (1)
"""
def __init__(self):
super(CorrLoss, self).__init__()
def forward(self, o, t):
assert(o.size() == t.size())
# calcu z-score for o and t
o_m = o.mean(dim = 0)
o_s = o.std(dim = 0)
o_z = (o - o_m)/o_s
t_m = t.mean(dim =0)
t_s = t.std(dim = 0)
t_z = (t - t_m)/t_s
# calcu corr between o and t
tmp = o_z * t_z
corr = tmp.mean(dim = 0)
return 1 - corr
class MSECorrLoss(nn.Module):
def __init__(self, p = 1.5):
super(MSECorrLoss, self).__init__()
self.p = p
self.mseLoss = nn.MSELoss()
self.corrLoss = CorrLoss()
def forward(self, o, t):
mse = self.mseLoss(o, t)
corr = self.corrLoss(o, t)
loss = mse + self.p * corr
return loss
</code></pre>
<p>The mixture loss function used in our experiments is slightly better than BCE.</p>
<h2>Models</h2>
<p>In order to increase the diversity of the models, we mainly selected variants of different models, including the following five:</p>
<ul>
<li>Deberta-v3-large</li>
<li>Bert-for-patents</li>
<li>Roberta-large</li>
<li>Ernie-en-2.0-Large</li>
<li>Electra-large-discriminator</li>
</ul>
<p>The specific cv scores are as follows:</p>
<pre><code>deberta-v3-large:[0.8494,0.8455,0.8523,0.8458,0.8658] cv 0.85176
bertforpatents [0.8393, 0.8403, 0.8457, 0.8402, 0.8564] cv 0.8444
roberta-large [0.8183,0.8172,0.8203,0.8193,0.8398] cv 0.8233
ernie-large [0.8276,0.8277,0.8251,0.8296,0.8466] cv 0.8310
electra-large [0.8429,0.8309,0.8259,0.8416,0.846] cv 0.8376
</code></pre>
<h2>Training optimization</h2>
<p>According to previous competition experience, we mainly adopted the following model training optimization methods:</p>
<ul>
<li>Adversarial training: Tried FGM to improve model training</li>
</ul>
<pre><code>class FGM():
def __init__(self, model):
self.model = model
self.backup = {}
def attack(self, epsilon=1., emb_name='word_embeddings'):
        # emb_name: replace this with the name of the embedding parameters in your model
for name, param in self.model.named_parameters():
if param.requires_grad and emb_name in name:
self.backup[name] = param.data.clone()
norm = torch.norm(param.grad)
if norm != 0 and not torch.isnan(norm):
r_at = epsilon * param.grad / norm
param.data.add_(r_at)
    def restore(self, emb_name='word_embeddings'):
        # emb_name: replace this with the name of the embedding parameters in your model
        # (kept consistent with the default used in attack() above)
for name, param in self.model.named_parameters():
if param.requires_grad and emb_name in name:
assert name in self.backup
param.data = self.backup[name]
self.backup = {}
</code></pre>
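<p>For reference, FGM is typically wired into the training step like this (a sketch; how the model is called and what the criterion expects depend on your setup):</p>
<pre><code>def train_step_with_fgm(model, inputs, labels, criterion, optimizer, fgm):
    # 1. normal forward/backward to get the clean gradients
    loss = criterion(model(**inputs), labels)
    loss.backward()
    # 2. perturb the embedding weights and accumulate adversarial gradients
    fgm.attack()
    adv_loss = criterion(model(**inputs), labels)
    adv_loss.backward()
    # 3. restore the original embedding weights before stepping
    fgm.restore()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
</code></pre>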
<ul>
<li>Model generalization: added multi-dropout</li>
<li>EMA improves model training</li>
</ul>
<pre><code>class EMA():
def __init__(self, model, decay):
self.model = model
self.decay = decay
self.shadow = {}
self.backup = {}
def register(self):
for name, param in self.model.named_parameters():
if param.requires_grad:
self.shadow[name] = param.data.clone()
def update(self):
for name, param in self.model.named_parameters():
if param.requires_grad:
assert name in self.shadow
new_average = (1.0 - self.decay) * param.data + self.decay * self.shadow[name]
self.shadow[name] = new_average.clone()
def apply_shadow(self):
for name, param in self.model.named_parameters():
if param.requires_grad:
assert name in self.shadow
self.backup[name] = param.data
param.data = self.shadow[name]
def restore(self):
for name, param in self.model.named_parameters():
if param.requires_grad:
assert name in self.backup
param.data = self.backup[name]
self.backup = {}
# initialization
ema = EMA(model, 0.999)
ema.register()
# during training, after the parameters are updated, also update the shadow weights
def train():
optimizer.step()
ema.update()
# before eval, apply the shadow weights; after eval, restore the original model parameters
def evaluate():
ema.apply_shadow()
# evaluate
ema.restore()
</code></pre>
<p>Things we tried that didn't work:</p>
<ul>
<li>AWP</li>
<li>PGD</li>
</ul>
<h2>Model ensemble</h2>
<p>Based on offline cross-validation scores and online score feedback, we blended the models with a weighted average:</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler
MMscaler = MinMaxScaler()
predictions1 = MMscaler.fit_transform(submission['predictions1'].values.reshape(-1,1)).reshape(-1)
predictions2 = MMscaler.fit_transform(submission['predictions2'].values.reshape(-1,1)).reshape(-1)
predictions3 = MMscaler.fit_transform(submission['predictions3'].values.reshape(-1,1)).reshape(-1)
predictions4 = MMscaler.fit_transform(submission['predictions4'].values.reshape(-1,1)).reshape(-1)
predictions5 = MMscaler.fit_transform(submission['predictions5'].values.reshape(-1,1)).reshape(-1)
# final_predictions=(predictions1+predictions2)/2
# final_predictions=(predictions1+predictions2+predictions3+predictions4+predictions5)/5
# 5:2:1:1:1
final_predictions=0.5*predictions1+0.2*predictions2+0.1*predictions3+0.1*predictions4+0.1*predictions5
</code></pre>
<h2>Other attempts</h2>
<ul>
<li>two stage</li>
</ul>
<p>In the early stage, we fine-tuned many different pre-trained models, so the number of features was relatively large. We tried to stack the text statistical features and the model predictions with tree models. At that time, this gave a relatively good fusion effect. The following contains some of the code.</p>
<pre><code># ====================================================
# predictions1
# ====================================================
def get_fold_pred(CFG, path, model):
CFG.path = path
CFG.model = model
CFG.config_path = CFG.path + "config.pth"
CFG.tokenizer = AutoTokenizer.from_pretrained(CFG.path)
test_dataset = TestDataset(CFG, test)
test_loader = DataLoader(test_dataset,
batch_size=CFG.batch_size,
shuffle=False,
num_workers=CFG.num_workers, pin_memory=True, drop_last=False)
predictions = []
for fold in CFG.trn_fold:
model = CustomModel(CFG, config_path=CFG.config_path, pretrained=False)
state = torch.load(CFG.path + f"{CFG.model.split('/')[-1]}_fold{fold}_best.pth",
map_location=torch.device('cpu'))
model.load_state_dict(state['model'])
prediction = inference_fn(test_loader, model, device)
predictions.append(prediction.flatten())
del model, state, prediction
gc.collect()
torch.cuda.empty_cache()
# predictions1 = np.mean(predictions, axis=0)
# fea_df = pd.DataFrame(predictions).T
# fea_df.columns = [f"{CFG.model.split('/')[-1]}_fold{fold}" for fold in CFG.trn_fold]
# del test_dataset, test_loader
return predictions
model_paths = [
"../input/albert-xxlarge-v2/albert-xxlarge-v2/",
"../input/bert-large-cased-cv5/bert-large-cased/",
"../input/deberta-base-cv5/deberta-base/",
"../input/deberta-v3-base-cv5/deberta-v3-base/",
"../input/deberta-v3-small/deberta-v3-small/",
"../input/distilroberta-base/distilroberta-base/",
"../input/roberta-large/roberta-large/",
"../input/xlm-roberta-base/xlm-roberta-base/",
"../input/xlmrobertalarge-cv5/xlm-roberta-large/",
]
print("train.shape, test.shape", train.shape, test.shape)
print("titles.shape", titles.shape)
# for model_path in model_paths:
# with open(f'{model_path}/oof_df.pkl', "rb") as fh:
# oof = pickle.load(fh)[['id', 'fold', 'pred']]
# # oof = pd.read_pickle(f'{model_path}/oof_df.pkl')[['id', 'fold', 'pred']]
# oof[f"{model_path.split('/')[1]}"] = oof['pred']
# train = train.merge(oof[['id', f"{model_path.split('/')[1]}"]], how='left', on='id')
oof_res=pd.read_csv('../input/train-res/train_oof.csv')
train = train.merge(oof_res, how='left', on='id')
model_infos = {
'albert-xxlarge-v2': ['../input/albert-xxlarge-v2/albert-xxlarge-v2/', "albert-xxlarge-v2"],
'bert-large-cased': ['../input/bert-large-cased-cv5/bert-large-cased/', "bert-large-cased"],
'deberta-base': ['../input/deberta-base-cv5/deberta-base/', "deberta-base"],
'deberta-v3-base': ['../input/deberta-v3-base-cv5/deberta-v3-base/', "deberta-v3-base"],
'deberta-v3-small': ['../input/deberta-v3-small/deberta-v3-small/', "deberta-v3-small"],
'distilroberta-base': ['../input/distilroberta-base/distilroberta-base/', "distilroberta-base"],
'roberta-large': ['../input/roberta-large/roberta-large/', "roberta-large"],
'xlm-roberta-base': ['../input/xlm-roberta-base/xlm-roberta-base/', "xlm-roberta-base"],
'xlm-roberta-large': ['../input/xlmrobertalarge-cv5/xlm-roberta-large/', "xlm-roberta-large"],
}
for model, path_info in model_infos.items():
print(model)
model_path, model_name = path_info[0], path_info[1]
fea_df = get_fold_pred(CFG, model_path, model_name)
model_infos[model].append(fea_df)
del model_path, model_name
del oof_res
</code></pre>
<p>train code:</p>
<pre><code>for fold_ in range(5):
print("Fold:", fold_)
trn_ = train[train['fold'] != fold_].index
val_ = train[train['fold'] == fold_].index
# print(train.iloc[val_].sort_values('id'))
trn_x, trn_y = train[train_features].iloc[trn_], train['score'].iloc[trn_]
val_x, val_y = train[train_features].iloc[val_], train['score'].iloc[val_]
# train_folds = folds[folds['fold'] != fold].reset_index(drop=True)
# valid_folds = folds[folds['fold'] == fold].reset_index(drop=True)
reg = lgb.LGBMRegressor(**params,n_estimators=1100)
xgb = XGBRegressor(**xgb_params, n_estimators=1000)
cat = CatBoostRegressor(iterations=1000,learning_rate=0.03,
depth=10,
eval_metric='RMSE',
random_seed = 42,
bagging_temperature = 0.2,
od_type='Iter',
metric_period = 50,
od_wait=20)
print("-"* 20 + "LightGBM Training" + "-"* 20)
reg.fit(trn_x, np.log1p(trn_y),eval_set=[(val_x, np.log1p(val_y))],early_stopping_rounds=50,verbose=100,eval_metric='rmse')
print("-"* 20 + "XGboost Training" + "-"* 20)
xgb.fit(trn_x, np.log1p(trn_y),eval_set=[(val_x, np.log1p(val_y))],early_stopping_rounds=50,eval_metric='rmse',verbose=100)
print("-"* 20 + "Catboost Training" + "-"* 20)
cat.fit(trn_x, np.log1p(trn_y), eval_set=[(val_x, np.log1p(val_y))],early_stopping_rounds=50,use_best_model=True,verbose=100)
imp_df = pd.DataFrame()
imp_df['feature'] = train_features
imp_df['gain_reg'] = reg.booster_.feature_importance(importance_type='gain')
imp_df['fold'] = fold_ + 1
importances = pd.concat([importances, imp_df], axis=0, sort=False)
for model, values in model_infos.items():
test[model] = values[2][fold_]
for model, values in uspppm_model_infos.items():
test[f"uspppm_{model}"] = values[2][fold_]
    # for f in tqdm(amount_feas, desc="amount_feas basic aggregation features"):
# for cate in category_fea:
# if f != cate:
# test['{}_{}_medi'.format(cate, f)] = test.groupby(cate)[f].transform('median')
# test['{}_{}_mean'.format(cate, f)] = test.groupby(cate)[f].transform('mean')
# test['{}_{}_max'.format(cate, f)] = test.groupby(cate)[f].transform('max')
# test['{}_{}_min'.format(cate, f)] = test.groupby(cate)[f].transform('min')
# test['{}_{}_std'.format(cate, f)] = test.groupby(cate)[f].transform('std')
# LightGBM
oof_reg_preds[val_] = reg.predict(val_x, num_iteration=reg.best_iteration_)
# oof_reg_preds[oof_reg_preds < 0] = 0
lgb_preds = reg.predict(test[train_features], num_iteration=reg.best_iteration_)
# lgb_preds[lgb_preds < 0] = 0
# Xgboost
oof_reg_preds1[val_] = xgb.predict(val_x)
oof_reg_preds1[oof_reg_preds1 < 0] = 0
xgb_preds = xgb.predict(test[train_features])
# xgb_preds[xgb_preds < 0] = 0
# catboost
oof_reg_preds2[val_] = cat.predict(val_x)
oof_reg_preds2[oof_reg_preds2 < 0] = 0
cat_preds = cat.predict(test[train_features])
cat_preds[xgb_preds < 0] = 0
# merge all prediction
merge_pred[val_] = oof_reg_preds[val_] * 0.4 + oof_reg_preds1[val_] * 0.3 +oof_reg_preds2[val_] * 0.3
# sub_reg_preds += np.expm1(_preds) / len(folds)
# sub_reg_preds += np.expm1(_preds) / len(folds)
sub_preds += (lgb_preds / 5) * 0.6 + (xgb_preds / 5) * 0.2 + (cat_preds / 5) * 0.2 # blended 5-fold test-set predictions from the three models
sub_reg_preds += lgb_preds / 5 # 5-fold test-set predictions from LightGBM alone
print("lgb",pearsonr(train['score'], np.expm1(oof_reg_preds))[0]) # lgb
print("xgb",pearsonr(train['score'], np.expm1(oof_reg_preds1))[0]) # xgb
print("cat",pearsonr(train['score'], np.expm1(oof_reg_preds2))[0]) # cat
print("xgb lgb cat",pearsonr(train['score'], np.expm1(merge_pred))[0]) # xgb lgb cat
</code></pre>
<p>thanks bro</p>
<p>Yes, same inputs (input ids, attention mask) and labels as this.</p>
<p>In the middle of the competition, the two-stage approach improved our score a lot. In the end we did not submit it due to time constraints, but judging from the top-1 and top-2 solutions, it most likely would have been effective.</p>
U.S. Patent Phrase to Phrase Matching | 1st place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I copied it from <a href="https://gist.github.com/ncullen93/58e71c4303b89e420bd8e0b0aa54bf48" target="_blank">https://gist.github.com/ncullen93/58e71c4303b89e420bd8e0b0aa54bf48</a> and changed it a bit to clamp the output to [-1, 1] to avoid NaN (<code>torch.clamp(r_val, min=-1., max=1.)</code>).</p>
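<p>A minimal sketch of a Pearson correlation loss with the clamp described above (not the exact code from the linked gist):</p>
<pre><code>import torch

def pearson_loss(preds, labels, eps=1e-8):
    px = preds - preds.mean()
    py = labels - labels.mean()
    r_val = (px * py).sum() / (torch.sqrt((px ** 2).sum()) * torch.sqrt((py ** 2).sum()) + eps)
    r_val = torch.clamp(r_val, min=-1., max=1.)   # avoid NaN / out-of-range values
    return 1.0 - r_val                            # maximizing r == minimizing 1 - r
</code></pre>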
<p>Update, all models public now, in case you want to rerun inference.</p>
<p><a href="https://www.kaggle.com/mingccc" target="_blank">@mingccc</a> The Pearson loss is here: <a href="https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching/blob/main/utils/lele/losses/losses.py" target="_blank">https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching/blob/main/utils/lele/losses/losses.py</a></p>
<p><a href="https://www.kaggle.com/abdulkadirguner" target="_blank">@abdulkadirguner</a> I just open-sourced all the code; you can refer to <a href="https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching/blob/main/utils/lele/layers/layers.py" target="_blank">https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching/blob/main/utils/lele/layers/layers.py</a> line 895, LinearSeqAttnPooling.</p>
<p>You can refer to torch/dataset.py; I shuffle the targets in <code>__getitem__()</code>.</p>
<p><a href="https://www.kaggle.com/xodeum" target="_blank">@xodeum</a> I have made the dataset with all the code public; models 0-12 are public, and I will make the rest public later, as I have not yet found a better/faster way to publish all 42 of my models…</p>
<p>Thanks! <a href="https://www.kaggle.com/awaptk" target="_blank">@awaptk</a> </p>
<p>Thanks! <a href="https://www.kaggle.com/highscoreman" target="_blank">@highscoreman</a> </p>
<p>Thanks! <a href="https://www.kaggle.com/ch194551" target="_blank">@ch194551</a> </p>
<p>Thanks !! <a href="https://www.kaggle.com/oliverlionado" target="_blank">@oliverlionado</a> congratulation on another medal.</p>
<p>Thanks! Congratulations to you too. "如果你打了个喷嚏 那一定是我在想你" ("If you sneeze, it must be me thinking of you") is a really interesting team name; we were next to each other on the LB for a long time 😄</p>
<p>I did not write this part up well; you could refer to the 2nd place solution. I think all teams with a 0.86+ LB score used a similar trick of adding all the target words that belong to the same anchor and context. They also have deeper findings and more sophisticated usage.</p>
<p>Thanks! <a href="https://www.kaggle.com/niekvanderzwaag" target="_blank">@niekvanderzwaag</a> </p>
<p>Thanks, I have public the inference code, see my post at the end.</p>
<p>Thanks <a href="https://www.kaggle.com/xiamaozi11" target="_blank">@xiamaozi11</a> Congratulations to your another gold medal!</p>
<p>Thanks <a href="https://www.kaggle.com/zacchaeus" target="_blank">@zacchaeus</a> !</p>
<p>Yes, simply add nn.LSTM on top of the BERT last hidden states. You may find the usage in <a href="https://www.kaggle.com/datasets/goldenlock/usppmatch" target="_blank">https://www.kaggle.com/datasets/goldenlock/usppmatch</a> torch/model.py.</p>
<p>Thanks to the contest organizer for holding such an interesting game, and thanks to everyone who joined and shared during this contest. I learned a lot from the discussions.</p>
<ul>
<li><p><strong>CV strategy</strong><br>
Group by anchor and stratify by score; also, some words occur in both anchor and target, so make sure to put those rows in the same fold. </p></li>
<li><p><strong>NN model detail</strong><br>
a. Pearson loss worked best for me<br>
b. 5 epochs of training, starting AWP from the 2nd epoch.<br>
AWP has helped a lot in all my recent NLP contests.<br>
c. Groupby ['anchor', 'context']['target'] -> targets, added to the input (anchor[SEP]target[SEP]CPC_TEXT[SEP]targets); this produces the best model (see the sketch after this list).<br>
Groupby ['anchor', 'context[0]']['target'] -> targets, added to the input, helps the ensemble a lot. Let me define context[0] as the sector; it is like F21 -> F.<br>
Remember to exclude the current target from targets.<br>
d. Randomly shuffle the targets during each training step (not tested thoroughly, but I remember it improved LB a lot).<br>
e. Freeze the BERT embedding layer (maybe not much difference, but I used it for the final models).<br>
Freezing the embedding layer does not hurt, which suggests we do not need to finetune that much, since our task is simple short-phrase similarity.<br>
f. Use different learning rates for the BERT backbone (2e-5, 3e-5) and the other parts (1e-3); this is especially useful when adding an LSTM, which needs a large lr.<br>
g. Adding a BI-LSTM head helps a lot.<br>
Deberta-v3-large CV 858 -> 861. The team name "prompt is all you need" gave me a hint that we do not need to finetune/change the BERT model much, so I tried adding an LSTM on top of BERT and freezing the BERT embedding layer.<br>
h. Use linear attention pooling on top of the BI-LSTM before the fc layer.<br>
i. Lr matters a lot for the best single model, deberta-v3-large: 2e-5 is much better than 3e-5.<br>
Deberta-v3-large CV 861 -> 8627.<br>
j. Doubling the RNN output dim (from the BERT output dim, e.g. 1024, to 2048) helps a lot for some weaker models like bert-for-patents and simcse-bert-for-patent.<br>
So for weaker models we might need the head to be wider.<br>
k. One possible method might be using token classification to predict all target scores in one instance.<br>
It seems a bit complex to implement and I do not know whether it would improve the score; I have not tried it yet.</p></li>
</ul>
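<p>A minimal pandas sketch of the grouping in item c; it assumes the competition train.csv and uses the raw context code in place of the CPC text for brevity:</p>
<pre><code>import pandas as pd

train = pd.read_csv("train.csv")  # competition columns: id, anchor, target, context, score

# Collect, for every (anchor, context) group, the full list of targets.
grouped = (train.groupby(["anchor", "context"])["target"]
           .apply(list).rename("neighbors").reset_index())
train = train.merge(grouped, on=["anchor", "context"], how="left")

def other_targets(row):
    # Exclude the current target from the grouped targets, as noted in item c.
    return "; ".join(t for t in row["neighbors"] if t != row["target"])

train["targets_text"] = train.apply(other_targets, axis=1)
# anchor [SEP] target [SEP] CPC text (raw context code here) [SEP] grouped targets
train["input"] = (train["anchor"] + "[SEP]" + train["target"] + "[SEP]"
                  + train["context"] + "[SEP]" + train["targets_text"])
</code></pre>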
<table>
<thead>
<tr>
<th>model</th>
<th>CV</th>
<th>backbone lr</th>
<th>base lr</th>
<th>scheduler</th>
<th>rnn dim * 2</th>
<th>weight</th>
<th>1 Fold LB</th>
<th>1 Fold PB</th>
<th>Full train LB</th>
<th>Full train PB</th>
<th>5 Folds LB</th>
<th>5 Folds PB</th>
</tr>
</thead>
<tbody>
<tr>
<td>microsoft/deberta-v3-large</td>
<td>8627</td>
<td>2e-5</td>
<td>1e-3</td>
<td>linear</td>
<td>No</td>
<td>1</td>
<td>8599</td>
<td>8710</td>
<td>8604</td>
<td>8745</td>
<td>8604 (may shake to 8615)</td>
<td>8761</td>
</tr>
<tr>
<td>anferico/bert-for-patents</td>
<td>8451</td>
<td>3e-5</td>
<td>1e-3</td>
<td>cosine</td>
<td>Yes</td>
<td>0.4</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>ahotrod/electra_large_discriminator_squad2_512</td>
<td>8514</td>
<td>2e-5</td>
<td>1e-3</td>
<td>cosine</td>
<td>No</td>
<td>0.3</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Yanhao/simcse-bert-for-patent</td>
<td>8393</td>
<td>3e-5</td>
<td>1e-3</td>
<td>cosine</td>
<td>Yes</td>
<td>0.2</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>funnel-transformer/large</td>
<td>848</td>
<td>3e-5</td>
<td>1e-3</td>
<td>cosine</td>
<td>No</td>
<td>Exp after game end</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<p>It is interesting that deberta-v3-large and electra-large work best; they are both pretrained with ELECTRA-style RTD rather than MLM.<br>
But for this problem bert-for-patents ensembles best with deberta-v3-large, due to better diversity.</p>
<p></p><ul><br>
<li><strong>Ensemble</strong><br><br>
a. Use 5 folds + 1 full-train model for each backbone, with the full-train model weight * 2.<br><br>
Weight 2 was chosen by e., and indeed tested better than 1:1 on both LB and PB.<br><br>
b. Min-max scale each model's output before adding it to the ensemble (see the sketch after this list).<br><br>
c. Be sure to use a weighted ensemble, as a simple mean average hurt LB, maybe because deberta-v3-large is much better than the other models.<br><br>
d. 5-fold self-ensembling improves LB a lot, which is a bit hard to measure by local CV.<br><br>
e. Hold out 20% of the data and train 10-fold models on the remaining 80%; I found you only gain by lowering the weights of the weak models. But this is costly, so in the final days I chose model weights manually, picking those that made my local first-level CV best.<br><p></p>
<table>
<thead>
<tr>
<th>Ensemble</th>
<th>CV</th>
<th>LB</th>
<th>PB <br></th>
</tr>
</thead>
<tbody>
<tr>
<td>4 * 6 models</td>
<td>8651</td>
<td>8618</td>
<td>8745 <br></td>
</tr>
<tr>
<td>Add LSTM header</td>
<td>8666</td>
<td>8625</td>
<td>8775 <br></td>
</tr>
<tr>
<td>Adjust params like lr and rnn out dims</td>
<td>8677</td>
<td>8629</td>
<td>8779 <br></td>
</tr>
<tr>
<td>7 * 6 models, Add 3 groupby context, sector models</td>
<td>8682</td>
<td>8633</td>
<td>8782</td></tr></tbody></table></li>
</ul>
Glad to see CV, LB and PB match :)
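<p>A minimal sketch of items b and c above: min-max scale each model's output, then blend with hand-chosen weights (the arrays and weights are stand-ins):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
preds_list = [rng.random(100) for _ in range(3)]   # stand-ins for per-model test predictions
weights = [1.0, 0.4, 0.3]                          # illustrative weights, cf. the table above

def minmax(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

blend = sum(w * minmax(p) for w, p in zip(weights, preds_list)) / sum(weights)
</code></pre>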
<ul>
<li><strong>Summary</strong> <br>
-- Adding targets grouped by anchor and context: the key magic/trick to the gold. <br>
-- Adding an LSTM helped me get a good enough single model, which was the key to winning on the PB (8745 -> 8779). <br>
-- Adding targets grouped by anchor and sector (context[0]) brings diverse models, compared to changing the loss function or pooling method (8779 -> 8782). </li>
</ul>
<p>inference code:<br>
<a href="https://www.kaggle.com/code/goldenlock/usp-base" target="_blank">https://www.kaggle.com/code/goldenlock/usp-base</a><br>
most training code in:<br>
<a href="https://www.kaggle.com/datasets/goldenlock/usppmatch" target="_blank">https://www.kaggle.com/datasets/goldenlock/usppmatch</a> <br>
all training code opensource here: <br>
<a href="https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching" target="_blank">https://github.com/chenghuige/U.S.-Patent-Phrase-to-Phrase-Matching</a></p>
<p>Thanks <a href="https://www.kaggle.com/fankaixie" target="_blank">@fankaixie</a>, in order to catch up with your score I only added it at the very end.</p>
<p><a href="https://www.kaggle.com/crodoc" target="_blank">@crodoc</a> Thanks! It seems to work well with lr 1 and eps 1e-3 for all backbones.</p>
<p>Thanks <a href="https://www.kaggle.com/sgeetha" target="_blank">@sgeetha</a> </p>
<p>Thanks! <a href="https://www.kaggle.com/renxingkai" target="_blank">@renxingkai</a> great work and big jump at last, congratulations!</p>
<p>Thanks <a href="https://www.kaggle.com/naoism" target="_blank">@naoism</a>, congrats on your strong finish!<br>
Without the LSTM, multi-lr with base lr 1e-3 does not improve much, but it is still a bit better.<br>
When using the LSTM I did not test an xe-5 lr for everything; I just started with base lr 1e-3, as it worked on the Feedback, NBME and JD NER datasets.<br>
You may try setting 1e-3 or 5e-4 as the base lr and 2e-5 or 3e-5 as the backbone lr; I think you should get a CV improvement.<br>
I used dropout 0.1 and ran the LSTM without considering the mask, but at the last step, when using linear attention pooling on top of the LSTM, I used the attention_mask just as in the BERT inputs.<br>
For deberta-v3 I set the LSTM out dim to bert_out_dim / 2, so the BI-LSTM out dim is bert_out_dim.<br>
For bert-for-patents I set the LSTM out dim to bert_out_dim, so the BI-LSTM out dim is bert_out_dim * 2.<br>
No other tricks for adding the LSTM.</p>
<p>Thanks <a href="https://www.kaggle.com/hengzheng" target="_blank">@hengzheng</a> , congratulations on another gold medal!</p>
<p>Thanks <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a> I saw you and Ditter joined this game late; congratulations on the cool finish.</p>
<p>Personally I think not, but I am not very familiar with target encoding. Adding targets help add more text info.</p>
<p>Thanks! <a href="https://www.kaggle.com/ahmedmoabbas" target="_blank">@ahmedmoabbas</a> </p>
<p>Thanks! <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> congratulation to your team !</p>
<p><a href="https://www.kaggle.com/arvinddevarkonda" target="_blank">@arvinddevarkonda</a> <a href="https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook" target="_blank">https://www.kaggle.com/code/wht1996/feedback-nn-train/notebook</a> From this notebook: AWP is an adversarial training method like FGM; in my experience it works on almost every NLP dataset, at the cost of roughly twice the training time.</p>
<p>Thanks <a href="https://www.kaggle.com/vialactea" target="_blank">@vialactea</a>. I use transformer + LSTM end to end and freeze the transformer embedding layer (freezing the embedding or not does not seem to matter much). So I do not train the transformer weights first; I just use the original pretrained weights together with the LSTM.<br>
I'm also surprised at how much the LSTM helps. It improved my models on the Feedback and JD NER datasets, which are both token classification problems; I also tried it on NBME, where it did not improve but still gave a comparable result. On a text classification problem, this is the first time I have seen it work so well.<br>
I added it two days before the competition ended, since I was behind two teams on the LB with a large gap. Then I saw the team name 'prompt is all you need', which nudged me to add the LSTM: using prompts means we do not need to finetune BERT much, but rather do something on top of it.<br>
I tried the prompt method after the competition ended; if I implemented it correctly (maybe not, it still needs further investigation) its improvement is much smaller compared to adding the LSTM. I tested only fold 0, which has a relatively high score: baseline 8673, prompt 8681, add LSTM 8719.</p>
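<p>A minimal sketch of the transformer + BI-LSTM + linear attention pooling setup described in this thread (the model name, dimensions and pooling details are illustrative, not the exact competition code):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

class BackboneLstmAttn(nn.Module):
    def __init__(self, model_name="microsoft/deberta-v3-large"):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        hidden = self.backbone.config.hidden_size
        for p in self.backbone.get_input_embeddings().parameters():
            p.requires_grad = False                      # freeze the embedding layer
        self.lstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(hidden, 1)                 # linear attention pooling
        self.fc = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        x = self.backbone(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state
        x, _ = self.lstm(x)                              # BI-LSTM out dim == hidden
        scores = self.attn(x).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)
        weights = torch.softmax(scores, dim=1)
        pooled = (weights * x).sum(dim=1)
        return self.fc(pooled)
</code></pre>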
U.S. Patent Phrase to Phrase Matching | 27th place solution (public 72th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a>, thank you for your comment.<br>
"regression task" means solved as a regression problem.<br>
In other words, I created a regression model ("classification task" likewise means I created a classification model).</p>
<p>No, I did not use ridge.</p>
<p>"regression task" means I created NN model with MSELoss, <br>
almost same as the following notebook (just change loss to MSELoss)<br>
<a href="https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-w-w-b-train" target="_blank">https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-w-w-b-train</a></p>
<p>"classification task" means I created NN model for multi-class classification with CrossEntropyLoss,<br>
almost same as 10th place solution (see "3.1.2 Softmax with binning")<br>
<a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/332273" target="_blank">https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/332273</a></p>
<p>Thank you <a href="https://www.kaggle.com/saurabhbagchi" target="_blank">@saurabhbagchi</a> !</p>
<p>Thanks to USPTO and Kaggle for hosting an exciting competition, and thanks to all Kagglers for the match.</p>
<p>(2022/10/24 Updated) Training and Inference codes have been added.</p>
<h2>Codes</h2>
<p>Training: <a href="https://github.com/calpis10000/uspppm" target="_blank">https://github.com/calpis10000/uspppm</a><br>
Inference: <a href="https://www.kaggle.com/code/calpis10000/pppm-ens-063/notebook" target="_blank">https://www.kaggle.com/code/calpis10000/pppm-ens-063/notebook</a></p>
<h2>Summary</h2>
<p>I created 20 diverse models and ensembled them with weight tuning via Optuna (a minimal sketch is shown a few paragraphs below). </p>
<ul>
<li>different tasks</li>
<li>different backbone</li>
<li>different pooling head</li>
<li>different preprocess</li>
</ul>
<p>None of the experiments improved the CV by itself, but the ensemble showed a clear effect.<br>
Presumably, model diversity worked for me.</p>
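<p>A minimal sketch of Optuna weight tuning on oof predictions (the arrays are synthetic stand-ins; the real objective would use the true oof labels and the competition's Pearson metric):</p>
<pre><code>import numpy as np
import optuna

rng = np.random.default_rng(0)
labels = rng.random(1000)                                          # stand-in for true scores
oof_preds = [labels + rng.normal(0, 0.1, 1000) for _ in range(3)]  # stand-in model oofs

def objective(trial):
    ws = np.array([trial.suggest_float(f"w{i}", 0.0, 1.0) for i in range(len(oof_preds))])
    blend = sum(w * p for w, p in zip(ws, oof_preds)) / (ws.sum() + 1e-12)
    return np.corrcoef(blend, labels)[0, 1]      # maximize Pearson correlation

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=200)
print(study.best_params)
</code></pre>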
<table>
<thead>
<tr>
<th>CV</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.8535</td>
<td>0.8514</td>
<td>0.8655</td>
</tr>
</tbody>
</table>
<h2>Worked</h2>
<h3>Tasks</h3>
<ul>
<li>Regression (with MSELoss)</li>
<li>Classification (with CrossEntropyLoss)<ul>
<li>in classification task, pearson score was calculated as below.</li></ul></li>
</ul>
<pre><code>def label_to_score(label):
return (label*[0,0.25,0.5,0.75,1.0]).sum(axis=1)
def metric_pearson(predictions, labels):
pred_score = label_to_score(predictions) # "predictions" are model outputs
label_score = label_to_score(labels) # "labels" are ground truth labels
pearson = np.corrcoef(pred_score, label_score)[0][1]
return pearson
</code></pre>
<h3>Backbones</h3>
<ul>
<li>deberta-v3-large, bert-for-patents, electra-large, etc.</li>
<li>deberta-v3-large is best single-model.</li>
</ul>
<h3>Pooling heads</h3>
<ul>
<li>Attention or Concatenate CLS-Token</li>
<li>I tried Conv1D or LSTM head, but I gave up because models did not learn well. </li>
</ul>
<h3>Preprocess</h3>
<ul>
<li>In some models, I added the context as tokens, like below.</li>
</ul>
<pre><code>[subgrp=A][context=A47]HUMAN NECESSITIES. FURN...
</code></pre>
<h2>Not worked</h2>
<ul>
<li>pseudo labeling (augmentation or using test data)</li>
<li>MLM pretraining</li>
</ul>
<h2>Models</h2>
<table>
<thead>
<tr>
<th>exp_no</th>
<th>CV-score</th>
<th>task</th>
<th>backbone</th>
<th>head</th>
<th>preprocess</th>
<th>weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp122</td>
<td>0.8133</td>
<td>Cls</td>
<td>anferico/bert-for-patents</td>
<td>Attention</td>
<td></td>
<td>0.8588</td>
</tr>
<tr>
<td>exp094</td>
<td>0.8143</td>
<td>Reg</td>
<td>anferico/bert-for-patents</td>
<td>CLS-Token</td>
<td></td>
<td>0.3617</td>
</tr>
<tr>
<td>exp127</td>
<td>0.8174</td>
<td>Cls</td>
<td>anferico/bert-for-patents</td>
<td>CLS-Token</td>
<td></td>
<td>0.9497</td>
</tr>
<tr>
<td>exp080</td>
<td>0.7942</td>
<td>Reg</td>
<td>facebook/muppet-roberta-large</td>
<td>Attention</td>
<td></td>
<td>0.0775</td>
</tr>
<tr>
<td>exp132</td>
<td>0.7949</td>
<td>Cls</td>
<td>facebook/muppet-roberta-large</td>
<td>Attention</td>
<td></td>
<td>0.4389</td>
</tr>
<tr>
<td>exp064</td>
<td>0.8171</td>
<td>Reg</td>
<td>funnel-transformer/large</td>
<td>Attention</td>
<td></td>
<td>0.0001</td>
</tr>
<tr>
<td>exp073</td>
<td>0.8050</td>
<td>Reg</td>
<td>funnel-transformer/large-base</td>
<td>Attention</td>
<td></td>
<td>0.2670</td>
</tr>
<tr>
<td>exp115</td>
<td>0.8175</td>
<td>Reg</td>
<td>google/electra-large-discriminator</td>
<td>Attention</td>
<td></td>
<td>0.3393</td>
</tr>
<tr>
<td>exp129</td>
<td>0.8166</td>
<td>Cls</td>
<td>google/electra-large-discriminator</td>
<td>Attention</td>
<td></td>
<td>0.3552</td>
</tr>
<tr>
<td>exp131</td>
<td>0.8129</td>
<td>Reg</td>
<td>google/electra-large-discriminator</td>
<td>CLS-Token</td>
<td></td>
<td>0.2503</td>
</tr>
<tr>
<td>exp130</td>
<td>0.8182</td>
<td>Cls</td>
<td>google/electra-large-discriminator</td>
<td>CLS-Token</td>
<td></td>
<td>0.5523</td>
</tr>
<tr>
<td>exp069</td>
<td>0.8198</td>
<td>Reg</td>
<td>microsoft/cocolm-large</td>
<td>Attention</td>
<td></td>
<td>0.4549</td>
</tr>
<tr>
<td>exp123</td>
<td>0.8158</td>
<td>Cls</td>
<td>microsoft/cocolm-large</td>
<td>Attention</td>
<td></td>
<td>0.3200</td>
</tr>
<tr>
<td>exp090</td>
<td>0.8277</td>
<td>Reg</td>
<td>microsoft/deberta-v3-large</td>
<td>Attention</td>
<td>context-token</td>
<td>0.9632</td>
</tr>
<tr>
<td>exp124</td>
<td>0.8246</td>
<td>Cls</td>
<td>microsoft/deberta-v3-large</td>
<td>Attention</td>
<td>context-token</td>
<td>0.6297</td>
</tr>
<tr>
<td>exp051</td>
<td>0.8294</td>
<td>Reg</td>
<td>microsoft/deberta-v3-large</td>
<td>Attention</td>
<td></td>
<td>0.4101</td>
</tr>
<tr>
<td>exp119</td>
<td>0.8262</td>
<td>Cls</td>
<td>microsoft/deberta-v3-large</td>
<td>Attention</td>
<td></td>
<td>0.6013</td>
</tr>
<tr>
<td>exp077</td>
<td>0.8285</td>
<td>Reg</td>
<td>microsoft/deberta-v3-large</td>
<td>CLS-Token</td>
<td></td>
<td>0.3859</td>
</tr>
<tr>
<td>exp128</td>
<td>0.8246</td>
<td>Cls</td>
<td>microsoft/deberta-v3-large</td>
<td>CLS-Token</td>
<td></td>
<td>0.4330</td>
</tr>
<tr>
<td>exp067</td>
<td>0.7895</td>
<td>Reg</td>
<td>roberta-large</td>
<td>Attention</td>
<td></td>
<td>0.0282</td>
</tr>
</tbody>
</table>
<p><a href="https://www.kaggle.com/nlpfightgirl" target="_blank">@nlpfightgirl</a> <br>
The relevant code is summarized below.<br>
If you have more questions, please check my github repository and inference code.</p>
<p><a href="https://github.com/calpis10000/uspppm" target="_blank">https://github.com/calpis10000/uspppm</a> <br>
<a href="https://www.kaggle.com/code/calpis10000/pppm-ens-063/notebook" target="_blank">https://www.kaggle.com/code/calpis10000/pppm-ens-063/notebook</a></p>
<hr>
<p>First, read csv and create one-hot columns of scores, like below.</p>
<pre><code>train_df = pd.read_csv(f"{CFG.input_path}train.csv")
train_df['input'] = train_df['anchor'] + '[SEP]' + train_df['target'] + '[SEP]' + train_df['context_text']
train_df = pd.concat([train_df, pd.get_dummies(train_df['score'])], axis='columns')
</code></pre>
<p>Next, create dataset with labels of [0.0,0.25,0.5,0.75,1.0].</p>
<pre><code>def prepare_input(tokenizer, text):
inputs = tokenizer(text,
add_special_tokens=True,
max_length=CFG.max_len,
padding="max_length",
truncation=True,
return_offsets_mapping=False)
for k, v in inputs.items():
inputs[k] = torch.tensor(v, dtype=torch.long)
return inputs
class TrainDataset(Dataset):
def __init__(self, df):
self.inputs = df['input'].values
self.label = df[[0.0,0.25,0.5,0.75,1.0]].values
def __len__(self):
return len(self.inputs)
def __getitem__(self, item):
inputs = self.inputs[item]
label = self.label[item]
outputs = prepare_input(tokenizer, inputs)
outputs['label'] = torch.tensor(label, dtype=torch.float32)
return outputs
</code></pre>
<p>Next, create model with output size of 5 ("CFG.num_classes" is set to 5).</p>
<pre><code>class AttentionHead(nn.Module):
def __init__(self, in_features, hidden_dim, num_targets):
super().__init__()
self.in_features = in_features
self.middle_features = hidden_dim
self.W = nn.Linear(in_features, hidden_dim)
self.V = nn.Linear(hidden_dim, 1)
self.out_features = hidden_dim
def forward(self, features):
att = torch.tanh(self.W(features))
score = self.V(att)
attention_weights = torch.softmax(score, dim=1)
context_vector = attention_weights * features
context_vector = torch.sum(context_vector, dim=1)
return context_vector
class PPPMModel(nn.Module):
def __init__(self):
super().__init__()
self.config = AutoConfig.from_pretrained(CFG.model_path)
self.pre_model = AutoModel.from_pretrained(CFG.model_path, config=self.config)
self.head = AttentionHead(self.config.hidden_size, self.config.hidden_size,1)
self.dropout = nn.Dropout(self.config.hidden_dropout_prob)
self.dropout1 = nn.Dropout(0.1)
self.dropout2 = nn.Dropout(0.2)
self.dropout3 = nn.Dropout(0.3)
self.dropout4 = nn.Dropout(0.4)
self.dropout5 = nn.Dropout(0.5)
self.regressor = nn.Linear(self.config.hidden_size, CFG.num_classes)
def forward(self, inputs):
pre_out = self.pre_model(**inputs)
last_hidden_states = pre_out[0]
last_hidden_states = self.dropout(self.head(last_hidden_states))
logits1 = self.regressor(self.dropout1(last_hidden_states))
logits2 = self.regressor(self.dropout2(last_hidden_states))
logits3 = self.regressor(self.dropout3(last_hidden_states))
logits4 = self.regressor(self.dropout4(last_hidden_states))
logits5 = self.regressor(self.dropout5(last_hidden_states))
logits = (logits1 + logits2 + logits3 + logits4 + logits5) / 5
return logits
</code></pre>
<p>Next, train models with CrossEntropyLoss. (code omitted because it is too long, change your training code and set loss to CrossEntropyLoss.)</p>
<p>Last, at prediction time the outputs are passed through a softmax so that they sum to 1, and then combined as a weighted sum with the label values.</p>
<pre><code>def label_to_score(label):
return (label*[0,0.25,0.5,0.75,1.0]).sum(axis=1)
def predict(model, dataloader):
model.eval()
result = np.zeros((len(dataloader.dataset), CFG.num_classes))
idx = 0
with torch.no_grad():
for batch_idx, data in enumerate(dataloader):
inputs = {}
for k, v in data.items():
inputs[k] = v.to(CFG.device)
output = model(inputs)
output = nn.Softmax(dim=1)(output)
result[idx:idx + output.shape[0], :] = output.to('cpu')
idx += output.shape[0]
return label_to_score(result)
</code></pre> |
U.S. Patent Phrase to Phrase Matching | 2nd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We want w1 * oof1 + w2 * oof2 + … + wn * oofn to give the best score, and linear regression can directly calculate the best coefficients via least squares.</p>
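<p>A minimal sketch of that idea, fitting the blend weights by least squares on oof predictions (synthetic stand-in arrays):</p>
<pre><code>import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
y = rng.random(1000)                                                      # true scores (stand-in)
oof = np.column_stack([y + rng.normal(0, 0.1, 1000) for _ in range(4)])   # 4 models' oof preds

reg = LinearRegression()   # optionally fit_intercept=False or positive=True
reg.fit(oof, y)
print(reg.coef_, reg.intercept_)   # the blend weights w1..wn (plus intercept)
</code></pre>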
<p>Yes. </p>
<p>Thanks! </p>
<p>First of all, I would like to thank competition organizers for hosting this interesting competition. And thanks to my great teammate <a href="https://www.kaggle.com/fankaixie" target="_blank">@Tifo</a> , we discuss and work hard for the whole last month to explore new methods. And also thank to the community of great notebooks and discussions.</p>
<h2><strong>Where is magic</strong></h2>
<p>The key is that there exist strong correlations between different targets under the same anchor (you can see this from the gap between GroupKFold and KFold). For example, some targets are similar to the original target and some are similar to the anchor. In short, adding them to the context captures the correlation between the anchor and the target much more effectively.</p>
<p>We used various methods to take advantage of this magic:</p>
<h4>stage1</h4>
<ol>
<li>Group the targets from the same <code>anchor</code>, such as 'target1, target2, target3, …'. Then add them to the context.</li>
<li>Group the targets from the same <code>anchor</code> and <code>context</code>. This brings more relevant targets.</li>
<li>Group the targets from the same <code>anchor</code>. Group the anchors from the same <code>context</code>. Add them to the context in turn.</li>
</ol>
<h4>stage2</h4>
<ol>
<li><p>Group the targets from the same <code>anchor</code> and add the oof score to give more specific quantitative information, like 'target1 23, target2 47, …'. The scores are multiplied by 100 so that they can be recognized as tokens.</p></li>
<li><p>Group the targets from the same <code>anchor</code> and <code>context</code>, with score.</p></li>
</ol>
<h4>details</h4>
<ul>
<li>During training, the group is performed inside the train-set, and the score is derived from the oof score from the first-stage models.</li>
<li>During inference, the grouping is performed after concatenating the train set and the test set, and the score is derived from both the oof and the first-stage predictions on the test set. (Why concatenate? Because some anchors overlap between train and test.) A minimal sketch of building such text follows below.</li>
</ul>
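<p>A minimal sketch of how the second-stage text with oof scores could be assembled (toy dataframe, illustrative column names, not the team's exact code):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "anchor": ["abatement"] * 3,
    "target": ["abatement of pollution", "forest region", "act of abating"],
    "oof":    [0.52, 0.11, 0.74],   # first-stage oof (train) or first-stage test predictions
})

def neighbor_text(row, group):
    # Other targets under the same anchor, each followed by its score * 100 as a token.
    others = group[group["target"] != row["target"]]
    return ", ".join(f"{t} {int(round(s * 100))}"
                     for t, s in zip(others["target"], others["oof"]))

df["neighbors"] = [neighbor_text(row, df[df["anchor"] == row["anchor"]])
                   for _, row in df.iterrows()]
print(df["neighbors"].iloc[0])   # "forest region 11, act of abating 74"
</code></pre>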
<h2><strong>Things that worked</strong></h2>
<ul>
<li><p>FGM</p>
<ul>
<li><a href="https://www.kaggle.com/competitions/tweet-sentiment-extraction/discussion/143764" target="_blank">Adversarial-training in NLP</a> </li>
<li>eps: 0.1 (a minimal sketch is shown after this list)</li>
<li>single model cv 0.002-0.005</li></ul></li>
<li><p>EMA (Exponential Moving Average)</p>
<ul>
<li>decay: 0.999</li>
<li>single model cv 0.001-0.003</li></ul></li>
<li><p>Knowledge distillation</p>
<ul>
<li>In other words, soft labels from the ensemble oof. In this way a single model can achieve performance close to the ensemble (it just saves time, but adds no extra diversity)</li>
<li>Make sure to use only the corresponding label for each fold to avoid leakage</li>
<li>The actual performance of a second or later round is almost the same as the first round, and the CV gets distorted in a strange way. We only use a few models distilled from the first round.</li></ul></li>
</ul>
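<p>A minimal sketch of the FGM attack referenced above (eps = 0.1; the emb_name to match depends on the backbone, e.g. "word_embeddings"):</p>
<pre><code>import torch

class FGM:
    """Fast Gradient Method: perturb the word embeddings along the gradient, then restore."""
    def __init__(self, model, eps=0.1):
        self.model, self.eps, self.backup = model, eps, {}

    def attack(self, emb_name="word_embeddings"):
        for name, param in self.model.named_parameters():
            if param.requires_grad and emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.eps * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Inside a training step (sketch):
#   loss = criterion(model(**inputs), labels); loss.backward()
#   fgm.attack(); criterion(model(**inputs), labels).backward(); fgm.restore()
#   optimizer.step(); optimizer.zero_grad()
</code></pre>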
<h2><strong>Not worked</strong></h2>
<ul>
<li>BCE Loss</li>
<li>MLM</li>
<li>Post processing</li>
</ul>
<h2><strong>Models</strong></h2>
<ul>
<li>Deberta-v3-large</li>
<li>Bert-for-patents</li>
<li>Deberta-large</li>
</ul>
<h2><strong>CV split</strong></h2>
<p>We use 5-fold StratifiedGroupKFold (the same seed 42, grouped by anchor). So we are able to use the OOF to compute ensemble scores and model weights effectively. Linear regression is much faster than an Optuna search.</p>
<p>When there are enough models, our CV and LB are perfectly correlated. </p>
<h2><strong>Notebook</strong></h2>
<p>submit: <a href="https://www.kaggle.com/code/zzy990106/upppm-final" target="_blank">https://www.kaggle.com/code/zzy990106/upppm-final</a></p>
<p>You can find more details in the code.</p>
<p>Thanks. You can find related codes here: <br>
FGM: <a href="https://www.kaggle.com/competitions/tweet-sentiment-extraction/discussion/143764" target="_blank">https://www.kaggle.com/competitions/tweet-sentiment-extraction/discussion/143764</a><br>
AWP: <a href="https://www.kaggle.com/code/junkoda/fast-awp" target="_blank">https://www.kaggle.com/code/junkoda/fast-awp</a></p>
<p>We didn't see worse LB. The only downside is that CV is higher than normal, causing the weights to be too large. So we only use 5 such models and lower the weights manually.</p>
<p>Sort by length and pad per batch. It can speed things up by 1.2-1.5x.</p>
<p>BCEWithLogitsLoss, adding a sigmoid at inference. It brought diversity at first; once the model is strong enough, it is no longer needed.</p>
<p>From the ensemble of all stage1 models.</p>
<p>Congratulations to you!</p>
<p>Thanks! Also congratulations to you.</p>
<p>Thanks. MSELoss.</p>
<p>Thanks. Considering the overlapping anchors gave us 0.0005+.</p>
<p>During fold 0 training, we take the best prediction on the fold-0 validation set, then concatenate the 5 parts to get the oof.</p>
<p>Yes, two rounds. First at cell 10 and second at cell 12.</p>
<p>Different max_len because of their different inputs (context).</p>
<p>The improvement mainly comes from the quantitative information, like 'target1 23, target2 47, …'. Single model 0.001+.</p>
<p>Yes. </p>
<ol>
<li>single model 0.001+, very large diversity for ensembling</li>
<li>Among methods 1-3, method 2 has the highest single-model score, but all of them bring great diversity for ensembling, so we would not call any of them strictly better or worse.</li>
<li>This is not well known: the test set shares a small number of anchors with the train set, and the first-stage score affects the second-stage score.</li>
</ol> |
U.S. Patent Phrase to Phrase Matching | 31st place solution [with code] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, I was also surprised to see how well funnel xlarge worked. And, Congrats on winning the competition!!!</p>
<p>Thank you organizers and Kaggle for organizing such a nice competition. Also, thanks to everyone for sharing during the competition. I learned a lot from discussions and some great notebooks.</p>
<p>I have open-sourced my code here: <a href="https://github.com/Gladiator07/U.S.-Patent-Phrase-to-Phrase-Matching-Kaggle" target="_blank">https://github.com/Gladiator07/U.S.-Patent-Phrase-to-Phrase-Matching-Kaggle</a></p>
<p>You can also view all my training logs on the Weights & Biases dashboard <a href="https://wandb.ai/gladiator/USPPPM-Kaggle" target="_blank">here</a></p>
<p>Final Inference Notebook <a href="https://www.kaggle.com/code/atharvaingle/uspppm-inference-ensemble-hill-climbing" target="_blank">here</a></p>
<p>Seeing the top solutions, my solution seems to be very simple 😅 and it basically relies on the diversity of models trained. </p>
<h1>Tools used</h1>
<ul>
<li>HuggingFace <a href="https://huggingface.co./docs/transformers/main_classes/trainer" target="_blank">Trainer</a> and <a href="https://huggingface.co./docs/datasets" target="_blank">datasets</a> for the complete code base</li>
<li><a href="https://hydra.cc" target="_blank">Hydra</a> for configuration management</li>
<li><a href="https://wandb.ai/site" target="_blank">Weights & Biases</a> for experiment tracking</li>
<li>Git/GitHub for code tracking</li>
<li>Google Cloud Bucket for storing models</li>
<li>A100 for training large models and RTX 5000 for smaller models</li>
</ul>
<p>This setup allowed me to utilize the limited time I had during the competition fully. I could run a series of experiments by just changing some flags from the command line itself.</p>
<p>I spent a lot of time at the start of the competition building a reliable CV strategy. I tried all the strategies shared in public notebooks and discussions and finally settled on a group-by-anchor, stratify-on-score strategy as follows:</p>
<pre><code> train_df["score_bin"] = pd.cut(train_df["score"], bins=5, labels=False)
train_df["fold"] = -1
sgkf = StratifiedGroupKFold(n_splits=n_folds, shuffle=True, random_state=seed)
folds = sgkf.split(
X=train_df,
y=train_df["score_bin"].to_numpy(),
groups=train_df["anchor"].to_numpy(),
)
for fold, (trn_idx, val_idx) in enumerate(folds):
train_df.loc[val_idx, "fold"] = fold
train_df["fold"] = train_df["fold"].astype(int)
</code></pre>
<h1>Summary</h1>
<ul>
<li>Used <code>[s]</code> instead of <code>[SEP]</code>. Improved CV of baseline model from <code>0.81408</code> to <code>0.81906</code></li>
<li>Trained most of the models with three losses for diversity: MSE, BCE, and Pearson loss (Pearson worked best for me)</li>
<li>Used multi-sample dropout for some models.</li>
<li>Tried various types of pooling heads for diversity (a masked-pooling sketch is shown after this list):<ul>
<li>Attention Pooling</li>
<li>Mean Pooling</li>
<li>Max Pooling</li>
<li>Mean Max Concatenate Pooling</li>
<li>Conv1D Pooling</li></ul></li>
<li>Used low learning rates for larger models</li>
<li>Dynamic Padding and Uniform Length Batching for increasing training and inference speed</li>
</ul>
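<p>A minimal sketch of the masked mean, max, and mean-max concatenate pooling heads mentioned above, operating on last_hidden_state and attention_mask:</p>
<pre><code>import torch

def mean_pool(last_hidden_state, attention_mask):
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

def max_pool(last_hidden_state, attention_mask):
    masked = last_hidden_state.masked_fill(attention_mask.unsqueeze(-1) == 0, -1e4)
    return masked.max(dim=1).values

def mean_max_pool(last_hidden_state, attention_mask):
    return torch.cat([mean_pool(last_hidden_state, attention_mask),
                      max_pool(last_hidden_state, attention_mask)], dim=1)
</code></pre>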
<h1>Ensemble</h1>
<p>I used the hill-climbing approach explained by Chris Deotte <a href="https://www.kaggle.com/code/cdeotte/forward-selection-oof-ensemble-0-942-private/notebook" target="_blank">here</a>. I used a tolerance of 0.0 for the final submission (i.e. add a new model only if it increases the CV by at least the tolerance; in this case models kept being added as long as the CV increased).<br>
Also, I scaled all model's predictions by <code>MinMaxScaler</code> for ensembling.<br>
However, I also selected a submission with a tolerance of 0.0003 to not overfit on CV but it turned out that the submission with the tolerance of 0 scored the highest on private LB.</p>
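<p>A minimal sketch of the hill-climbing loop on oof predictions (a simplified variant where re-adding a model increases its weight; the arrays are synthetic stand-ins):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
y = rng.random(2000)                                                    # stand-in oof labels
oofs = [y + rng.normal(0, s, 2000) for s in (0.05, 0.08, 0.10, 0.20)]   # stand-in model oofs

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

tolerance = 0.0
chosen = [int(np.argmax([pearson(o, y) for o in oofs]))]   # start from the best single model
best = pearson(oofs[chosen[0]], y)
while True:
    scores = [pearson(np.mean([oofs[j] for j in chosen + [i]], axis=0), y)
              for i in range(len(oofs))]
    i_best = int(np.argmax(scores))
    if scores[i_best] > best + tolerance:
        best, chosen = scores[i_best], chosen + [i_best]
    else:
        break
print(best, chosen)   # re-adding an index is how a model gets a larger weight in this variant
</code></pre>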
<p>My final submission:</p>
<table>
<thead>
<tr>
<th>Submission</th>
<th>CV Score</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>31 experiments (31 * 5 = 155 models)</td>
<td>0.85484</td>
<td>0.8505</td>
<td>0.8652</td>
</tr>
<tr>
<td>8 experiments (8 * 5 = 40 models)</td>
<td>0.85382</td>
<td>0.8504</td>
<td>0.8650</td>
</tr>
</tbody>
</table>
<p>An ensemble of only 8 experiments would have gotten me the same rank 😂</p>
<h1>Things that didn't work for me</h1>
<ul>
<li>Ordinal Regression.</li>
<li>Posing the problem as classification and using cross-entropy loss / weighted cross-entropy loss.</li>
<li>I really wanted to try the SVR trick inspired by <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> and <a href="https://www.kaggle.com/titericz" target="_blank">@titericz</a> from PetFinder comp but couldn't make it work as I started implementing it in the last 2 days of the competition and there was not enough time to debug what went wrong.</li>
<li>Simple average ensemble of models worked worse compared to a weighted ensemble by hill climbing.</li>
<li>Second stage model stacking performed worse than hill climbing.</li>
<li>I see many top teams benefitting from AWP. I will give it a shot in the next NLP comp :)</li>
</ul>
<p>Here are the CV scores for final ensemble submission</p>
<table>
<thead>
<tr>
<th>Experiment</th>
<th>CV Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>107_microsoft-deberta-v3-large_fin-val-strategy-pearson-baseline</td>
<td>0.8332</td>
</tr>
<tr>
<td>109_microsoft-deberta-v3-large_pearson-attention-pool</td>
<td>0.8329</td>
</tr>
<tr>
<td>110_microsoft-deberta-v3-large_mse-msd</td>
<td>0.8348</td>
</tr>
<tr>
<td>111_microsoft-deberta-v3-large_pearson-ms</td>
<td>0.8342</td>
</tr>
<tr>
<td>112_microsoft-deberta-v3-large_mse-transformer-head</td>
<td>0.8130</td>
</tr>
<tr>
<td>121_anferico-bert-for-patents_mse-baseline-low-bs</td>
<td>0.8225</td>
</tr>
<tr>
<td>122_anferico-bert-for-patents_pearson-baseline-low-bs</td>
<td>0.8223</td>
</tr>
<tr>
<td>123_anferico-bert-for-patents_mse-msd-low-bs</td>
<td>0.8229</td>
</tr>
<tr>
<td>124_anferico-bert-for-patents_pearson-msd-low-bs</td>
<td>0.8209</td>
</tr>
<tr>
<td>126_anferico-bert-for-patents_pearson-attention-pool-low-bs</td>
<td>0.8212</td>
</tr>
<tr>
<td>128_microsoft-deberta-v3-large_pearson-mean-pool</td>
<td>0.8333</td>
</tr>
<tr>
<td>129_microsoft-deberta-v3-large_mse-conv1d-pool</td>
<td>0.8312</td>
</tr>
<tr>
<td>130_microsoft-deberta-v3-large_pearson-conv1d-pool</td>
<td>0.8322</td>
</tr>
<tr>
<td>134_anferico-bert-for-patents_pearson-conv1d-pool</td>
<td>0.8200</td>
</tr>
<tr>
<td>140_microsoft-deberta-v3-large_pearson-mean-max-concatenate-pool</td>
<td>0.8342</td>
</tr>
<tr>
<td>144_anferico-bert-for-patents_pearson-mean-max-concatenate-pool</td>
<td>0.8215</td>
</tr>
<tr>
<td>152_microsoft-deberta-v2-xlarge_pearson-lowlr</td>
<td>0.8270</td>
</tr>
<tr>
<td>154_microsoft-deberta-xlarge_pearson</td>
<td>0.8263</td>
</tr>
<tr>
<td>164_albert-xxlarge-v2_mse</td>
<td>0.8108</td>
</tr>
<tr>
<td>165_albert-xxlarge-v2_bce</td>
<td>0.8109</td>
</tr>
<tr>
<td>166_albert-xxlarge-v2_pearson</td>
<td>0.8110</td>
</tr>
<tr>
<td>168_google-electra-large-discriminator_bce</td>
<td>0.8143</td>
</tr>
<tr>
<td>169_google-electra-large-discriminator_pearson</td>
<td>0.8098</td>
</tr>
<tr>
<td>172_funnel-transformer-large_pearson</td>
<td>0.8219</td>
</tr>
<tr>
<td>174_funnel-transformer-xlarge_bce</td>
<td>0.8238</td>
</tr>
<tr>
<td>175_funnel-transformer-xlarge_pearson</td>
<td>0.8225</td>
</tr>
<tr>
<td>177_albert-xxlarge-v2_bce-lowlr</td>
<td>0.8102</td>
</tr>
<tr>
<td>183_microsoft-deberta-large_bce</td>
<td>0.8214</td>
</tr>
<tr>
<td>197_microsoft-deberta-v2-xlarge_mse-pearson</td>
<td>0.8267</td>
</tr>
<tr>
<td>205_microsoft-cocolm-large_pearson-msd</td>
<td>0.8218</td>
</tr>
<tr>
<td>208_microsoft-cocolm-large_mse-conv1d-pool</td>
<td>0.8180</td>
</tr>
</tbody>
</table>
<h1>Acknowledgments</h1>
<p>I would like to thank everyone who shared during the competition. I learned a lot and will try to apply all the learning in the next competition. Also, a special thanks to <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently" target="_blank">this</a> amazing notebook by <a href="https://www.kaggle.com/rhtsingh" target="_blank">@rhtsingh</a>, got to learn a lot from this. And the legend <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> for his detailed <a href="https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175614" target="_blank">discussion</a> and notebook on hill climbing.</p>
<p><strong>Also, a huge thank you to <a href="https://jarvislabs.ai" target="_blank">jarvislabs.ai</a> for the GPU support. The platform enabled me to do multiple experiments rapidly with instant and powerful GPU instances. All my models were trained on <a href="https://jarvislabs.ai" target="_blank">jarvislabs.ai</a> and this could not have been achieved without them.</strong></p>
<p>Thank you for your kind words!</p>
<p>Thank you!</p> |
U.S. Patent Phrase to Phrase Matching | 3rd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We tried linear regression and LightGBM; the former gives a slightly better result on the private leaderboard. In addition, a neural network is not a good choice for stacking, since the features (from the oof) are highly correlated with the label and it is easy to overfit.</p>
<p>Thanks to the organizer and the Kaggle team for hosting this competition, and thanks to the many participants who shared their ideas in notebooks or discussions. It was difficult to improve the score until we found the "magic". Fortunately, our team made the breakthrough and got 3rd place at the end of the competition. Great thanks to my teammates and their hard work! <a href="https://www.kaggle.com/xiamaozi11" target="_blank">@xiamaozi11</a> <a href="https://www.kaggle.com/renxingkai" target="_blank">@renxingkai</a> <a href="https://www.kaggle.com/decalogue" target="_blank">@decalogue</a> </p>
<h2>Summary</h2>
<p>Our team tried to find additional information about the anchor and target in the <a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/314456" target="_blank">public dataset</a> shared by the organizer. However, this brought little benefit, because only part of the phrases could be matched and those texts were often useless.</p>
<p>The essential part of our solution is adding the targets that share the same anchor to each data sample. This data processing trick boosted our single-model score from 0.84x to 0.85x on the LB.</p>
<p>We stack 12 different models in the final submission. DeBERTa V3 large with MSE loss gives the best single model score on both CV and LB.</p>
<h2>Validation strategy</h2>
<p>Both <code>StratifiedGroupKFold</code> and <code>GroupKFold</code> can prevent data with the same anchor from leaking into the validation set. <code>GroupKFold</code> keeps the training data size of each fold the same, while <code>StratifiedGroupKFold</code> keeps the label distribution. Both of them were used (by different team members) and gave a relatively strong correlation between CV and LB.</p>
<h2>Data processing</h2>
<p>Input data from baseline</p>
<pre><code>anchor [SEP] target [SEP] context text
</code></pre>
<p>Our input data</p>
<pre><code>anchor [SEP] target; target_x1; target_x2; ... traget_xn; [SEP] context text
</code></pre>
<p>where target_xi are the targets with the same anchor and context code.</p>
<p>It's easy to get a comparable improvement by hard-coding their order, while shuffling the sequence reaches a higher score (a minimal sketch follows).</p>
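<p>A minimal sketch of shuffling the grouped targets at load time; the dataframe columns (neighbor_targets, context_text) are illustrative, not the team's exact schema:</p>
<pre><code>import random
from torch.utils.data import Dataset

class PatentDataset(Dataset):
    def __init__(self, df, tokenizer, max_len=160):
        self.df, self.tok, self.max_len = df.reset_index(drop=True), tokenizer, max_len

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        neighbors = list(row["neighbor_targets"])   # other targets with the same anchor/context
        random.shuffle(neighbors)                   # a new order every epoch -> light augmentation
        text = (row["anchor"] + "[SEP]" + row["target"] + "; " + "; ".join(neighbors)
                + "[SEP]" + row["context_text"])
        enc = self.tok(text, truncation=True, max_length=self.max_len, padding="max_length")
        enc["labels"] = float(row["score"])
        return enc
</code></pre>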
<h2>Model</h2>
<p>Pretrained model</p>
<ul>
<li>Electra large</li>
<li>Bert For Patent</li>
<li>DeBERTa V3 large</li>
<li>DeBERTa V1</li>
<li>DeBERTa V1 xlarge</li>
</ul>
<p>Loss</p>
<ul>
<li>binary cross entropy loss</li>
<li>mean squared error loss</li>
<li>pearson correlation loss</li>
</ul>
<p>There is no big difference among those loss functions. However, using different losses during training leads to high diversity when ensembling, because the distribution of the predictions looks different, as seen from the oof.</p>
<p>Tricks</p>
<ul>
<li>different learning rate for different layer</li>
<li>fgm</li>
<li>ema</li>
</ul>
<p>You may get around a 0.001~0.002 improvement by adding all of those tricks.</p>
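<p>A minimal sketch of the "different learning rate for different layer" trick above, using AdamW parameter groups (the decay factor is illustrative and the name matching assumes a HuggingFace DeBERTa-style encoder):</p>
<pre><code>import torch
import torch.nn as nn
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-large")
head = nn.Linear(model.config.hidden_size, 1)

def layerwise_groups(model, head, base_lr=2e-5, head_lr=1e-3, decay=0.9):
    n_layers = model.config.num_hidden_layers
    groups = [{"params": list(head.parameters()), "lr": head_lr}]
    buckets = {}
    for name, param in model.named_parameters():
        if ".layer." in name:
            layer_id = int(name.split(".layer.")[1].split(".")[0]) + 1   # encoder layers 1..N
        elif "embeddings" in name:
            layer_id = 0                      # embeddings get the smallest lr
        else:
            layer_id = n_layers               # anything else rides with the top layer
        buckets.setdefault(layer_id, []).append(param)
    for layer_id, params in buckets.items():
        groups.append({"params": params, "lr": base_lr * decay ** (n_layers - layer_id)})
    return groups

optimizer = torch.optim.AdamW(layerwise_groups(model, head), weight_decay=0.01)
</code></pre>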
<h2>Result</h2>
<p>Single Model</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public Score</th>
<th>Private Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bert For Patent</td>
<td>0.8362</td>
<td>/</td>
<td>/</td>
</tr>
<tr>
<td>DeBERTa V3 large</td>
<td>0.8516</td>
<td>0.8559</td>
<td>0.8675</td>
</tr>
<tr>
<td>DeBERTa V1</td>
<td>0.8385</td>
<td>/</td>
<td>/</td>
</tr>
<tr>
<td>DeBERTa V1 xlarge</td>
<td>0.8423</td>
<td>/</td>
<td>/</td>
</tr>
<tr>
<td>Electra large</td>
<td>0.8483</td>
<td>/</td>
<td>/</td>
</tr>
</tbody>
</table>
<p>Ensemble</p>
<p>12 models with different cross-validation strategies, different concatenation methods, different pretrained models and different loss functions.</p>
<table>
<thead>
<tr>
<th>Model</th>
<th>CV</th>
<th>Public Score</th>
<th>Private Score</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mean of 12 models</td>
<td>0.8674</td>
<td>0.8627</td>
<td>0.8765</td>
</tr>
<tr>
<td>Stacking 12 models</td>
<td>0.8683</td>
<td>0.8640</td>
<td>0.8772</td>
</tr>
</tbody>
</table>
<h2>Other ideas</h2>
<p>There are some ideas we think would be useful but had no time to try:</p>
<ul>
<li>Pretraining with the CPC text</li>
<li>Prompt learning</li>
<li>Predict the score of those concatenated targets together</li>
</ul>
<p>I had never thought about their usage of the oof scores; it seems treating them as text may lead the model to learn the similarity of the (target, score) sequence instead of outputting the score right after each target. 😂</p>
<p>I have just read your solution. It seems the ensemble of text models and token models helped you a lot. And congratulations on your strong solo finish!</p>
<p>It's still a text classification model, but I think a token classification model may be more powerful because it's trained with label information from other targets. Unfortunately, our team had no time to implement it in the last few days.</p> |
U.S. Patent Phrase to Phrase Matching | 52nd place Solution 🥈 (Public 105th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>Introduction</h1>
<p>First of all, our team (<a href="https://www.kaggle.com/ktakita" target="_blank">@K.T.</a>, <a href="https://www.kaggle.com/stgkrtua" target="_blank">@Taro_pan</a>, <a href="https://www.kaggle.com/kotashimomura" target="_blank">@S.Kota</a>, <a href="https://www.kaggle.com/javasparrowismu" target="_blank">@Java_is_a_sparrow</a>, <a href="https://www.kaggle.com/mipypf" target="_blank">@mipypf</a>) would like to express our deepest gratitude for organizing such an exciting competition. Thank you to all the staff involved.</p>
<p>I would like to take this opportunity to say thank you to my best teammates (<a href="https://www.kaggle.com/ktakita" target="_blank">@K.T.</a>, <a href="https://www.kaggle.com/stgkrtua" target="_blank">@Taro_pan</a>, <a href="https://www.kaggle.com/kotashimomura" target="_blank">@S.Kota</a>,<a href="https://www.kaggle.com/javasparrowismu" target="_blank">@Java_is_a_sparrow</a>) !</p>
<h1>OverView</h1>
<ul>
<li>A summary of the final two selected subs is as follows.</li>
</ul>
<table>
<thead>
<tr>
<th>Sub No.</th>
<th>CV</th>
<th>Method of determining coefficients for ensemble</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>148a</td>
<td>0.8518</td>
<td>ridge (14 model)</td>
<td>0.8502</td>
<td>0.8641</td>
</tr>
<tr>
<td>153</td>
<td>0.8520</td>
<td>optuna weight search (15 model)</td>
<td>0.8506</td>
<td>0.8638</td>
</tr>
</tbody>
</table>
<h2>CV Strategy</h2>
<ul>
<li><p>MultilabelStratifiedKFold with anchor and score </p>
<p><a href="https://www.kaggle.com/code/abhishek/creating-folds-properly-hopefully-p" target="_blank">https://www.kaggle.com/code/abhishek/creating-folds-properly-hopefully-p</a></p>
<p><a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/315220" target="_blank">https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/315220</a></p>
<p><a href="https://www.kaggle.com/code/hannes82/pppm-deberta-v3-large-closing-the-cv-lb-gap/notebook" target="_blank">https://www.kaggle.com/code/hannes82/pppm-deberta-v3-large-closing-the-cv-lb-gap/notebook</a></p></li>
</ul>
<h2>LOSS function used</h2>
<ul>
<li><p>BCEWithLogitsLoss</p>
<p>We experimented with several different loss functions, but chose BCEWithLogitsLoss, which gave the most reasonable distribution of CV predictions.</p>
</ul>
<h2>Phenomenon of extremely low cv folds appearing</h2>
<ul>
<li><p>In this competition, folds with an extremely low CV often appeared.</p>
<p><a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/319237" target="_blank">https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/319237</a></p>
<p>Therefore, it was necessary to adjust the learning rate and scheduler when this phenomenon appeared.</p></li>
</ul>
<h2>Ensemble model</h2>
<ul>
<li>Sub No.148a:no6, no19, no37, no41, no52, no76, no86, ex28, ex30, ex37, exp055, exp088, exp147d, exp149</li>
<li>Sub No.153:no6, no19, no37, no41, no52, no76, no86, ex28, ex30, ex37, exp055, exp088, exp147d, exp149, exp177</li>
</ul>
<p>what we used model</p>
<table>
<thead>
<tr>
<th>Exp ID</th>
<th>model</th>
<th>Experimental Details</th>
<th>CV</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>no6</td>
<td>microsoft/deberta-v3-large</td>
<td>Attention head</td>
<td>0.8261</td>
<td>0.8374</td>
<td>0.8543</td>
</tr>
<tr>
<td>no19</td>
<td>microsoft/deberta-v3-large</td>
<td>Transformer head</td>
<td>0.8135</td>
<td>0.8330</td>
<td>0.8495</td>
</tr>
<tr>
<td>no37</td>
<td>bert-for-patents</td>
<td>Attention head</td>
<td>0.816</td>
<td>0.8316</td>
<td>0.8440</td>
</tr>
<tr>
<td>no41</td>
<td>roberta-large</td>
<td>Attention head</td>
<td>0.7874</td>
<td>0.8181</td>
<td>0.8251</td>
</tr>
<tr>
<td>no52</td>
<td>deberta-large-mnli</td>
<td>Attention head</td>
<td>0.7818</td>
<td>0.8146</td>
<td>0.8286</td>
</tr>
<tr>
<td>no76</td>
<td>funnel-transformer/large</td>
<td>Attention head</td>
<td>0.8105</td>
<td></td>
<td></td>
</tr>
<tr>
<td>no86</td>
<td>bert-for-patents</td>
<td>LSTM head</td>
<td>0.8193</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ex28</td>
<td>microsoft/deberta-v2-xxlarge</td>
<td>Attention head</td>
<td>0.8172</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ex30</td>
<td>microsoft/deberta-v3-large</td>
<td>LSTM head</td>
<td>0.8290</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ex37</td>
<td>microsoft/deberta-v3-large</td>
<td>GRU head</td>
<td>0.8301</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp055</td>
<td>microsoft/deberta-v3-large</td>
<td>1DCNN head</td>
<td>0.8302</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp088</td>
<td>microsoft/deberta-v3-large</td>
<td>1DCNN head</td>
<td>0.8291</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp147d</td>
<td>google/electra-large-discriminator</td>
<td>sum last_hidden_state mask attention_mask</td>
<td>0.7868</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp149</td>
<td>valhalla/distilbart-mnli-12-9</td>
<td>sum last_hidden_state mask attention_mask</td>
<td>0.7986</td>
<td></td>
<td></td>
</tr>
<tr>
<td>exp177</td>
<td>studio-ousia/luke-large</td>
<td>sum last_hidden_state mask attention_mask</td>
<td>0.8033</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
<h1>Worked</h1>
<ul>
<li><p>train['anchor'] + ' is ' + train['target'] + ' in ' + train['context_text']</p></li>
<li><p>claim's most frequent 300 words</p>
<ul>
<li><p>CV did not go up, but it helped in the ensemble.</p></li>
<li><p>In the claim dataset, group by context and take the 300 most frequent words (a minimal sketch is shown after this list).</p>
<p><a href="https://www.kaggle.com/datasets/fankaixie/cpc-description" target="_blank">https://www.kaggle.com/datasets/fankaixie/cpc-description</a></p></li></ul></li>
<li><p>LSTM head</p></li>
<li><p>GRU head</p>
<ul>
<li>Slightly better score than the LSTM head (CV: 0.8290 → 0.8301). </li></ul></li>
<li><p>1DCNN head</p>
<ul>
<li>This idea comes from the receptive field of segmentation models. Multi-scale features can be obtained by applying convolution filters with different kernel sizes to the embedding produced by the backbone and concatenating them.</li></ul>
<pre><code> #model
self.cnn_1_1 = nn.Sequential(
nn.Conv1d(self.config.hidden_size, cfg.cnn_hidden_size, kernel_size=1, padding=0),
nn.ReLU()
)
self.cnn_1_2 = nn.Sequential(
nn.Conv1d(cfg.cnn_hidden_size, cfg.cnn_hidden_size, kernel_size=1, padding=0),
nn.ReLU()
)
self.cnn_2_1 = nn.Sequential(
nn.Conv1d(self.config.hidden_size, cfg.cnn_hidden_size, kernel_size=2, padding=1),
nn.ReLU()
)
self.cnn_2_2 = nn.Sequential(
nn.Conv1d(cfg.cnn_hidden_size, cfg.cnn_hidden_size, kernel_size=2, padding=0),
nn.ReLU()
)
def feature(self, inputs):
outputs = self.model(**inputs)
last_hidden_states = outputs[0]
weights = self.attention(last_hidden_states)
feature = torch.sum(weights * last_hidden_states, dim=1)
#1dcnn
feature = feature.unsqueeze(2)
h1 = self.cnn_1_1(feature)
h1 = self.cnn_1_2(h1)
h2 = self.cnn_2_1(feature)
h2 = self.cnn_2_2(h2)
h = torch.cat((h1, h2), 1)
feature = h.squeeze(2)
return feature
</code></pre></li>
<li><p>sum last_hidden_state mask attention_mask <br>
<a href="https://www.kaggle.com/code/gaozhao/a-simple-ensemble-of-two" target="_blank">https://www.kaggle.com/code/gaozhao/a-simple-ensemble-of-two</a></p></li>
</ul>
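<p>A minimal sketch of the "most frequent 300 words per context" feature from the claim dataset mentioned above; the claims dataframe and its columns are assumptions:</p>
<pre><code>from collections import Counter
import pandas as pd

# Assumed claim dataframe: one row per claim text with its CPC context code.
claims = pd.DataFrame({
    "context": ["A47", "A47", "F21"],
    "text": ["a chair with adjustable legs", "a folding chair", "a light emitting diode lamp"],
})

def top_words(texts, k=300):
    counter = Counter(w for t in texts for w in str(t).lower().split())
    return " ".join(w for w, _ in counter.most_common(k))

context_words = claims.groupby("context")["text"].apply(lambda s: top_words(s, k=300))
# context_words["A47"] can then be appended to the model input for rows with context A47.
</code></pre>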
<h1>Not worked</h1>
<ul>
<li>Pseudo labeling(labeling CPC description dataset)</li>
<li>Add feature (levenshtein distance with anchor and target, part of speech vector)</li>
<li>Focal loss</li>
<li>(MSELoss + BCEWithLogitsLoss) / 2</li>
<li>AWP</li>
<li>MLM</li>
<li>PLM</li>
</ul> |
U.S. Patent Phrase to Phrase Matching | 56th solution of Our First silver medal !🥈 Just tune hyperparameters and models on the baseline. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!! <a href="https://www.kaggle.com/yolo1996" target="_blank">@yolo1996</a> </p>
<p>Thanks! Actually, they are not important, just keep the same.</p>
<p>Thanks! <a href="https://www.kaggle.com/hoturam" target="_blank">@hoturam</a> </p>
<h1>A record of our simple, ordinary solution, which brought a not-bad result! 😊</h1>
<p>Thanks for the <a href="https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-w-w-b-train" target="_blank">training notebook</a> and <a href="https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-inference" target="_blank">inference notebook</a> baselines by <a href="https://www.kaggle.com/Y.Nakama" target="_blank">@Y.Nakama</a>; we learned a lot.</p>
<h2>【Data format】</h2>
<p>Just like baseline: ['anchor'] + '[SEP]' + ['target'] + '[SEP]' + ['context_text'] </p>
<h2>【Cross Validation】</h2>
<p>We use the <code>train_fold5.csv</code> and <code>train_fold4.csv</code> from <a href="https://www.kaggle.com/datasets/helloggfss/foldsdump" target="_blank">https://www.kaggle.com/datasets/helloggfss/foldsdump</a></p>
<h2>【Ensemble】</h2>
<table>
<thead>
<tr>
<th>Model</th>
<th>Public LB</th>
<th>Private LB</th>
</tr>
</thead>
<tbody>
<tr>
<td>bert_for_patent_5folds</td>
<td>0.8312</td>
<td>0.8420</td>
</tr>
<tr>
<td>bert_for_patent_4folds</td>
<td>0.8320</td>
<td>0.8414</td>
</tr>
<tr>
<td>electra_5folds</td>
<td>0.8371</td>
<td>0.8504</td>
</tr>
<tr>
<td>electra_5folds_tuned</td>
<td>0.8383</td>
<td>0.8508</td>
</tr>
<tr>
<td>deberta_v3_large</td>
<td>0.8376</td>
<td>0.8510</td>
</tr>
<tr>
<td>deberta_v3_large</td>
<td>0.8380</td>
<td>0.8490</td>
</tr>
<tr>
<td>Funnel_xlarge_5folds</td>
<td>0.8380</td>
<td>0.8488</td>
</tr>
<tr>
<td>Funnel_large_4folds</td>
<td>0.8325</td>
<td>0.8416</td>
</tr>
<tr>
<td><strong>Average Ensemble 8 models</strong></td>
<td><strong>0.8533(33th)</strong></td>
<td><strong>0.8639(56th)</strong></td>
</tr>
</tbody>
</table>
<h2>【Tune tricks】</h2>
<p><strong>Retrain:</strong></p>
<ul>
<li>Firstly, train the single model for about 6 epochs with a learning rate of about 2e-5, and save it.</li>
<li>Secondly, load the saved model state and train it again with a smaller learning rate, like 5e-6, for about 5 epochs.</li>
<li>I guess this helps the model reach a local minimum more easily; it always improved CV or LB by 0.002+ (a minimal sketch follows this list).</li>
</ul>
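<p>A minimal sketch of the two-stage retrain pattern; <code>build_model</code> and <code>train_loop</code> are hypothetical placeholders for the usual PPPM training loop (e.g. the baseline notebook's), not names from the actual code:</p>
<pre><code># Two-stage training: save after stage 1, reload and fine-tune with a smaller LR.
import torch

def retrain(build_model, train_loop, ckpt_path="stage1.pth"):
    # Stage 1: ~6 epochs with lr ~2e-5, then save the weights.
    model = build_model()
    train_loop(model, lr=2e-5, epochs=6)
    torch.save(model.state_dict(), ckpt_path)

    # Stage 2: reload the stage-1 weights and train again with a much
    # smaller learning rate (~5e-6) for ~5 epochs.
    model = build_model()
    model.load_state_dict(torch.load(ckpt_path))
    train_loop(model, lr=5e-6, epochs=5)
    return model
</code></pre>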
<p><strong>Others:</strong></p>
<ul>
<li>Just tune warm up, learning rate and num of cycles.</li>
<li>Adding an attention head and LayerNorm improved 0.002+ on all models except DeBERTa.</li>
</ul>
<h2>【At last】</h2>
<p>Thanks to my teammate <a href="https://www.kaggle.com/buyingniu" target="_blank">@buyingniu</a>; we tuned the parameters all day long.😂<br>
And I surely learned a lot of new things from the top solutions! 💪</p>
U.S. Patent Phrase to Phrase Matching | 57th place solution(public 188th) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you for competition organizers having this interesting competition. Thank you for everyone involved with this competition. I learned a lot from public notebooks and discussions.</p>
<p>My main solution is ensemle/stacking of several experiments' results. Details below,</p>
<h2>Environments</h2>
<ul>
<li>All experiments by my local machine(Corei7 12700 / 64GB RAM / RTX 3090)</li>
<li>PyTorch(with pytorch-pfn-extras) and huggingface transformers</li>
<li>Manage experiments by MLFlow<br>
<br><br>
<br></li>
</ul>
<h2>Summary of My Experiments</h2>
<p><br></p>
<h3>common settings</h3>
<h4>batch_size, lr, etc…</h4>
<ul>
<li>for large models, I used batch_size=16, accumulate_grad_batches=2, lr=5.0e-06 (OneCycleLR), and a small clip_grad_norm value (2.0~100.0)</li>
<li>for small models, I used batch_size=32, accumulate_grad_batches=1, lr=5.0e-06 (OneCycleLR), and a large clip_grad_norm value (1000.0); a rough sketch of this setup follows the list<br>
<br></li>
</ul>
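<p>A rough sketch of the setup above, not the exact experiment code; the training-loop pattern in the trailing comments is illustrative only:</p>
<pre><code># AdamW + OneCycleLR with gradient accumulation and gradient-norm clipping.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR

def configure_training(model, train_loader, epochs=5, lr=5.0e-06,
                       accumulate_grad_batches=2, max_grad_norm=2.0):
    optimizer = AdamW(model.parameters(), lr=lr)
    steps_per_epoch = len(train_loader) // accumulate_grad_batches
    scheduler = OneCycleLR(optimizer, max_lr=lr, total_steps=steps_per_epoch * epochs)
    return optimizer, scheduler

# Inside the training loop, the accumulation / clipping pattern is roughly:
#     loss = criterion(model(**batch), targets) / accumulate_grad_batches
#     loss.backward()
#     if (step + 1) % accumulate_grad_batches == 0:
#         torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
#         optimizer.step()
#         scheduler.step()
#         optimizer.zero_grad()
</code></pre>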
<h4>additional special tokens</h4>
<p>I made the input texts in this form (a sketch of the tokenizer setup follows the list below),<br>
text = f"{cpc_section} {sep} {anchor} {sep} {target} {sep} {cpc_context}."</p>
<ul>
<li>where cpc_section = "[A]", "[B]", …, or "[H]" as special tokens</li>
<li>where sep = "[s]" as a special token<br>
<br><br>
<br></li>
</ul>
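<p>A minimal sketch of how such special tokens can be registered with a HuggingFace tokenizer and how the input text is built; the backbone name and the context-to-section mapping below are illustrative assumptions, not taken from the experiment code:</p>
<pre><code>from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

# Register "[A]".."[H]" and "[s]" as additional special tokens.
section_tokens = [f"[{c}]" for c in "ABCDEFGH"]
tokenizer.add_special_tokens({"additional_special_tokens": section_tokens + ["[s]"]})
model.resize_token_embeddings(len(tokenizer))  # make room for the new tokens

def build_text(row):
    cpc_section = f"[{row['context'][0]}]"  # e.g. context "A47" -> "[A]"
    return f"{cpc_section} [s] {row['anchor']} [s] {row['target']} [s] {row['context_text']}."
</code></pre>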
<h3>Summary Table</h3>
<table>
<thead>
<tr>
<th>exp</th>
<th>RUN_ID</th>
<th>seed</th>
<th>n_fold</th>
<th>encoder</th>
<th>head</th>
<th>loss</th>
<th>preprocess</th>
<th>optimizer/training</th>
<th>valid_score</th>
<th>public_score</th>
<th>private_score</th>
</tr>
</thead>
<tbody>
<tr>
<td>exp018</td>
<td>b02bfabced5345b689e6ac36e25a478c</td>
<td>42</td>
<td>8</td>
<td>microsoft/deberta-v3-large</td>
<td>AttentionPoolHead</td>
<td>MSEWithLogitsLoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.83</td>
<td>0.839</td>
<td>0.8542</td>
</tr>
<tr>
<td>exp021</td>
<td>187e25eeeed943f08f6b450e47455542</td>
<td>42</td>
<td>5</td>
<td>microsoft/deberta-v3-large</td>
<td>SimpleHead</td>
<td>MSEWithLogitsLoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.836</td>
<td>0.832</td>
<td>0.856</td>
</tr>
<tr>
<td>exp041</td>
<td>dc95c61f15bf4ac0b9e1de3ac1299f45</td>
<td>1221</td>
<td>10</td>
<td>microsoft/deberta-v3-large</td>
<td>AttentionPoolHead</td>
<td>MSELoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.8325</td>
<td>0.8369</td>
<td>0.8553</td>
</tr>
<tr>
<td>exp050</td>
<td>7923146adde64b6a9643eb163d8d223f</td>
<td>1221</td>
<td>10</td>
<td>microsoft/deberta-v3-base</td>
<td>MultiSampleDropoutHead</td>
<td>MSELoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.82</td>
<td>0.8213</td>
<td>0.8403</td>
</tr>
<tr>
<td>exp051</td>
<td>bc8145deac9d4376b708e8b1499da970</td>
<td>1221</td>
<td>10</td>
<td>anferico/bert-for-patents</td>
<td>MultiSampleDropoutHead</td>
<td>MSELoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.825</td>
<td>0.8295</td>
<td>0.8396</td>
</tr>
<tr>
<td>exp052</td>
<td>b977b33e7f8547bcb1e62e42e021e627</td>
<td>1221</td>
<td>10</td>
<td>roberta-large</td>
<td>MultiSampleDropoutHead</td>
<td>MSELoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.807</td>
<td>0.8262</td>
<td>0.8318</td>
</tr>
<tr>
<td>exp053</td>
<td>9aa398aea22c4048b4e904c36bb3605b</td>
<td>1221</td>
<td>10</td>
<td>microsoft/deberta-v3-large</td>
<td>MultiSampleDropoutHead</td>
<td>MSELoss</td>
<td>None</td>
<td>AdamW</td>
<td>0.8307</td>
<td>0.8369</td>
<td>0.8513</td>
</tr>
<tr>
<td>exp054</td>
<td>e38625b57d3e4f00992c4f191a786c5d</td>
<td>1221</td>
<td>10</td>
<td>microsoft/deberta-v3-large</td>
<td>AttentionPoolHead</td>
<td>BCEWithLogitsLoss</td>
<td>lowercase, “;”→”,”</td>
<td>AdamW</td>
<td>0.8308</td>
<td>0.8347</td>
<td>0.8526</td>
</tr>
<tr>
<td>exp060</td>
<td>c05c9ec899474ff58fb3552fe4a084ed</td>
<td>1</td>
<td>5</td>
<td>microsoft/deberta-v3-base</td>
<td>AttentionPoolHead</td>
<td>BCEWithLogitsLoss</td>
<td>None</td>
<td>SiFT/Adabelief</td>
<td>0.8168</td>
<td>0.8196</td>
<td>0.8344</td>
</tr>
<tr>
<td>exp061</td>
<td>f87754b75cb54060b19527b551b3e6fe</td>
<td>2</td>
<td>5</td>
<td>anferico/bert-for-patents</td>
<td>AttentionPoolHead</td>
<td>BCEWithLogitsLoss</td>
<td>None</td>
<td>SiFT/Adabelief</td>
<td>0.8271</td>
<td>0.8333</td>
<td>0.8447</td>
</tr>
<tr>
<td>exp064</td>
<td>c54349cdef094923b1003ba22b394ce9</td>
<td>5</td>
<td>5</td>
<td>microsoft/deberta-v3-large</td>
<td>AttentionPoolHead</td>
<td>MSELoss</td>
<td>None</td>
<td>SiFT/Adabelief</td>
<td>0.8343</td>
<td>0.8347</td>
<td>0.8552</td>
</tr>
<tr>
<td>exp065</td>
<td>56a4472423e54bd39ff7b02c0daa08ac</td>
<td>6</td>
<td>5</td>
<td>microsoft/deberta-v3-base</td>
<td>AttentionPoolHead</td>
<td>MSELoss</td>
<td>lowercase</td>
<td>SiFT/Adabelief</td>
<td>0.819</td>
<td>0.8243</td>
<td>0.8424</td>
</tr>
<tr>
<td>exp066</td>
<td>018420c68d63460f9ad625d51bb3903f</td>
<td>7</td>
<td>5</td>
<td>anferico/bert-for-patents</td>
<td>AttentionPoolHead</td>
<td>MSELoss</td>
<td>None</td>
<td>SiFT/Adabelief</td>
<td>0.8256</td>
<td>0.8283</td>
<td>0.8387</td>
</tr>
</tbody>
</table>
<p><br><br>
<br></p>
<h2>Ensemble</h2>
<p>I used all results in the table above for blending/stacking; a small sketch of the stacking approach follows the results table below.</p>
<ol>
<li>Blending by Nelder-Mead optimized weights based on oof predictions.</li>
<li>BayesianRidge Stacking by oof predictions.</li>
</ol>
<table>
<thead>
<tr>
<th>No</th>
<th>valid_score</th>
<th>public_score</th>
<th>private_score</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.8525</td>
<td>0.8493</td>
<td>0.8638</td>
</tr>
<tr>
<td>2</td>
<td>0.8522</td>
<td>0.8493</td>
<td>0.8638</td>
</tr>
</tbody>
</table>
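<p>A hedged sketch of the BayesianRidge stacking step (assumptions: <code>oof</code> is an (n_samples, n_models) array of out-of-fold predictions from the models above and <code>y</code> the true scores; this is an illustration, not the exact experiment code):</p>
<pre><code>import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import KFold

def fit_stacker(oof, y, n_splits=5, seed=42):
    stacked_oof = np.zeros(len(y))
    models = []
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=seed).split(oof):
        reg = BayesianRidge()
        reg.fit(oof[tr_idx], y[tr_idx])
        stacked_oof[va_idx] = reg.predict(oof[va_idx])
        models.append(reg)
    return models, stacked_oof

# At inference, test predictions from the same models are stacked column-wise
# and averaged over the fitted regressors:
#     test_pred = np.mean([m.predict(test_preds) for m in models], axis=0)
</code></pre>
<p>The Nelder-Mead variant instead searches blending weights directly on the same oof matrix with <code>scipy.optimize.minimize(..., method="Nelder-Mead")</code>.</p>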
<p><br><br>
<br></p>
<h2>Code</h2>
<p>training code: <a href="https://github.com/Taichicchi1221/us-patent-phrase-to-phrase-matching" target="_blank">https://github.com/Taichicchi1221/us-patent-phrase-to-phrase-matching</a><br>
infernce notebook: <a href="https://www.kaggle.com/code/hutch1221/uspppm-inference" target="_blank">https://www.kaggle.com/code/hutch1221/uspppm-inference</a></p>
<p><br></p>
<p>Advice is always welcome!<br>
Thank you for your attention.</p>
U.S. Patent Phrase to Phrase Matching | 5th solution: prompt is all you need | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I don't use original pet model, I catch the only logits of words YES as 1D, then use it to input bce with logits loss , when u predict, u just need to put logits of word YES </p>
<h1>Abstract</h1>
<p>In this competition, my main model structure is based on prompt learning, because prompt learning can make fuller use of the model's existing knowledge for reasoning and has obvious advantages in few-shot learning; I also used prompt learning to get first place in SemEval 2022 PCL Detection. I first reformulate the task as a specific form of cloze prompt, and then apply prompt-based learning on it to predict the confidence of the label words.</p>
<h1>Method</h1>
<h3>prompt learning</h3>
<p><img src="https://i.ibb.co/Z1NgtW6/usspp-pet.png" alt=""><br>
In order to enable the model to directly output the similarity between anchor and target, I improved on the model from the PET paper. I manually set YES as the label word for 1, took the model's logit at this word as the output, and used BCE loss so that the logit of YES equals the similarity.<br>
In the experiments, we found that the prompt model was 0.005 better than the regular model.</p>
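<p>A hedged sketch of the idea (the backbone name, prompt template, and label-word handling below are placeholders for illustration, not the actual competition code): the similarity is read off as the masked-LM logit of the label word at the [MASK] position and trained with BCEWithLogitsLoss against the score.</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "bert-base-uncased"  # placeholder backbone with an MLM head
tokenizer = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name)
yes_id = tokenizer.convert_tokens_to_ids("yes")  # label word standing for score 1.0
criterion = torch.nn.BCEWithLogitsLoss()

def similarity_logit(anchor, target, context_text):
    prompt = f"{anchor} and {target} are similar? {tokenizer.mask_token}. {context_text}"
    enc = tokenizer(prompt, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    logits = mlm(**enc).logits                    # (1, seq_len, vocab_size)
    return logits[0, mask_pos, yes_id].squeeze()  # 1-D logit of the label word

# Training step for one pair with score in [0, 1]:
#     loss = criterion(similarity_logit(a, t, c), torch.tensor(score))
# At inference, sigmoid(similarity_logit(...)) is used as the similarity.
</code></pre>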
<table>
<thead>
<tr>
<th>method</th>
<th>cv</th>
<th>lb</th>
</tr>
</thead>
<tbody>
<tr>
<td>attention_pool</td>
<td>0.8485</td>
<td>0.8535</td>
</tr>
<tr>
<td>prompt</td>
<td>0.8535</td>
<td>0.8570</td>
</tr>
</tbody>
</table>
<h3>Trick</h3>
<p>Through EDA, I found that the targets under the same anchor are very similar transformations of each other. At the same time, when using LGB to construct features, grouping by anchor and aggregating the target-related features improved the score a lot, so I decided to add this "teammate" information to the input (a fuller sketch is given below):<br>
teammate_info = ";".join(set(train[train["anchor"] == anchor]["target"]))<br>
context = anchor + [SEP] + target + [SEP] + teammate_info</p>
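<p>A fuller pandas sketch of the construction (an interpretation of the pseudo-code above, written for clarity rather than copied from the training script):</p>
<pre><code>import pandas as pd

def add_teammate_info(train, sep_token="[SEP]"):
    # For every anchor, join the set of all of its targets with ";".
    anchor2targets = (
        train.groupby("anchor")["target"]
        .apply(lambda s: ";".join(sorted(set(s))))
        .to_dict()
    )
    train = train.copy()
    train["teammate_info"] = train["anchor"].map(anchor2targets)
    train["text"] = (
        train["anchor"] + sep_token + train["target"]
        + sep_token + train["teammate_info"]
    )
    return train
</code></pre>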
<h3>Summarize</h3>
<p>In the last few days of the competition I was busy with something more important than the competition and lost a lot of time for the final sprint, but thank god I still finished in the gold medal zone solo and became a Kaggle GM. Thank you all.</p>
<p>thank you , Congratulations your team too , it's a interesting competition!</p>
<p>thank you , Congratulations your team too</p>
<p>I think what is more important is not the template and verbalizer, but how to convert prompt learning from a multi-label classification task into a continuous-value prediction task.</p>
<p>Due to some innovative technologies involved, all the code will not be open source at present, the main model code can refer to pet's paper for implementation</p>
<p>thank you , bro</p>
<p>thank you friend</p>
<p>Your performance in the competition is excellent congratulations on winning the championship</p> |
U.S. Patent Phrase to Phrase Matching | 7th place solution - the power of randomness | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to The U.S. Patent and Trademark Office (USPTO) and Kaggle for organizing such an engaging competition. This was my 3rd NLP competition, and I wouldn’t be able to get the position I got without the teachings and sharing of the Kaggle community, particularly <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a>, <a href="https://www.kaggle.com/abhishek" target="_blank">@abhishek</a>, <a href="https://www.kaggle.com/theoviel" target="_blank">@theoviel</a>, <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>, and <a href="https://www.kaggle.com/cpmpml" target="_blank">@cpmpml</a>. Thank you!</p>
<p>I finally got some time off work yesterday to review some of the posts and it seems that all top teams used the same base idea. So, instead of just talking about my solution, today I’m going to also share a bit about my journey through NLP competitions and my thought process.</p>
<h1>My journey</h1>
<p>This was my 3rd NLP competition. I learned a lot in each one. From the first one in which I relied mainly in tuning a public notebook for one epoch while consuming a week of GPU quota, to the second in which I managed to train a deberta-v3-large model end-to-end, and this one in which I managed to train a deberta-v2-xlarge, it’s been quite a ride. Even now, after reading some of the write-ups I learned new stuff about the capabilities of the torch library. Kaggle is truly a unique platform that gives you the opportunity to compete with the best and learn from them.</p>
<p>My journey in this competition started in early June after I was done with reviewing the write ups of my previous competition and doing some tests. I wondered if it wasn’t too late to join. I took a look at the data, compiled a list of ideas and decided to give it a try. First, I built a base solution that I could use as a benchmark. To my surprise even after some tweaks, its CV was pretty low when compared to both the LB and the top scoring public notebook. I expected a bump up from simply ensembling the various folds of a model, but my basis was so low that I doubted that would help much. With about two and a half weeks left I seriously considered moving on. Yet, I was curious to see whether some of my ideas would work and I figured that I could spend that time and pressed forward. </p>
<p>The next day, June 6th, I made my first submission, mainly to make sure I could do it successfully. In my second competition I waited to submit until I had a model worth submitting and I struggled for several days to get it right with only 5 submissions per day to run tests, while the competition deadline was fast approaching. Since then, I like to get that out of the way as soon as possible. I wasn’t yet ready to submit anything other than single models, so that’s what I did. From then on, every day I’d use my 5 submissions, if for nothing else, to get more data to understand the correlation between my CV and the LB.</p>
<p>On the second day, while continuing to train models, I completed the code for submitting an ensemble, and I submitted my first one. It was a simple mean of the models I had at the time. The following day I realized the submission had failed. I corrected a bug and made another submission right away. It didn’t take long for it to fail. I corrected the problem (a left over from the bug fix tests) and submitted again. A few hours later I realized it had also failed. I fixed another bug and repeated the submission. It was almost the end of the Kaggle day, so I submitted 2 more single models and with that completed my 3rd day of submissions. </p>
<p>The next morning, I woke up in solid silver medal territory with LB=0.8525. The CV was 0.8589, but given that I hadn’t submitted any ensemble yet, I had no idea how that would translate in the LB. I was a great and encouraging surprise. Little did I know that the PB for that first ensemble, consisting of a simple mean of my first models, was 0.8714; more than enough to win a gold medal. To think that just 3 days before I was ready to move on because essentially those same individual models compared very poorly with the LB. I guess that if a large number of teams make a large number of attempts, it’s likely that some of them will get a good score by chance (aka overfit). Hence, the LB scores may not reflect the quality of a team’s solutions. Only each team can assess what they have. Others can only guess. Persistence pays off, or in my case, curiosity.</p>
<p>The next day I got into gold territory and stayed there through the end. The competition was fierce. Two or three of the teams were always farther away at the top, but the others changed positions frequently. Given that my initial ideas were working well, I parked the other ones. I planned on trying 3 of them in the last days. Yet on the last Thursday I got a “nan” validation score during training. That was big wake-up call. I redirected my efforts to try to understand what had happened. For some reason, the prediction of the last items in the last validation batch was nan. Why the hell was that happening? There was nothing unusual in the input data. My focus changed completely. I realized that my gold medal position could be a mirage. What if the same thing was happening with my previous submissions and my PB score was nan? I spent most of my remaining time and submissions making my code as robust as possible. That seriously hampered my progress but felt like the right thing to do. I can now see that none of my submissions had a PB problem, but I didn’t know that, and I didn’t want to take any chances. I’m pretty happy with the outcome and if placed in the same circumstances, I’d do the same thing.</p>
<h1>solution</h1>
<p>The input data was small enough that I could review it in Excel. I noticed that small variations of the same words had an impact on the score. I further noticed that the same anchor and target would sometimes have different scores depending on the context. I compiled a list of ideas for postprocessing and checked how well some of them would work considering the whole data. I knew that applying rules to the whole data was a recipe for disaster, but this was just Excel and I figured that I could use that “knowledge” to build new features and properly test them with an L2 model. But why build hand-made features and use level 2 models when transformers are great at doing that? I decided to use them instead. That lead to my decision to create two types of models: one based on a text prediction and another on a token prediction.</p>
<h2>Encoding for text models</h2>
<p>Text models predict a single score for each text, which in this case means one target. I used the following encoding:</p>
<pre><code>anchor [SEP] target [SEP] context [SEP] n_additional_targets
</code></pre>
<p>The n_additional_targets are targets for the same anchor and context separated by “;”. I used a random permutation capped at size n: a parameter set mostly to 0 or 20. I further cut the encoding at max_len. Setting n=0 equated to the approach used by public notebooks. I was surprised that several teams used “;” as a target separator. I discarded “.” because the context used it. I didn’t feel it was necessary to add a new token and “;” felt aligned with what happens in a regular text and I expected the transformer to be well equipped to process it. I wonder what the reasoning was for others. My final submissions use both models with 0 additional targets and with 20.</p>
<h2>Encoding for token models</h2>
<p>Token models predict multiple targets at once. For each target, I generate an encoding using one of the following methods:</p>
<ul>
<li>1) balanced: the target goes first and is followed by one random permutation of the remaining targets (for the same anchor and context). The permutation is cut at max_len. The reason the target goes first is that in some cases the permutation is cut short and I wanted to ensure that each target would go in at least once.</li>
</ul>
<pre><code>anchor [SEP] target; permutation_other_targets [SEP] context [SEP]
</code></pre>
<ul>
<li>2) full: the target goes first and is followed by multiple random permutations of the remaining targets. The permutations are cut at max_len and are not repeated (implying that, for example, if we have only two additional targets, we only get two permutations). For a brief moment I considered whether I should avoid repeating sequences (this may ring a bell to Santa 2021 participants), but it didn’t seem worth the trouble 😊.</li>
</ul>
<pre><code>anchor [SEP] target; permutation_other_targets_1;..; permutation_other_targets_n [SEP] context [SEP]
</code></pre>
<ul>
<li>3) hybrid: this is similar to full in that it attempts to maximize the usage of the encoding length but does it differently. It works roughly as follows: the sequences “target; permutation_other_targets” are concatenated and used to fill encodings until we have as many encodings as targets. Any part of a permutation that is truncated in one encoding is used for the following encoding.</li>
</ul>
<pre><code>anchor [SEP] target_a; permutation_other_targets_a_part1 [SEP] context [SEP]
anchor [SEP] permutation_other_targets_a_part2; target_b; permutation_other_targets_b_part1 [SEP] context [SEP]
</code></pre>
<p>I kept the number of encodings equal to the number of targets. The intent was to maintain some balance between the targets with large permutations and those with smaller ones. I didn’t want the training to be driven by the former, especially because I didn’t notice any patterns in the residual errors of targets with long permutations versus those with small permutations. I included in the code a factor that would increase that ratio, but I never set it to anything other than 1. I intended to test other values and theorized that a ratio higher than 1 might help at least with the predictions (you get more of them). However, it would also increase the prediction time and I wasn’t sure it would be worthwhile versus using the submission time to ensemble more models. Looking back, I should have tried it at least once, but in the race against the clock (mine and the GPUs) it never felt like a priority.</p>
<p>For the training dataset I tried all 3 options and noticed that 1 and 3 performed much better, so I quickly discarded 2. As for the predictions, I expected that each model would perform better using the same approach as the training dataset, but for some reason balanced seemed to always perform better so I adopted it.</p>
<p>Each encoding provides token predictions for multiple targets. The final prediction for each target is computed by averaging the various predictions of its tokens. I tried 3 different averaging methods:</p>
<ol>
<li><p>Simple average: just a simple average if all tokens for all predictions</p></li>
<li><p>Length weighted average: average all tokens for all predictions using the length of the text represented by the token as the weight.</p></li>
<li><p>Score weighted average: average all tokens for all predictions using the prediction as a weight. Cases of high similarity are less common in the training data, so I wanted to prioritize those predictions. A simple way was to use the prediction as the weight. Before using it, I reviewed the range of predictions for the tokens and noticed that they tended not to vary by much.</p></li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982731%2F3700d696bf46f59eb1afb4687d841d6a%2Faveraging.jpg?generation=1656034953397615&alt=media" alt=""></p>
<p>The best approach tended to be sometimes 1 and sometimes 3. Approach 2 was consistently worse. Maybe because the presence of short stop words would sometimes have a huge impact on the score. I ended up taking the mean of 1 and 3.</p>
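<p>A small sketch of the three averaging schemes; here <code>token_preds</code> is assumed to hold all token-level predictions gathered for one target and <code>token_lengths</code> the length of the text piece behind each token (an illustration, not the exact prediction code):</p>
<pre><code>import numpy as np

def aggregate(token_preds, token_lengths):
    p = np.asarray(token_preds, dtype=float)
    w_len = np.asarray(token_lengths, dtype=float)

    simple = p.mean()                               # 1. simple average
    length_weighted = np.average(p, weights=w_len)  # 2. length-weighted average
    score_weighted = np.average(p, weights=p)       # 3. prediction-weighted average
    # the final prediction ended up being the mean of 1 and 3
    return (simple + score_weighted) / 2
</code></pre>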
<h1>model architecture</h1>
<p>The model architecture for text model consists of a transformer, an optional MLP layer that feeds of the transformer’s last hidden state, and a head. The latter is one of the following:</p>
<ol>
<li>attention head published by <a href="https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-w-w-b-train" target="_blank">@yasufuminakama</a>.</li>
<li>attention head masked by the target tokens.</li>
<li>attention head masked by the encoding’s attention_mask (ignore padding).</li>
<li>2 and 3 with equal weight</li>
<li>GlobalAveragePooling</li>
</ol>
<p>I believe my final submissions used mainly 2, but also 1 and 5. The presence of the MLP layer didn’t seem to make much difference in the score, though it helped a bit with diversity (slightly lower correlation with other models).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F982731%2F6e63468060f4acf09f533b21110d8fb8%2Fmodels.jpg?generation=1656034988650795&alt=media" alt=""></p>
<p>The model architecture for token models is roughly the same, except for the head, which doesn’t consolidate the predictions of the tokens. </p>
<p>I tried a few other variations that didn’t work. For example: adding some hand-made features to the model and feeding them to the MLP/head concatenated with the transformer's output; using multiple hidden layers of the transformer instead of just the last one.</p>
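<p>For reference, a hedged sketch of what a masked attention head (options 2 and 3 above) can look like: the usual attention pooling, but positions outside the mask (target tokens only, or the encoding's attention_mask) are pushed to -inf before the softmax. This is an illustration, not the exact model code.</p>
<pre><code>import torch
import torch.nn as nn

class MaskedAttentionHead(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(hidden_size, 512),
            nn.Tanh(),
            nn.Linear(512, 1),
        )

    def forward(self, last_hidden_state, mask):
        # mask: (batch, seq_len) with 1 on the tokens to pool over
        scores = self.attention(last_hidden_state)                  # (B, L, 1)
        scores = scores.masked_fill(mask.unsqueeze(-1) == 0, float("-inf"))
        weights = torch.softmax(scores, dim=1)                      # (B, L, 1)
        return torch.sum(weights * last_hidden_state, dim=1)        # (B, H)
</code></pre>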
<h1>ensembling</h1>
<p>I tried the following ensembling methods:</p>
<ul>
<li><p>Simple mean. This was the first one I tried - it can’t get easier than that 😊. In other competitions I used median or some combination of mean and median. I never tried it here because it didn’t feel right for a Pearson correlation. After the competition ended, I saw a comment refer to it and I wondered if my intuition had been wrong. It took me about a minute to open an Excel with all my oof predictions and verify that I wasn’t wrong. Considering how long it took to check that, I should have done it earlier rather than count only on my intuition.</p></li>
<li><p>Optuna. This didn’t take long because I already had most of the code from a previous competition. I started with 0 and 1 weights just to select the best combination of models and then moved to discrete intervals of 0.01. The first option performed better than the mean and the second even better.</p></li>
<li><p>Linear regression on full oof data: this was very simple to code and produced the best results. I tried using unrestricted coefficients and only positive ones. The former had a higher CV, but lower LB. The CV difference was small, so I took the conservative approach of using only the latter. At some point my best ensemble was a two layer regression: I split the models into groups defined by their type (text/token) and transformer; I built an ensemble for each group using linear regression and then used the results to ensemble the groups. As started having more diverse models I felt that wouldn’t help and stopped doing it (without actually testing it).</p></li>
<li><p>LGBM: to my surprise this performed markedly worse than linear regression. Possibly with some tunning and some hand-made features that would change, but that didn’t feel like a good use of my time.</p></li>
<li><p>Ridge: I used this mainly to validate the usage of linear regression with the whole data. I was concerned that the usage of the whole data could lead to overfitting and used ridge with multiple folds to try to assess that. I ended up with weights that were similar to those produced by linear regression and resulted in slightly lower CV and LB scores. <br>
After the initial tests, all my submissions used linear regression, with one exception that I’ll mention later.</p></li>
</ul>
<h1>final thoughts</h1>
<p>My two selected solutions were linear regression ensembles of various models combining the approaches previously described. The difference between them is that one excludes the model that generated the nan. I changed the way the predictions were processed to “ensure” the presence of nans would not be an issue, even if all predictions for a target were nans. That model was my best one, so I used it for my first selection, but I didn’t want to run the risk of having missed something and selected another ensemble without it. Both selections use different error processing mechanisms to cover my bases (when I was in college a professor mentioned that in its missions NASA ran simultaneously 3 versions of each program developed by different teams, so who am I to disagree 😊).</p>
<p>An interesting characteristic of my solution is that it’s not deterministic (unless you set seeds during prediction, but I didn’t see the point). In fact, for my last 3 submissions I used the same model (with different “error” handling approaches) and scored 0.8607, 0.8508 and 0.8610 on the LB. I should say that this large difference was an exception and, in fact, they all got the same PB.</p>
<p>My CV was a lot higher than the LB. That was a cause of concern because I didn’t know what was causing that. I first thought it might be because my oof predictions were based on different splits of the data and that might lead to some leakage. To test that I submitted a prediction of models trained on the same split and whose ensemble also used that split (I used ridge for that) and I got the same gap. With the benefit of seeing the PB scores, I now suppose that the gap was the result of an “unlucky” LB sample, because the PB is actually higher than the CV. In spite of the gap, the correlation of the CV and LB was remarkable. In most cases, if cv went up say 2 points (0.0002) the LB would go up the same 2 points. Sometimes that would change. The cause was always the same: a bug in the prediction. At times my score in the LB would stall for 1 to 3 days. That meant I was either writing code or more commonly fixing bugs. Often, I used submissions to test bug fixes instead or spending the GPU quota. It may not seem productive, but I had to conserve that quota and it balanced well with my job’s workload (as engaging as Kaggle is, at the end of the day it’s just a hobby for me). I’m pleased that in all my NLP competitions I managed to select the solution with the highest PB. Some would call it luck but I prefer to think of it as solid cv strategy and good judgement 😊.</p>
<p>I only used Kaggle resources which put a premium on any testing and tuning. Hence, you should take my conclusions with a grain of salt. Although my wish was to use a full training cycle with its various folds to test something, often I used only one fold for one epoch. I wouldn’t be surprised if some of my decisions were flawed.</p>
<p>That’s it. I wished I had tried some other ideas. I felt two of them had great potential, but I’m pretty happy with the results. There’s always a next time.</p>
<p>That's a good point. Yet, not at the root of the issue. I specifically excluded the last batch from the training dataset and included it in the validation dataset. However, I don't calculate the correlation of the training dataset, only of the validation's and at the end of a training epoch using the whole data.</p>
<p>In this case, the prediction itself was nan. I did some troubleshooting but couldn't find why that was happening. Rather than pursue further investigation and try to avoid it, I opted for dealing with it. This happened with a token model, which provides various predictions for the same target. In the cases I observed there we still plenty of predictions available to get a got prediction. Moreover, if the number of predictions was below a threshold, I could repeat the prediction for the applicable targets and get more predictions. That was never necessary, but to test it, I forced some reruns of the prediction and they worked fine.</p>
<p>You are right! Randomness wasn't the key factor, but I found it interesting that the results were non deterministic. 😃</p>
<p>Also, I assumed that applying random permutations would help with learning and generalization, versus keeping the same order of the remaining targets, as some other solutions appear to have done. For example:</p>
<ul>
<li>Same order<br>
anchor [SEP] target 1; target 2; target 3; target 4; target 5 [SEP] context <br>
anchor [SEP] target 2; target 1; target 3; target 4; target 5 [SEP] context <br>
anchor [SEP] target 3; target 1; target 2; target 4; target 5 [SEP] context <br>
anchor [SEP] target 4; target 1; target 2; target 3; target 5 [SEP] context <br>
anchor [SEP] target 5; target 1; target 2; target 3; target 4 [SEP] context </li>
<li>Random<br>
anchor [SEP] target 1; target 3; target 5; target 4; target 2 [SEP] context <br>
anchor [SEP] target 2; target 5; target 3; target 1; target 4 [SEP] context <br>
anchor [SEP] target 3; target 2; target 4; target 1; target 5 [SEP] context <br>
anchor [SEP] target 4; target 5; target 3; target 1; target 2 [SEP] context <br>
anchor [SEP] target 5; target 4; target 2; target 1; target 3 [SEP] context </li>
</ul>
<p>I must confess that I didn't measure that though, so it's purely based on my intuition.</p> |
U.S. Patent Phrase to Phrase Matching | 80th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you to Kaggle and U.S. Patent Phrase to Phrase Matching's host for providing this opportunity.</p>
<h1>Summary</h1>
<p>CV & privateLB improved by fold8</p>
<table>
<thead>
<tr>
<th></th>
<th>fold</th>
<th>CV</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>Best publicLB</td>
<td>4</td>
<td>0.84978</td>
<td>0.8502</td>
<td>0.8616</td>
</tr>
<tr>
<td>Best CV</td>
<td>8</td>
<td>0.85286</td>
<td>0.8490</td>
<td>0.8633</td>
</tr>
</tbody>
</table>
<h1>Worked for me</h1>
<ul>
<li>Add 'context' to train&test dataset: anchor + target + context + context_text</li>
<li>Special tokens change</li>
</ul>
<ol>
<li>[SEP] : anchor + [SEP]+ target + [SEP] + context + [SEP] + context_text</li>
<li>[SEP]+[cpc] : anchor + [SEP]+ target +[cpc] + context + [cpc] + context_text</li>
<li>[MASK] : anchor + [MASK] + target + [MASK] + context + [MASK] + context_text</li>
<li>' ' : anchor + ' ' + target + ' ' + context + ' ' + context_text</li>
</ol>
<ul>
<li>Model: deberta-v3-large, bert-for-patents, funnel</li>
<li>Custom head : attention head, cls embedding</li>
<li>CV : Group by anchor and stratify by score (a sketch follows this list)</li>
<li>Fold 8 : Public LB did not change, but CV went up</li>
<li>Label smoothing</li>
<li>Mixout</li>
<li>Ensemble : Nelder-Mead</li>
</ul>
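<p>A hedged sketch of the CV scheme above using sklearn's StratifiedGroupKFold (an illustration, not the exact training code):</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

def make_folds(train, n_folds=8, seed=42):
    train = train.copy().reset_index(drop=True)
    # scores are discrete (0, 0.25, 0.5, 0.75, 1.0), so they work directly as strata
    labels = (train["score"] * 4).astype(int)
    sgkf = StratifiedGroupKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for fold, (_, val_idx) in enumerate(sgkf.split(train, labels, groups=train["anchor"])):
        train.loc[val_idx, "fold"] = fold
    return train
</code></pre>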
<h1>Ensemble</h1>
<p>Public 0.8490, Private 0.8633</p>
<table>
<thead>
<tr>
<th></th>
<th>model</th>
<th>special tokens</th>
<th>custom head</th>
<th>weight</th>
<th>CV</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>deberta-v3-large</td>
<td>[MASK]</td>
<td>attention head</td>
<td>0.0976364</td>
<td>0.834336942</td>
</tr>
<tr>
<td>2</td>
<td>deberta-v3-large</td>
<td>' '</td>
<td>cls embedding</td>
<td>0.1161994</td>
<td>0.828407893</td>
</tr>
<tr>
<td>3</td>
<td>bert-for-patents</td>
<td>' '</td>
<td>attention head</td>
<td>0.1245703</td>
<td>0.824681836</td>
</tr>
<tr>
<td>4</td>
<td>deberta-v3-large</td>
<td>[MASK]</td>
<td>cls embedding</td>
<td>0.1721244</td>
<td>0.834253562</td>
</tr>
<tr>
<td>5</td>
<td>funnel-transformer/large</td>
<td>[SEP]</td>
<td>attention head</td>
<td>0.1158813</td>
<td>0.819553024</td>
</tr>
<tr>
<td>6</td>
<td>bert-for-patents</td>
<td>[SEP]+[cpc]</td>
<td>attention head</td>
<td>0.1289362</td>
<td>0.825093685</td>
</tr>
<tr>
<td>7</td>
<td>deberta-v3-large</td>
<td>[SEP]+[cpc]</td>
<td>attention head</td>
<td>0.1276247</td>
<td>0.832513801</td>
</tr>
<tr>
<td>8</td>
<td>bert-for-patents</td>
<td>' '</td>
<td>cls embedding</td>
<td>0.0981573</td>
<td>0.823069904</td>
</tr>
</tbody>
</table>
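<p>A hedged sketch of the Nelder-Mead weight search used for the weighted average above (assumptions: <code>oof</code> is an (n_samples, n_models) array of out-of-fold predictions and <code>y</code> the true scores; not the exact ensembling code):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def find_weights(oof, y):
    def neg_pearson(w):
        return -np.corrcoef(oof @ w, y)[0, 1]

    x0 = np.ones(oof.shape[1]) / oof.shape[1]  # start from equal weights
    res = minimize(neg_pearson, x0=x0, method="Nelder-Mead")
    return res.x / res.x.sum()                 # optionally normalise to sum to 1
</code></pre>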
<h1>Not Worked for me</h1>
<ul>
<li>Pseudo labeling</li>
<li>Data augmentation</li>
<li>clean text(lower case, symbol removal, etc.)</li>
<li>multi sample dropout etc.</li>
</ul>
<p>Sorry, I made a mistake and have corrected it.<br>
Thanks for letting me know!</p>
<p>Thank you!</p>
<p>Unfortunately, I did not record any single LB scores for the 8 models.</p>
<p>Thank your comment!</p>
<p>It is one of the Transformer output formats.<br>
Here is a <a href="https://www.kaggle.com/code/rhtsingh/utilizing-transformer-representations-efficiently" target="_blank">kernel</a> for your reference, if you are interested.</p> |
U.S. Patent Phrase to Phrase Matching | 85th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h1>85th place solution</h1>
<p>First, I would like to thank competition host and my pair <a href="https://www.kaggle.com/kazuakiohmori" target="_blank">@kazuakiohmori</a>. We worked hard together for two weeks, and finally we shaked up to silver medal zone.</p>
<h1>Summary</h1>
<p>We did two final submissions, 022+026+033, and 022+026+033+PatentSBERTa035, and our final best solution, which is the former, is composed of</p>
<ul>
<li>preprocess: lower case, and cleaning the cpc context texts</li>
<li>exp022: DeBERTa-v3-large, and additional special token "[s]"</li>
<li>exp026: almost same as exp022, and gradient accumulation steps = 2</li>
<li>exp033: bert-for-patents</li>
<li>ensemble: LightGBM stacking</li>
</ul>
<p>We followed the baseline by <a href="https://www.kaggle.com/yasufuminakama" target="_blank">@yasufuminakama</a>. <a href="https://www.kaggle.com/code/yasufuminakama/pppm-deberta-v3-large-baseline-w-w-b-train" target="_blank">here</a></p>
<h1>Preprocess</h1>
<p>A lot of cpc context texts contained many semicolons ';', but this seemed like unnatural usage, so I replaced them with periods '.', which boosted CV by +0.0034 and LB by +0.0029.</p>
<h1>Input texts</h1>
<p>While BERT families expect the input to be one or two texts, cpc context texts were effective in this competition, so we needed to handle three texts: anchor, target, and cpc text.</p>
<p>To deal with this situation, we introduced additional special token "[s]". Our input is like below.</p>
<pre><code>anchor [s] target [s] cpc context text
</code></pre>
<p>This boosted CV + 0.0013.</p>
<h1>Stacking</h1>
<p>We used stacking for the ensemble. The main reason is that the number of tokens in the anchor, target, and cpc context texts varies, and this might affect the difficulty of each input. Some inputs were like word-to-word matching, others were like sentence-to-sentence matching. GBDT can use this information as features, so we decided to use stacking. Be aware that the number of tokens depends on which tokenizer is used, which is why we made the features for each model. The features look like below.</p>
<pre><code>[pred_expA, anchor_len_expA, target_len_expA, ..., pred_expB, anchor_len_expB, ...]
</code></pre>
<p>This feature engineering boosted CV +0.0030, and LB +0.0019, even for single model.</p>
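<p>A hedged sketch of the LightGBM stacking step (assumptions: <code>X</code> is a dataframe of the per-model features shown above and <code>y</code> an array of the true scores; this is an illustration, not the exact code in the repository):</p>
<pre><code>import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import KFold

def stack_with_lgbm(X, y, n_splits=4, seed=42):
    oof = np.zeros(len(y))
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        reg = LGBMRegressor(n_estimators=500, learning_rate=0.05)
        reg.fit(X.iloc[tr_idx], y[tr_idx])
        oof[va_idx] = reg.predict(X.iloc[va_idx])
    return oof
</code></pre>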
<h1>Not worked</h1>
<ul>
<li>MLM</li>
<li>Pseudo Labeling for train.csv <a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/327288" target="_blank">(discussion)</a></li>
<li>Multi class classification</li>
<li>Architecture change, like average pooling and layer normalization not only using the [CLS] token</li>
<li>Parameter tuning, mainly changed learning rate and warmup</li>
<li>DeBERTa-v2-xlarge</li>
</ul>
<h1>Code</h1>
<p><a href="https://github.com/Y-Haneji/kaggle_pppm" target="_blank">GitHub</a></p>
<p>Some docker files is included.</p>
<p>How to make the feature about the number of tokens.</p>
<pre><code>def tokenizer_feature_engineering(dataframe, tokenizer):
    # tok_len(sentence, tokenizer) is a helper defined elsewhere in the repo;
    # it presumably returns the number of tokens the tokenizer produces.
    def _tok_len(sentence):
        return tok_len(sentence, tokenizer)

    df = dataframe.copy()
    basecols = df.columns
    df['anchor_tok_len'] = df['anchor'].map(_tok_len)
    df['target_tok_len'] = df['target'].map(_tok_len)
    df['context_tok_len'] = df['context_text'].map(_tok_len)
    df['input_len'] = df['anchor_tok_len'] + df['target_tok_len'] + df['context_tok_len'] + 4
    df['len_anc_tgt_diff'] = df['anchor_tok_len'] - df['target_tok_len']
    df['len_anc_tgt_div'] = df['anchor_tok_len'] / df['target_tok_len']
    df['len_anc_cnt_diff'] = df['anchor_tok_len'] - df['context_tok_len']
    df['len_anc_cnt_div'] = df['anchor_tok_len'] / df['context_tok_len']
    df['len_tgt_cnt_diff'] = df['target_tok_len'] - df['context_tok_len']
    df['len_tgt_cnt_div'] = df['target_tok_len'] / df['context_tok_len']
    # new feature and id columns are returned
    usecols = [col for col in df.columns if col not in basecols]
    usecols.append('id')
    return df[usecols]
</code></pre>
<p><a href="https://www.kaggle.com/fightingmuscle" target="_blank">@fightingmuscle</a> <br>
Thank your for your comment and for finding my solution!</p> |
U.S. Patent Phrase to Phrase Matching | 8th place solution: Predicting Targets at Once Led Us to Gold | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I've published a notebook for training a token classification model.<br>
I hope this will help you.</p>
<p><a href="https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place" target="_blank">https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place</a></p>
<p>Thank you!<br>
Do you mean the train code? As for the inference code, <a href="https://www.kaggle.com/masakiaota" target="_blank">@masakiaota</a> has already published it.<br>
<a href="https://www.kaggle.com/code/masakiaota/8th-place-solution-infer/notebook?scriptVersionId=98876654" target="_blank">https://www.kaggle.com/code/masakiaota/8th-place-solution-infer/notebook?scriptVersionId=98876654</a></p>
<p>If you'd like to check the train code, I'll publish it later.</p>
<p>As for the token classification models, you are correct.<br>
The length of train data is the same as the number of unique anchors.</p>
<p>Thank you!<br>
As for text classification models, we didn't aggregate train data by the anchor.<br>
Take a look in detail at the figure of a text classification model. There's just one target word, "alloys".<br>
So, the length of train data is about 36473 * 3 / 4 for one fold.</p>
<p>First of all, we would like to thank the competition organizers and the Kaggle platform for hosting such an exciting competition. And I also want to give thanks to my teammates. Without them, I would have given up a week ago and couldn't have leveraged my idea efficiently.</p>
<p>I learned a lot from this competition and acquired some general knowledge that can be applied to other NLP tasks.</p>
<p>We ended up with 8th place (both on public/private LB) and I'd like to summarize our solution and share some trials that didn’t work for us.</p>
<p>The inference notebook is available <a href="https://www.kaggle.com/code/masakiaota/8th-place-solution-infer/notebook?scriptVersionId=98876654" target="_blank">here</a>.</p>
<h1>Overview</h1>
<p>We used 6 models trained with BCELoss and just averaged with different weights as the ensemble.<br>
<img src="https://i.imgur.com/O4Knufg.png" alt="overview"></p>
<h1>Model</h1>
<h2>Token Classification Model</h2>
<p><img src="https://i.imgur.com/LUdZNfo.png" alt="token_classification_model"></p>
<p>This competition aims for estimating the semantic similarity between specific word pair (anchor and target) under specific context. There are multiple targets to compare with specific anchor.</p>
<p>So, we assumed that we could use three kinds of information to predict the similarity.</p>
<p>They are</p>
<ol>
<li><p>semantic relativity between anchor and target</p></li>
<li><p>semantic relativity between word pair and context</p></li>
<li><p>semantic relativity between targets that are supposed to be compared with same anchor and specific target</p></li>
</ol>
<p>We racked our brains and defined the input as below, then fed it to the model.</p>
<p><img src="https://i.imgur.com/1SPZC9R.png" alt="anchorwise_input"></p>
<p>[TAR] is a special token that we added to let the model recognize the positions of each target's tokens.</p>
<p>This approach made a huge improvement on the score and made train/inference time shorter because the model can infer multiple anchor-target pairs at once.</p>
<p>Public LB: 0.8380(out of medal zone) -> 0.8535(silver medal zone)</p>
<p>Private LB: 0.8526(out of medal zone) -> 0.8656(silver medal zone)</p>
<p>EDIT: <br>
I published a notebook for training a token classification model.<br>
<a href="https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place" target="_blank">https://www.kaggle.com/code/skraiii/pppm-tokenclassificationmodel-train-8th-place</a></p>
<h2>Text Classification Model</h2>
<p><img src="https://i.imgur.com/oFgYXk1.png" alt="text_classification_model"></p>
<p>We also trained models as a text classification task. Probably most of the competitors took this approach, but we added a little trick. We used only attention output corresponding to the CLS token. In our experiment, this made the model learn faster and improved the score. Although this model performs lower cv than the token classification model, it contributes the ensembling performance.</p>
<h1>Train</h1>
<h2>CV Strategy</h2>
<p>We used StratifiedGroupKFold(n_folds=4) and made train data stratified by score, grouped by anchor.</p>
<h2>Target Shuffle Augmentation</h2>
<p>We defined an anchor-wise input that has multiple targets, so we augmented the data by shuffling the targets for every sample (a rough sketch follows the figure below). This can prevent the model from memorizing the train samples themselves.</p>
<p><img src="https://i.imgur.com/kh7qt67.png" alt="target_shuffle_aug"></p>
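<p>A rough illustration of the augmentation; the exact input template is the one shown in the figures above, and the layout below (anchor, context, then [TAR]-prefixed targets) is an assumption made for the sketch only:</p>
<pre><code>import random

def build_anchorwise_input(anchor, context_text, targets, shuffle=True):
    targets = list(targets)
    if shuffle:                      # a new target order for every sample / epoch
        random.shuffle(targets)
    target_part = "".join(f"[TAR]{t}" for t in targets)
    return f"{anchor}[SEP]{context_text}[SEP]{target_part}"
</code></pre>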
<h2>AWP(Adversarial Weight Perturbation)</h2>
<p>This adversarial training method boosted our CV score. We modified the code of AWP made by <a href="https://www.kaggle.com/currypurin" target="_blank">@currypurin</a>.</p>
<p>The hyper parameters are very important. The primary hyper parameter is <code>adv_lr</code>. In past competitions, winners often adjusted <code>adv_eps</code>. However, after reading the original paper carefully, we concluded that <code>adv_lr</code> is more important.</p>
<p>In our understanding, <code>adv_eps</code> and <code>adv_lr</code> can be illustrated as the following figure. (However, we've had some experiences that are a little different from the expected behavior, so there may be a mistake somewhere.)</p>
<p>AWP improved the score by about 0.005 for the text classification models and by about 0.01 for the token classification models.</p>
<p><img src="https://i.imgur.com/64moPC7.png" alt="awp"></p>
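<p>A minimal AWP sketch in the spirit of public Kaggle implementations (not the exact modified code used here; the hyper-parameter values are placeholders):</p>
<pre><code>import torch

class AWP:
    def __init__(self, model, adv_param="weight", adv_lr=1e-4, adv_eps=1e-2):
        self.model, self.adv_param = model, adv_param
        self.adv_lr, self.adv_eps = adv_lr, adv_eps
        self.backup = {}

    def perturb(self):
        # Call after loss.backward(): step each weight along its gradient direction.
        for name, p in self.model.named_parameters():
            if p.requires_grad and p.grad is not None and self.adv_param in name:
                self.backup[name] = p.data.clone()
                grad_norm = torch.norm(p.grad)
                if grad_norm != 0:
                    r = self.adv_lr * p.grad / grad_norm * torch.norm(p.data)
                    p.data.add_(r)
                    # keep the perturbation inside the adv_eps "ball" around the weight
                    bound = self.adv_eps * torch.abs(self.backup[name])
                    p.data = torch.min(torch.max(p.data, self.backup[name] - bound),
                                       self.backup[name] + bound)

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Typical use inside a training step:
#   loss = criterion(model(**batch), y); loss.backward()
#   awp.perturb()                                   # move to the adversarial point
#   adv_loss = criterion(model(**batch), y); adv_loss.backward()
#   awp.restore()                                   # put the original weights back
#   optimizer.step(); optimizer.zero_grad()
</code></pre>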
<h2>Hyper Parameters Tuning</h2>
<p>The combination of a small batch_size (like <code>2</code>) and a medium lr (like <code>1e-5</code>) gave the best local CV for the token classification model. The important combinations of parameters were thoroughly explored, as in the following figures.</p>
<p>This (almost) comprehensive hyper-parameter tuning improved the score by about 0.002.</p>
<p><img src="https://i.imgur.com/8vVjGqY.png" alt="hyper_param_tuning1"></p>
<p><img src="https://i.imgur.com/GLHU1bE.png" alt="hyper_param_tuning2"></p>
<h1>Inference</h1>
<h2>TTA(Test Time Augmentation)</h2>
<p>We further improved the score with TTA. We shuffled the target positions twice per anchor, just as we did during training, and took the mean of the two predictions.<br>
Finally, this led us to the gold medal zone: 8th place.</p>
<p>0.8535(silver medal zone) -> 0.8555(gold medal zone)</p>
<p>Note: <br>
The scores shown above are accomplished by just a single deberta v3 large model.</p>
<h2>Ensemble with Constrained Least Squares Method</h2>
<p>At the end, we had created about 20 trained models. This means we had to optimize the weights for the averaging ensemble. The problem was that we had to search for the best weights in almost no time (in fact, we had just about 15 hours left before the deadline when we finally obtained all of the trained models). Taking this problem into account, we used the constrained least squares method. </p>
<p>1.Suppose you want to find the best weights from the following data: y are labels, X are oofs.</p>
<pre><code>import numpy as np

y = np.array([0.5, 0.75, 0.25, 1.0, 0.5])
X = np.array([
[0.52, 0.9, 0.41, 0.99, 0.51],
[0.52, 0.7, 0.41, 0.99, 0.51],
[0.48, 0.73, 0.12, 0.97, 0.47],
[0.45, 0.35, 0.25, 0.9, 0.49],
])
</code></pre>
<p>2.First, let's simply look at the MSE for each row of X.</p>
<pre><code>np.square(X - y).mean(axis=1)
#=> array([0.00974, 0.00574, 0.0039 , 0.03452])
np.square(X - y).mean(axis=0).mean(axis=0)
#=> 0.013475
</code></pre>
<p>3.Then, compute the coefficients with linear regression.</p>
<pre><code>from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X.T, y)
reg.coef_
#=> array([ 0.43575566, -0.05397578, 0.46076883, 0.21063718])
X.T @ reg.coef_
#=> array([0.51448131, 0.76448131, 0.26448131, 1.01448131, 0.51448131])
np.square(X.T @ reg.coef_ - y).mean(axis=0)
#=> 0.00020970822203200185
</code></pre>
<p>Voila! Unfortunately, some coefficients can have negative values with the vanilla linear regression. Instead, we use the least-squares method with non-negative value constraints.</p>
<p>4.Fortunately, scipy seems to have a solver for that. Let's find the weights as soon as possible.</p>
<pre><code>import scipy.optimize

weights, rnorm = scipy.optimize.nnls(X.T, y)
weights
# => array([0.29260857, 0.08404164, 0.52487508, 0.12761238])
X.T @ weights
# => array([0.50522372, 0.75 , 0.24931469, 0.99686367, 0.50131296])
np.square(X.T @ weights - y).mean(axis=0)
# => 7.863453999510499e-06
</code></pre>
<p>This method enabled us to easily find the optimal combined weights for ensembling, just within a minute!</p>
<h1>What didn’t work well</h1>
<ul>
<li>increasing the number of TTA</li>
<li>adding multi sampled dropout layer</li>
<li>Custom Losses<ul>
<li>optimizing Pearson Loss</li>
<li>optimizing MSE loss (although stable)</li>
<li>mixed above</li></ul></li>
<li>MLM for patent texts (the size was about 4 million)</li>
<li>augmenting data<ul>
<li>back translation(Japanese, Korean, Chinese, French, German, Spanish)</li>
<li>position swapping of anchor and target</li></ul></li>
<li>adding special tokens that denote each context</li>
<li>pseudo labeling for the training data</li>
<li>increasing n_folds (It worked a little but too computational)</li>
<li>other pretrained models<ul>
<li>AI-Growth-Lab/PatentSBERTa</li>
<li>microsoft/deberta-v2-xlarge</li>
<li>microsoft/deberta-v2-xxlarge</li></ul></li>
</ul>
<p>—</p>
<p>This post is written by all the members of my team. </p>
<p><a href="https://www.kaggle.com/masakiaota" target="_blank">@masakiaota</a> <a href="https://www.kaggle.com/taromasuda" target="_blank">@taromasuda</a> <a href="https://www.kaggle.com/skraiii" target="_blank">@skraiii</a> <a href="https://www.kaggle.com/hotchpotch" target="_blank">@hotchpotch</a> </p>
<p><img src="https://i.imgur.com/xO74tqU.png" alt="team_n"></p>
<p>Thank you for your question.</p>
<p>About pooling of preds: <br>
Correct. We took mean value of multiple preds per target. (We also tried taking max value of them, but it didn't work.)</p>
<p>About token length:<br>
Your concern is reasonable. One of the anchors has too many targets and each target consist of many tokens.<br>
So, we took large max_len value as <a href="https://www.kaggle.com/goldenlock" target="_blank">@goldenlock</a> wrote. (Thank you, gezi!)</p>
<p>I'm so sorry for the abstract figure…<br>
The reason of the loss difference among tokens that correspond to same target is just what <a href="https://www.kaggle.com/mingccc" target="_blank">@mingccc</a> wrote down. (thanks 3sigma.)</p>
<p>You are so sharp! Thank you for pointing it out.<br>
I replaced the image!</p>
<p>Nice. That's my favorite movie.</p> |
U.S. Patent Phrase to Phrase Matching | My First Non-Tabular Competition. And Its a Silver!!!! 🥈 (41st place Solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: U.S. Patent Phrase to Phrase Matching
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi all ,</p>
<p>First of all, I want to thank the Satsyil Corp and Kaggle team for organising such an interesting and somewhat beginner-friendly competition; I learned a lot, especially the magic thing. <br>
This was my first non-tabular competition in which I participated with serious intent (I had casually submitted a forked notebook to the PetFinder competition, so I don't consider that one my first😜). It was also my first experience with NLP. </p>
<p>I felt extremely happy when I saw my ranking went from bronze medal area in public lb to silver in private lb (jump of 65 ranks).</p>
<p>I will say that hard work, works💪.</p>
<p>As this was my first competition my primary focus was on:</p>
<ul>
<li>Understanding the Public Notebooks.</li>
<li>How to use Kaggle's GPU efficiently.</li>
<li>How to create a strong CV.</li>
<li>How to experiment faster and tune hyperparameters faster. Thanks to sir <a href="https://www.kaggle.com/jhoward" target="_blank">@jhoward</a> (<a href="https://www.kaggle.com/code/jhoward/iterate-like-a-grandmaster" target="_blank">code</a>)</li>
<li>Learn about Hugging Face 🤗.</li>
<li>Try different techniques of Ensemble.</li>
<li>Try different techniques of Post-Processing.</li>
</ul>
<h4>My Solution:</h4>
<p>Basically, as I was experimenting with various BERT variants and their inputs, I always saved their outputs.<br>
So when I was trying various ensembling techniques, thanks to <a href="https://www.kaggle.com/mobassir" target="_blank">@mobassir</a>, who posted <a href="https://www.kaggle.com/competitions/siim-isic-melanoma-classification/discussion/175614" target="_blank">this</a> ensembling technique by sir <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> in <a href="https://www.kaggle.com/competitions/us-patent-phrase-to-phrase-matching/discussion/327732" target="_blank">this</a> thread, I kept the models that helped increase the correlation with this technique and discarded the others (0.8491 public LB).</p>
<p>Later, thanks to <a href="https://www.kaggle.com/gaozhao" target="_blank">@gaozhao</a>, who released the <a href="https://www.kaggle.com/code/gaozhao/a-simple-ensemble-of-two" target="_blank">code</a> for ensembling just two models, which gave the same LB as my 10 models (0.8491), I got the idea of using electra-large as a backbone, which helped me reach 0.8504 LB.</p>
<h4>Post - Processing that worked!!! 😱</h4>
<p>Since the metric was correlation, many post-processing techniques did not help increase CV.<br>
Then, while reading solutions of previous NLP competitions, I learned about <a href="https://www.kaggle.com/competitions/commonlitreadabilityprize/discussion/258328" target="_blank">this</a> technique by <a href="https://www.kaggle.com/takoihiraokazu" target="_blank">@takoihiraokazu</a> using Nelder-Mead optimization.<br>
Here's the code I used:</p>
<pre><code>from scipy.optimize import minimize
import numpy as np

# md: the ensembled OOF predictions, TRUE: the ground-truth scores, and
# compute_metrics: the Pearson-correlation metric (all presumably defined
# earlier in the notebook).
def f(x):
    pred1 = md.copy()
    testy = md.copy()
    pred1[testy >= 0.85] = testy[testy >= 0.85] * x[0]
    pred1[(testy < 0.85) & (testy >= 0.75)] = testy[(testy < 0.85) & (testy >= 0.75)] * x[1]
    pred1[(testy < 0.75) & (testy >= 0.65)] = testy[(testy < 0.75) & (testy >= 0.65)] * x[2]
    pred1[(testy < 0.65) & (testy >= 0.5)] = testy[(testy < 0.65) & (testy >= 0.5)] * x[3]
    pred1[(testy < 0.5) & (testy >= 0.35)] = testy[(testy < 0.5) & (testy >= 0.35)] * x[4]
    pred1[(testy < 0.35) & (testy >= 0.25)] = testy[(testy < 0.35) & (testy >= 0.25)] * x[5]
    pred1[(testy < 0.25) & (testy >= 0.15)] = testy[(testy < 0.25) & (testy >= 0.15)] * x[6]
    pred1[(testy < 0.15)] = testy[(testy < 0.15)] * x[7]
    eval_pred = np.array(TRUE), pred1
    cor = compute_metrics(eval_pred)['pearson']
    return 1 - cor

result = minimize(f, [1, 1, 1, 1, 1, 1, 1, 1], method="Nelder-Mead")
</code></pre>
<p>This increased the public LB from 0.8504 to 0.8507, and I later saw that the private LB also went up by 0.0005+ every time I submitted with the post-processing versus without it.</p>
<p>That sums it all up; if you have any doubts, or if my explanation seems incomplete at any stage, feel free to ask in the comments.<br>
Thanks!!!!!!</p>
<p>Thankss <a href="https://www.kaggle.com/vad13irt" target="_blank">@vad13irt</a> </p>
<p>I forgot to mention <strong>solo</strong> part in the thread 😅</p>
<p>Thank you sir <a href="https://www.kaggle.com/cdeotte" target="_blank">@cdeotte</a> </p>
<p><a href="https://www.kaggle.com/vad13irt" target="_blank">@vad13irt</a> the same technique I mentioned?</p>
<p>Thankss <a href="https://www.kaggle.com/vadimtimakin" target="_blank">@vadimtimakin</a> 😀</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yeah I have mad googling skills lol. Btw congrats on becoming a GM!</p>
<p>Do you mean weights for WSIs and TMAs? I didn't use any weights for that. I assigned 6 class weights for cross entropy loss and the weights are </p>
<ul>
<li>HGSC: 2.621275</li>
<li>EC: 4.803730</li>
<li>CC: 6.493397</li>
<li>LGSC: 11.164087</li>
<li>MC: 10.148218</li>
<li>Other: 14.658537</li>
</ul>
<p>I used a single model for both image types. When I used all competition and external data for validation, WSI and TMA scores were very similar so I don't think it was overfitting.</p>
<p>There were two more submissions with higher private score but both of them were 0.58 too. I think those 0.58 subs are pretty close to each other.</p>
<p><img src="https://i.ibb.co/98ddh08/Screenshot-from-2024-01-04-16-24-46.png" alt="1"></p>
<p>I really don't have time to do it but I open sourced my external data if someone wants to do ablation studies with it.</p>
<p><a href="https://www.kaggle.com/datasets/gunesevitan/ubc-ocean-dataset?select=datasets" target="_blank">https://www.kaggle.com/datasets/gunesevitan/ubc-ocean-dataset?select=datasets</a></p>
<p>Thanks so much</p>
<p>Looks like we were on the same boat. I still don't understand how some teams broke that 0.5x barrier.</p>
<p>This was an interesting competition and I would like to thank my teammate <a href="https://www.kaggle.com/samfc10" target="_blank">@samfc10</a> and everyone involved with the organization of it.</p>
<p>This is a simple textbook solution that heavily relies on external TMA data and strong labels. There is nothing special or novel in this pipeline.</p>
<ul>
<li><a href="https://www.kaggle.com/code/gunesevitan/ubc-ocean-inference" target="_blank">Inference</a></li>
<li><a href="https://www.kaggle.com/code/gunesevitan/libvips-pyvips-installation-and-getting-started" target="_blank">libvips/pyvips Installation and Getting Started</a></li>
<li><a href="https://www.kaggle.com/code/gunesevitan/ubc-ocean-jpeg-dataset-pipeline" target="_blank">UBC-OCEAN - JPEG Dataset Pipeline</a></li>
<li><a href="https://www.kaggle.com/code/gunesevitan/ubc-ocean-eda" target="_blank">UBC-OCEAN - EDA</a></li>
<li><a href="https://www.kaggle.com/datasets/gunesevitan/ubc-ocean-dataset" target="_blank">UBC-OCEAN - Dataset</a></li>
<li><a href="https://github.com/gunesevitan/ubc-ovarian-cancer-subtype-classification-and-outlier-detection" target="_blank">GitHub Repository</a></li>
</ul>
<h2>1. Raw Dataset</h2>
<h3>WSI</h3>
<p>Masks of WSIs are resized to thumbnail sizes. Tiles of WSIs and masks are extracted from their thumbnails with stride of 384 and they are padded to 512. A MaxViT Tiny FPN model is trained on those padded tiles and masks. Segmentation model outputs are activated with sigmoid and 3x TTA (horizontal, vertical and diagonal flip) are applied after the activation.</p>
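<p>A minimal sketch of the sigmoid activation + 3x flip TTA step (tensor and model names are illustrative; "diagonal flip" is interpreted here as flipping both axes):</p>
<pre><code>import torch

@torch.no_grad()
def predict_tile_masks(model, x):
    """x: (N, 3, 512, 512) batch of padded tiles; returns TTA-averaged sigmoid masks."""
    preds = torch.sigmoid(model(x))
    preds += torch.sigmoid(model(torch.flip(x, dims=[-1]))).flip(dims=[-1])          # horizontal
    preds += torch.sigmoid(model(torch.flip(x, dims=[-2]))).flip(dims=[-2])          # vertical
    preds += torch.sigmoid(model(torch.flip(x, dims=[-1, -2]))).flip(dims=[-1, -2])  # diagonal
    return preds / 4
</code></pre>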
<p>Final segmentation mask prediction is blocky since the model was trained on tiles and merged later.</p>
<p><img src="https://i.ibb.co/jg24x1H/Screenshot-from-2024-01-04-09-28-01.png" alt="seg1"></p>
<p>Segmentation mask predictions are cast to 8-bit integer and upsampled to original WSI size with nearest neighbor interpolation.</p>
<p><img src="https://i.ibb.co/ZHjtfmY/Screenshot-from-2024-01-04-09-31-42.png" alt="seg2"></p>
<ul>
<li>WSIs and their mask predictions are cropped the maximum number of times with a stride of 1024.</li>
<li>Crops are sorted based on their mask areas in descending order</li>
<li>Top 16 crops are taken and the WSI label is assigned to them (see the sketch after this list)</li>
</ul>
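<p>A rough sketch of this crop selection (array names are illustrative; <code>wsi</code> and <code>mask</code> are assumed to be full-resolution numpy arrays):</p>
<pre><code>import numpy as np

def select_top_crops(wsi, mask, crop_size=1024, stride=1024, n_crops=16):
    """Crop on a regular grid and keep the crops with the largest predicted mask area."""
    candidates = []
    height, width = mask.shape[:2]
    for y in range(0, height - crop_size + 1, stride):
        for x in range(0, width - crop_size + 1, stride):
            area = mask[y:y + crop_size, x:x + crop_size].sum()
            candidates.append((area, y, x))
    candidates.sort(key=lambda c: c[0], reverse=True)  # descending mask area
    return [wsi[y:y + crop_size, x:x + crop_size] for _, y, x in candidates[:n_crops]]
</code></pre>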
<h3>TMA</h3>
<p>Rows and columns with low standard deviation are dropped from TMAs with the function below. The purpose of this preprocessing is to remove white regions and make WSIs and TMAs as similar as possible. Higher threshold values were dropping areas inside the tissue region, so the standard deviation threshold is set to 10.</p>
<pre><code># function name and integer arguments reconstructed from the description above
def crop_low_std_regions(image, threshold=10):
    # per-row / per-column standard deviations of the RGB image
    vertical_stds = image.std(axis=(1, 2))
    horizontal_stds = image.std(axis=(0, 2))
    # keep only rows and columns whose std exceeds the threshold (i.e. drop white background)
    cropped_image = image[vertical_stds > threshold, :, :]
    cropped_image = cropped_image[:, horizontal_stds > threshold, :]
    return cropped_image
</code></pre>
<p><img src="https://i.ibb.co/8jCyhgG/4134-crop.png" alt="seg2"></p>
<h2>2. Validation</h2>
<p>Multi-label stratified kfold is used as the cross-validation scheme. Dataset is split into 5 folds. <code>label</code> and <code>is_tma</code> columns are used for stratification.</p>
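<p>One possible implementation of that split with the iterative-stratification package (the dataframe and its columns are assumed to follow the competition CSV):</p>
<pre><code>import pandas as pd
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

df = pd.read_csv('train.csv')  # assumed to contain 'label' and 'is_tma' columns
y = pd.concat([pd.get_dummies(df['label']), df['is_tma'].astype(int)], axis=1).values

mskf = MultilabelStratifiedKFold(n_splits=5, shuffle=True, random_state=42)
df['fold'] = -1
for fold, (_, val_idx) in enumerate(mskf.split(df, y)):
    df.loc[val_idx, 'fold'] = fold  # default RangeIndex assumed
</code></pre>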
<h2>3. Models</h2>
<p>EfficientNetV2 small model is used as the backbone with a regular classification head.</p>
<h2>4. Training</h2>
<p>CrossEntropyLoss with class weights is used as the loss function. Class weights are calculated as n_samples / n_samples_of_ith_class.</p>
<p>AdamW optimizer is used with 0.0001 learning rate. Cosine annealing scheduler is used with 0.00001 minimum learning rate.</p>
<p>AMP is also used for faster training and regularization.</p>
<p>Each fold is trained for 15 epochs and epochs with the highest balanced accuracy are selected.</p>
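<p>A hedged sketch of this training setup (the dataloader, label encoding and device handling are assumptions for illustration):</p>
<pre><code>import numpy as np
import timm
import torch
import torch.nn as nn

device = 'cuda'
labels = np.asarray(train_labels)                          # integer-encoded labels (assumed)
counts = np.bincount(labels, minlength=6)
class_weights = torch.tensor(len(labels) / counts, dtype=torch.float32)  # n / n_ith_class

model = timm.create_model('tf_efficientnetv2_s', pretrained=True, num_classes=6).to(device)
criterion = nn.CrossEntropyLoss(weight=class_weights.to(device))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=15, eta_min=1e-5)
scaler = torch.cuda.amp.GradScaler()                       # AMP

for epoch in range(15):
    for images, targets in train_loader:                   # train_loader is assumed
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = criterion(model(images.to(device)), targets.to(device))
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()
</code></pre>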
<p>Training transforms are listed below (a code sketch follows the list):</p>
<ul>
<li>Resize TMAs to size 1024 (WSI crops are already 1024 sized)</li>
<li>Magnification normalization (resize WSI to 512 and resize it back to 1024 with a random chance)</li>
<li>Horizontal flip</li>
<li>Vertical flip</li>
<li>Random 90-degree rotation</li>
<li>Shift scale rotate with 45-degree rotations and mild shift/scale augmentation</li>
<li>Color jitter with strong hue and saturation</li>
<li>Channel shuffle</li>
<li>Gaussian blur</li>
<li>Coarse dropout (cutout)</li>
<li>ImageNet normalization</li>
</ul>
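<p>A hedged albumentations sketch of the list above; exact probabilities and magnitudes are not given, so the values here are placeholders:</p>
<pre><code>import albumentations as A

train_transforms = A.Compose([
    A.Resize(1024, 1024),                              # TMAs resized to match the 1024px WSI crops
    A.Downscale(scale_min=0.5, scale_max=0.5, p=0.5),  # magnification normalization (1024 -> 512 -> 1024)
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=45, p=0.5),
    A.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.5, hue=0.5, p=0.5),
    A.ChannelShuffle(p=0.25),
    A.GaussianBlur(p=0.25),
    A.CoarseDropout(p=0.25),
    A.Normalize(),                                     # ImageNet mean/std
])
</code></pre>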
<h2>5. Inference</h2>
<p>5 folds of the EfficientNetV2 small model are used in the inference pipeline. The average of the 5 folds is taken after predicting with each model.</p>
<p>3x TTA (horizontal, vertical and diagonal flip) is applied and the average of the predictions is taken.</p>
<p>16 crops are extracted for each WSI and the average of their predictions is taken.</p>
<p>The average pooling order for a single image is as follows (sketched in code after the list):</p>
<ul>
<li>Predict original and flipped images, activate predictions with softmax and average</li>
<li>Predict with all folds and average</li>
<li>Predict all crops and average if WSI </li>
</ul>
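<p>A sketch of that averaging order (function and variable names are illustrative; <code>models</code> holds the 5 fold checkpoints):</p>
<pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_image(models, crops):
    """crops: (n_crops, 3, H, W) -- 16 crops for a WSI, a single resized image for a TMA."""
    flips = [lambda t: t,
             lambda t: torch.flip(t, dims=[-1]),
             lambda t: torch.flip(t, dims=[-2]),
             lambda t: torch.flip(t, dims=[-1, -2])]
    fold_preds = []
    for model in models:                                            # 2) average over folds
        tta = [F.softmax(model(f(crops)), dim=-1) for f in flips]   # 1) softmax, then TTA average
        fold_preds.append(torch.stack(tta).mean(dim=0))
    preds = torch.stack(fold_preds).mean(dim=0)                     # (n_crops, n_classes)
    return preds.mean(dim=0)                                        # 3) average over crops (WSI)
</code></pre>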
<h2>6. Change of Direction</h2>
<p>The model had an 86.70 OOF score (TMA: 84, WSI: 86.59) at that point, but the LB score was 0.47 (private 0.52, 32nd-42nd place), which was very low.</p>
<p><img src="https://i.ibb.co/tQRgZd0/wsi-confusion-matrix.png" alt="wsi_confusion_matrix1"></p>
<p><img src="https://i.ibb.co/YQPDY2D/tma-confusion-matrix.png" alt="tma_confusion_matrix1"></p>
<p><img src="https://i.ibb.co/zhsGR9x/confusion-matrix.png" alt="confusion_matrix1"></p>
<p>I noticed some people were getting better LB scores with worse OOF scores and I was stuck at 0.47 for a while. I had worked on Optiver competition for 2 weeks and came back. I decided to dedicate my time to finding external data because breaking the entire pipeline and starting from scratch didn't make sense.</p>
<h2>7. External Data</h2>
<h3>UBC Ocean</h3>
<p>The most obvious one is the test set image that is classified as HGSC confidently. 16 crops are extracted from that image and HGSC label is assigned to them.</p>
<h3>Stanford Tissue Microarray Database</h3>
<p>134 ovarian cancer TMAs are downloaded from <a href="https://tma.im/cgi-bin/viewArrayBlockList.pl" target="_blank">here</a>.</p>
<p>Classes are converted with this mapping</p>
<pre><code># dict name and target labels were lost in the post's formatting;
# the right-hand values below are inferred from the histotype names and the counts table further down
label_mapping = {
    'ovary spindle cell fibroma ovary': 'Other',
    'papillary serous': 'HGSC',
    'endometrioid': 'EC',
    'precursor lymphoblastic': 'Other',
    'adeno': 'Other',
    'clear cell': 'CC',
    'mucinous': 'MC',
    'adeno mucinous': 'MC',
    'dysgerminoma': 'Other'
}
</code></pre>
<h3>kztymsrjx9</h3>
<p>This dataset is downloaded from <a href="https://data.mendeley.com/datasets/kztymsrjx9/1" target="_blank">here</a>. HGSC label is assigned to images in the Serous directory. Images in the Non_Cancerous directory are not used. 398 ovarian cancer TMAs are found here.</p>
<h3>tissuearray.com</h3>
<p>Screenshots of high resolution previews are taken from <a href="https://www.tissuearray.com/tissue-arrays/Ovary" target="_blank">here</a>. 1221 ovarian cancer TMAs are found here.</p>
<h3>usbiolab.com</h3>
<p>Screenshots of high resolution previews are taken from <a href="https://usbiolab.com/tissue-array/product/ovary" target="_blank">here</a>. 440 ovarian cancer TMAs are found here.</p>
<h3>proteinatlas.org</h3>
<p>Images are downloaded from <a href="https://www.proteinatlas.org/search/prognostic:ovarian+cancer;Favorable+AND+sort_by:prognostic+ovarian+cancer" target="_blank">here</a>. 376 ovarian cancer TMAs are found here.</p>
<h3>Summary</h3>
<p>Those were the sources where I found the external data.</p>
<table>
<thead>
<tr>
<th></th>
<th>Images</th>
<th>Type</th>
<th>HGSC</th>
<th>EC</th>
<th>CC</th>
<th>LGSC</th>
<th>MC</th>
<th>Other</th>
</tr>
</thead>
<tbody>
<tr>
<td>UBC Ocean Public Test</td>
<td>16</td>
<td>WSI</td>
<td>16</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Stanford Tissue Microarray Database</td>
<td>134</td>
<td>TMA</td>
<td>37</td>
<td>11</td>
<td>4</td>
<td>0</td>
<td>4</td>
<td>78</td>
</tr>
<tr>
<td>kztymsrjx9</td>
<td>398</td>
<td>TMA</td>
<td>100</td>
<td>98</td>
<td>100</td>
<td>0</td>
<td>100</td>
<td>0</td>
</tr>
<tr>
<td>tissuearray.com</td>
<td>1221</td>
<td>TMA</td>
<td>348</td>
<td>39</td>
<td>24</td>
<td>140</td>
<td>100</td>
<td>570</td>
</tr>
<tr>
<td>usbiolab.com</td>
<td>440</td>
<td>TMA</td>
<td>124</td>
<td>40</td>
<td>29</td>
<td>89</td>
<td>68</td>
<td>90</td>
</tr>
<tr>
<td>proteinatlas.org</td>
<td>376</td>
<td>TMA</td>
<td>25</td>
<td>155</td>
<td>0</td>
<td>63</td>
<td>133</td>
<td>0</td>
</tr>
</tbody>
</table>
<h2>8. Final Iteration</h2>
<p>Final dataset (including 16 crops per WSI) label distribution was like this</p>
<ul>
<li>HGSC: 4127</li>
<li>EC: 2252</li>
<li>CC: 1666</li>
<li>MC: 1066</li>
<li>LGSC: 969</li>
<li>Other: 738</li>
</ul>
<p>and image type distribution was like this</p>
<ul>
<li>WSI (16x 1024 crops): 8224</li>
<li>TMA: 2594</li>
</ul>
<p>All the external data are concatenated to each fold's training set. Validation sets are not changed in order to get comparable results. The OOF score decreased from 86.70 to 83.85 but the LB score jumped to 0.54. I thought this jump was related to the Other class but the improvement wasn't good enough. That's when I thought the private test set could have more Other classes, which is very typical of Kaggle competitions. The twist of this competition was predicting TMAs and Other, so the private test set would likely have more of them. I decided to trust the LB and selected the submission with the highest LB score. That submission scored 0.54 on public and 0.58 on private.</p>
<p>Thanks, congrats on your strong finish too! I think I wouldn't finish in top 20 without the Other data.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 13th Place Solution for the UBC-OCEAN Competition | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We've put most of our efforts on TMA models.</p>
<p><a href="https://www.kaggle.com/quan0095" target="_blank">@quan0095</a> 0.42 with AAA for WSI<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fc504aaa8ac357370602af8e8d0c2fe88%2Ftma_only.png?generation=1704697928748728&alt=media" alt=""></p>
<p>Thanks <a href="https://www.kaggle.com/sergiosaharovskiy" target="_blank">@sergiosaharovskiy</a>, I would say it took around a few days to implement and a few hours of discussions with my teammate <a href="https://www.kaggle.com/raphaelbourgade" target="_blank">@raphaelbourgade</a> to find the correct settings to be close to real TMAs. Once the TMAs (around 23k) were generated we got better public LB results. To go beyond, we decided to review them manually to drop bad ones; it took (so) many hours for no benefit on public LB but +0.02 on private LB. </p>
<p>For the stain techniques, it's something regular in the histopathology domain. We've just combined existing known packages in an <code>Albumentations</code> class and selected the 25 TMAs in the train dataset as references. In a paper we've referenced in the "Sources" section, it is reported that stain augmentation is not so important and that the feature extractor is key. That's true, but based on our scores I can say stain augmentation helped to gain around +0.01.</p>
<p>True, but for MIL model stain was applied on 60% of generated TMA, 2/3 with 25 TMA references and 1/3 with a generic one. For Transformer/CNN models additional color augmentation was added too.</p>
<p>That's a good question. Our first generated TMAs were square (small dataset); then we had the idea to be closer to real TMAs, like in inference. Indeed, we've not evaluated this specific impact because we also improved the models at the same time, and it worked.</p>
<p>First of all we would like to thank Kaggle and the sponsors for this interesting research competition.</p>
<h1>Context</h1>
<ul>
<li>Business context: <a href="https://www.kaggle.com/competitions/UBC-OCEAN/overview" target="_blank">https://www.kaggle.com/competitions/UBC-OCEAN/overview</a></li>
<li>Data context: <a href="https://www.kaggle.com/competitions/UBC-OCEAN/data" target="_blank">https://www.kaggle.com/competitions/UBC-OCEAN/data</a></li>
</ul>
<h1>Overview of the approach</h1>
<p>Our final solution was a combination of separate high-scoring TMA and WSI models. For WSI models we’ve used self-supervised learning (SSL) pretrained feature extractors executed on 224-pixel tiles, followed by Multiple Instance Learning (MIL) models. Same for the TMA model, with additional regular Transformer and CNN backbones. Outlier detection takes place in post-processing and relies on both embedding distances and probability distributions.</p>
<h1>Detail of the submission</h1>
<p><strong>WSI models:</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe18dd035e408c85875e7b74d4862bc2e%2Fwsi_model.png?generation=1704326923139467&alt=media" alt=""><br>
<strong>TMA models:</strong><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F5ba1a216d5328e17c20cc55e866fcbb5%2Ftma_model.png?generation=1704326948322245&alt=media" alt=""><br>
<strong>Post processing:</strong><br>
TMA models have been trained with ArcFace loss and an ArcMarginProduct sub center module to try to separate embeddings space as much as possible per class. It allows detecting outliers based on a similarity distance. Our distance threshold has been fine tuned on public LB as we didn’t have any solid sample of real outlier/rare class. <a href="https://github.com/facebookresearch/faiss" target="_blank">Faiss</a> package has been used to find nearest neighbors.<br>
We also performed another thresholding based on probabilities distributions. When all probabilities are low enough we switch the predicted class as outlier. The threshold has been calibrated on CV to avoid more than 10% outliers and checked on public LB.</p>
<h1>WSI model training</h1>
<p>Before training the WSI model, our medical intuition supported by this article led us to hypothesize that the relevant information for the subtype prediction was probably more located at low level. All the 513 available WSI in the training set were thus downscaled to 10x magnification. Since all these WSI had a black unicolor background, we then performed an otsu thresholding, in order to discard all the background tiles. We then tiled all the detected tissue into N non-overlapping 224px tiles. Insofar as it was not possible to infer tumor segmentation during submission because of the time limitation, we decided to keep all the tumor and the non-tumor tiles for training.<br>
All these N tiles were then encoded with CTransPath and Lunit-DINO, 2 feature extractors trained using self-supervised learning on diverse pathology datasets. According to this article about the robustness of these models to stain variations, we did not perform any kind of augmentation or normalization preprocessing.<br>
We then trained and evaluated several MIL architectures in a weighted ensemble. The best CV was obtained by combining three of them:</p>
<ul>
<li>Clustering-constrained attention MIL (<a href="https://github.com/mahmoodlab/CLAM" target="_blank">https://github.com/mahmoodlab/CLAM</a>)</li>
<li>Dual-stream MIL (<a href="https://github.com/binli123/dsmil-wsi" target="_blank">https://github.com/binli123/dsmil-wsi</a>)</li>
<li>A weighted sum of the embeddings.</li>
</ul>
<p>MIL training procedure and parameters:</p>
<ul>
<li>CV4, Stratified Group KFold</li>
<li>No augmentation</li>
<li>No normalization</li>
<li>Batch size =1, epochs = 32</li>
<li>AdamW optimizer, CosineAnnealingLR, LR=5e-3</li>
<li>Cross-Entropy Loss</li>
</ul>
<h1>TMA model training</h1>
<p>The UBC training dataset was coming with only 25 TMA samples and we know, according to the description, that TMA in the test set are the majority. We’ve detected around 65% to 70% of images with sides less than 6000 pixels. We’ve decided to generate some TMA based on the WSI provided in the training set. We’ve developed a custom augmentation that is detailed in this notebook: <a href="https://www.kaggle.com/code/mpware/ubc-tma-generator-from-wsi" target="_blank">https://www.kaggle.com/code/mpware/ubc-tma-generator-from-wsi</a></p>
<pre><code># function name and literal argument values were lost in the post's formatting; see the linked
# notebook for the exact settings. The three Stainer entries used the Vahadane, Macenko and Reinhard methods.
def build_tma_augmentations(tma_images, p=1.0):
    return A.Compose([
        # crop an ellipse-shaped "core" from the tile and fill the rest with a background color + noise
        SimulateTMA(..., radius_ratio=..., ellipse_ratio=..., angle=...,
                    background_color=..., background_color_ratio=...,
                    noise_level=..., black_replacement_color=..., p=..., always_apply=...),
        # stain augmentation: 2/3 with the 25 real TMAs as references, 1/3 with a generic reference
        A.OneOf([
            Stainer(ref_images=tma_images, method=..., luminosity=..., p=...),
            Stainer(ref_images=None, method=..., luminosity=..., p=...),
            Stainer(ref_images=tma_images, method=..., luminosity=..., p=...),
        ], p=...),
    ], p=p)
</code></pre>
<p>The idea is to identify tiles with tumoral tissue and crop an ellipse shape, as a real TMA core picked by an operator would look.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fd37aaf25fc882b4f0c477f975514b388%2Ftma_v1.png?generation=1704327065491575&alt=media" alt=""><br>
The crops are then augmented with stains based on the 25 TMA as references. As the WSI magnification is mainly x20 the generated TMA are also x20. Here are some generated samples:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2F7d4e508fdbfd96b7f04f8356408803d2%2Ftma_v2.png?generation=1704327082748102&alt=media" alt=""><br>
A final step was to review them manually to drop bad generated TMA (especially when tumor mask was not available / complete). Our best private LB (0.58) was with such validated TMA. Unfortunately we did not select it as final submission.<br>
<strong>MIL</strong> training procedure and parameters:</p>
<ul>
<li>CV4, Stratified Group KFold</li>
<li>Random batch sampler to balance samples</li>
<li>Augmentations: Stain: Vahadane, Macenko, Reinhard</li>
<li>Mask on attention, batch size = 32, epochs = 32</li>
<li>EMA</li>
<li>AdamW optimizer, CosineAnnealingLR, LR=1e-3</li>
<li>Cross Entropy Loss<br>
<br><br>
<strong>Transformer/CNN</strong> training procedure and parameters:</li>
<li>ImageNet pretrained backbones (<a href="https://github.com/huggingface/pytorch-image-models" target="_blank">Timm</a>):<ul>
<li>tiny_vit_21m_512.dist_in22k_ft_in1k</li>
<li>tf_efficientnetv2_s_in21ft1k</li></ul></li>
<li>Augmentations: <ul>
<li>H/V flips, Rot90</li>
<li>Stain: Vahadane, Macenko, Reinhard</li>
<li>Random BrightnessContrast/Gamma, HueSaturationValue, ColorJitter, CLAHE</li>
<li>GaussianBlur, MotionBlur, GaussNoise</li>
<li>Cut Mix, DropOut</li></ul></li>
<li>EMA</li>
<li>Batch size = 32, Epochs = 32</li>
<li>AdamW optimizer, CosineAnnealingLR, LR=1e-4</li>
<li>Cross Entropy Loss<br>
Models have been trained with full data after checking stability on cross validation.<br>
Here is a 2D t-SNE projection of TinyVit trained embeddings on generated 23k TMA:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fe0859f7f5adffe00c3e30cb246b40101%2Ftma_training_embeddings.png?generation=1704327121454397&alt=media" alt=""></li>
</ul>
<h1>Other useful strategies or approaches</h1>
<p>Validation was quite difficult, MIL models were overfitting quite fast. Using EMA helped to limit it. </p>
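<p>The EMA can be as simple as timm's <code>ModelEmaV2</code> wrapper (a sketch; the decay value and the training objects are assumptions):</p>
<pre><code>from timm.utils import ModelEmaV2

ema = ModelEmaV2(model, decay=0.999)        # decay not reported; illustrative value

for images, targets in train_loader:        # model/criterion/optimizer/train_loader as defined elsewhere
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    ema.update(model)                       # update the shadow weights after every optimizer step

# validate and predict with ema.module instead of model
</code></pre>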
<h1>Model inference</h1>
<p>Most of the inference time was lost in image loading/tiling. We’ve implemented multiprocess inference to benefit from all CPUs, but optimized it to balance the memory issues caused by loading large images concurrently. It reduced the loading + tiling of all images to around 5h30-6h. Feature extraction was the most time-consuming task, which is why we’ve limited ourselves to the two best extractors. We’ve limited the number of tiles to 350 max, and in the end our inference ran in around 11h15-11h30.</p>
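<p>The CPU-side parallelism could look like the sketch below; <code>load_and_tile</code> is a hypothetical helper standing in for the actual loading + tiling code:</p>
<pre><code>import glob
from multiprocessing import Pool

def load_and_tile(image_path):
    """Hypothetical helper: read the PNG, detect tissue and return up to 350 tiles as arrays."""
    tiles = []
    # ... tissue detection and tiling code omitted ...
    return image_path, tiles

if __name__ == '__main__':
    image_paths = sorted(glob.glob('test_images/*.png'))
    with Pool(processes=4) as pool:
        for image_path, tiles in pool.imap_unordered(load_and_tile, image_paths):
            pass  # feature extraction stays on the GPU in the main process
</code></pre>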
<h1>What did not work or improve?</h1>
<p>A quick summary of what did not work or did not improve:</p>
<ul>
<li>ResNet50-based feature extractors such as RetCCL and Lunit-BT.</li>
<li>External data:
<ul>
<li><a href="https://portal.gdc.cancer.gov/repository?facetTab=cases&filters=%7B%22op%22%3A%22and%22%2C%22content%22%3A%5B%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22cases.primary_site%22%2C%22value%22%3A%5B%22ovary%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.data_type%22%2C%22value%22%3A%5B%22Slide%20Image%22%5D%7D%7D%5D%7D" target="_blank">TCGA</a></li>
<li><a href="https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/" target="_blank">ATEC</a></li>
</ul>
Usually adding more data is always better, but here it did not help on either CV or LB. However, the quality of many slides was very bad and could explain it. Also the labeling of some was not obvious.</li>
<li>Training a WSI model based on an ImageNet pre-trained backbone: it worked, but CTransPath and Lunit-DINO outperformed it.</li>
<li>Downscaling to x5 for WSI models (instead of x10).</li>
<li>Training a tumor segmentation model in order to sample tumor TMAs: since subtypes have significant morphological variations, we preferred to train a stroma segmentation model and predict the carcinoma mask by complementarity. In the end it was impossible to set up a unique threshold for TMA selection because of the high variation in epithelial surface area between solid and mucinous architectures.</li>
<li>Pseudo labeling was not tried.</li>
</ul>
<h1>Sources</h1>
<ul>
<li>1) Deep Learning for Detecting BRCA Mutations in High-Grade Ovarian Cancer Based on an Innovative Tumor Segmentation Method From Whole Slide Images: <a href="https://www.modernpathology.org/article/S0893-3952(23)00209-0/fulltext" target="_blank">https://www.modernpathology.org/article/S0893-3952(23)00209-0/fulltext</a></li>
<li>2) A Good Feature Extractor Is All You Need for Weakly Supervised Learning in Histopathology: <a href="https://arxiv.org/pdf/2311.11772.pdf" target="_blank">https://arxiv.org/pdf/2311.11772.pdf</a></li>
</ul>
<p>We had a lot of fun solving this Kaggle. It was a lot of data to handle; in addition to the ML challenge, it was an optimization challenge to make the inference fast.</p>
<p>Raphaël Bourgade and MPWARE</p>
<p>Let me a few days to send a late submission to answer you</p>
<p>Let me try that</p>
<p><a href="https://www.kaggle.com/quan0095" target="_blank">@quan0095</a> I've submitted and replaced all WSI predictions with HGSC predictions:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F698363%2Fade4320e88b0a8fc9d5896d4b48537b3%2Ftma_ony.png?generation=1704477551260196&alt=media" alt=""></p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 1st Place Solution 🥇 [Owkin] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Indeed, we downloaded the training data locally. The models were trained locally.</p>
<p>We did not look into it. Looks interesting, though! Did you make a (late) submission using this LongViT?</p>
<h1>1st Place Solution 🥇 [Owkin] -- Phikon & Chowder</h1>
<h2>Introduction</h2>
<p>First of all, we would like to thank the University of British Columbia (UBC) for this exceptional multi-centric cohort and the Kaggle staff for organizing this competition. We got into this competition to showcase the efficiency and robustness of Phikon (<a href="https://huggingface.co./owkin/phikon" target="_blank">model card</a>, <a href="https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v1" target="_blank">paper</a>, <a href="https://huggingface.co./blog/EazyAl/phikon" target="_blank">blog post</a>), the foundation model (FM) for digital pathology made available to the community by <a href="https://www.owkin.com/" target="_blank">Owkin</a> last November. We are very pleased with the outcome and really enjoyed participating in this competition.</p>
<p>Our solution is straightforward: we trained an ensemble of <a href="https://arxiv.org/pdf/1802.02212.pdf" target="_blank">Chowder</a> models on top of Phikon tile embeddings. We used high entropy predictions to detect outliers. We did not use extra training data nor annotations (other than the ones provided by the organizers). Our <em>winning submission</em> submission scored 0.64/<strong>0.66</strong> (public/private) and our <em>top submission</em> scored 0.62/<strong>0.68</strong>. These submissions run in approximately 6 hours 🚀</p>
<p>Our code is available <a href="https://www.kaggle.com/code/jbschiratti/winning-submission" target="_blank">here</a>. We cleaned our code (removed comments, unused code, added sections…) and created the <code>winning_submission</code> notebook. After a late submission, this notebook scores 0.63/<strong>0.66</strong>. The slight difference on the public LB is likely due to differences in the sampling of patches.</p>
<p>Our solution write-up is structured as follows:</p>
<ol>
<li><a href="#1-our-main-takeaways">Main takeaways</a></li>
<li><a href="#2-matter-detection-and-tiling">Matter detection and tiling</a></li>
<li><a href="#3-feature-extraction">Feature extraction</a></li>
<li><a href="#4-subtypes-classification">Subtypes classification</a></li>
<li><a href="#5-outlier-detection">Outlier detection</a></li>
<li><a href="#6-some-(un)successful-ideas">Some (un)successful ideas</a></li>
<li><a href="#7-notes">Notes</a></li>
</ol>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F705762%2Fe5d2f490c74b954bf94ce12871659276%2Fovr_pipeline.png?generation=1704733650042787&alt=media" alt="Our pipeline"></p>
<h2>1. Our main takeaways</h2>
<ul>
<li><p>Foundation models (or domain-specific large vision models) are the way of the future. Our results further validate the effectiveness of <a href="https://huggingface.co./owkin/phikon" target="_blank">Phikon</a>, Owkin's foundation model for digital pathology. The next frontier is multimodality: combining spatial-omics, imaging, clinical data - from genotype to phenotype - and blending it with medical knowledge and reasoning powered by Large Language Models (LLMs).</p></li>
<li><p>Occam's razor: our simple and efficient pipeline outperformed more complex approaches. In particular, <a href="https://arxiv.org/pdf/1802.02212.pdf" target="_blank">Chowder</a> - a Multiple Instance Learning (MIL) model - is still on par with more recent MIL models (e.g. <a href="https://arxiv.org/abs/2106.00908" target="_blank">TransMIL</a>, <a href="https://arxiv.org/abs/2203.12081" target="_blank">DTFD-MIL</a>), especially when combined with ensembling techniques. As opposed to more elaborate MIL models, Chowder is also interpretable.</p></li>
<li><p>This competition was not easy! The PNG image format posed a major challenge: loading large PNG images with standard libraries (<a href="https://pillow.readthedocs.io/en/stable/" target="_blank">Pillow</a>, <a href="https://opencv.org/" target="_blank">OpenCV</a>) was time consuming and used a lot of RAM. Standard formats in digital pathology (SVS, TIFF, NDPI) store data pyramidally to prevent the need for loading the entire images into RAM. Although Kaggle staff members <a href="https://www.kaggle.com/competitions/UBC-OCEAN/discussion/446688" target="_blank">acknowledged that PNG format was a bad choice</a>, pyramidal images were not made available to the participants. Furthermore, either by design or as a result of converting pyramidal images to PNG, useful metadata such as mpp (image resolution in microns per pixels) and ICC profile (if available) were stripped from the images. In addition to this, working with images at different resolutions and dealing with outliers (rare variants and normal cases) in the test set made this competition challenging (and quite interesting!).</p></li>
<li><p>How well do our models generalize? Locally, in cross-validation (CV), the balanced accuracy scores were in the (0.8, 0.9) range. However, the scores on the public/private LB were at least 20 points lower. Obviously, we can (partly) explain this discrepancy by our ability to predict the 'Other' class. However, it also raises the question of how well our models generalize to new data (<em>i.e.</em> data points from new centers/hospitals). Even when using <a href="https://huggingface.co./owkin/phikon" target="_blank">Phikon</a>, it is likely that differences in tissue preparation, tissue staining or differences in scanner type and magnification across centers still hinder generalization.</p></li>
</ul>
<h2>2. Matter detection and tiling</h2>
<p>Whole Slide Images (WSI) in digital pathology are often too large and cannot be directly analyzed using convolutional neural networks. The WSI in this competition were no exception. A well-established workaround consists in splitting regions containing tissue into smaller patches (e.g. 224 x 224 px or 512 x 512 px). As a result, a WSI can be seen as a collection of hundreds, thousands of patches. Although Tissue Microarrays (TMA) were much smaller, these images were also split into patches.</p>
<h3>2.1. Matter detection</h3>
<p>In order to detect the regions of the WSI (or TMA) which contain tissue, we employed Otsu thresholding. This thresholding was applied to the thumbnail image, in the HSV color space. Although this method is not perfect, we found that it worked quite well on the images from this competition.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F705762%2Fd3dbaa1da15d2cbbfa5b7ce32381c3ad%2F1020_matter_detection.png?generation=1704733858157340&alt=media" alt=""></p>
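<p>A minimal version of that thresholding with OpenCV (the choice of the saturation channel and the lack of any morphology post-processing are assumptions of this sketch):</p>
<pre><code>import cv2

def tissue_mask_from_thumbnail(thumbnail_rgb):
    """Otsu thresholding of the thumbnail in HSV space; returns a boolean tissue mask."""
    hsv = cv2.cvtColor(thumbnail_rgb, cv2.COLOR_RGB2HSV)
    saturation = hsv[:, :, 1]
    _, mask = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask > 0
</code></pre>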
<h3>2.2. Tiling</h3>
<h4>Patch size</h4>
<p>We set the patch size to 224 x 224 px for WSI. According to the <a href="https://www.kaggle.com/competitions/UBC-OCEAN/data" target="_blank">data description</a>, the train set is composed of a majority of WSI at magnification 20x and few (25) TMA at magnification 40x. We hypothesized that given the low number of TMA in the train set, learning would be more efficient if we standardized all images (WSI or TMA) to a 20x resolution. Therefore, we set the patch size to 448 x 448 px for TMA; These patches were then resized to 224 x 224 px. As illustrated below [left: a 448 x 448 px patch from 91.png (TMA) resized to 224 x 224 px; Right: a 224 x 224 px patch from 4211.png (WSI)], cells have roughly the same size in resized tiles from TMA and tiles from WSI. At test time, we used a logistic regression (LR) to detect if an image is a TMA. The LR was trained on features extracted from the thumbnail of train images using a pretrained (ImageNet) ResNet18.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F705762%2F88d1e1c133cfaac9dee1cce00fb977a4%2Fsample_tiles.png?generation=1704733902739438&alt=media" alt=""></p>
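<p>The WSI-vs-TMA detector can be sketched as ImageNet ResNet18 features on thumbnails plus a scikit-learn logistic regression (preprocessing details are assumptions):</p>
<pre><code>import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.linear_model import LogisticRegression

backbone = models.resnet18(weights='IMAGENET1K_V1')
backbone.fc = nn.Identity()                 # 512-d thumbnail features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def thumbnail_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

clf = LogisticRegression(max_iter=1000)
clf.fit(thumbnail_features(train_thumbnails), is_tma_labels)  # training arrays assumed
</code></pre>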
<h4>Runtime 🕕</h4>
<ul>
<li><p>For efficient tiling, we developed custom C code which splits a large PNG image into patches and saves them to disk (also as PNG images). This C code leverages the <code>libpng</code> library. In particular, it uses the <code>png_read_row</code> function to limit the amount of data read and loaded into RAM. This C code was easily compiled in a Kaggle notebook. The performance of the compiled code is likely to be similar to pyvips.</p></li>
<li><p>With a patch size of 224 x 224 px, WSI have - on average - more than 10 000 patches. Processing that many patches was not feasible given the 12 hours runtime limit. With a limit of <strong>200 patches</strong> per image, our first submissions successfully ran in approximately 10 hours. Later, we used <a href="https://www.ray.io/" target="_blank">ray</a> to process images in pairs: matter detection + tiling + feature extraction for two images at a time on a single P100 GPU. Ray would spawn two processes, each using 0.5 GPU and 2 CPU cores. As a result, our submissions successfully ran in less than 7 hours. The limiting resource was the RAM; With more RAM, we could have processed 4 images at a time (on a single P100). Note that the 200 patches limit only applies to WSI. Given that TMA are small images, the number of 448 x 448 px patches hardly ever exceeded 50. Most TMA had less than 30 patches.</p></li>
</ul>
<h2>3. Feature extraction</h2>
<p><a href="https://huggingface.co./owkin/phikon" target="_blank">Phikon</a> is a ViT-Base pre-trained with <a href="https://github.com/bytedance/ibot" target="_blank">iBOT</a> on 40M tiles from the TCGA dataset (📝 see our <a href="https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v1" target="_blank">paper</a> for more detailed info). We benchmarked Phikon against multiple backbones including <a href="https://github.com/Xiyue-Wang/TransPath" target="_blank">CTransPath</a>, <a href="https://github.com/lunit-io/benchmark-ssl-pathology#pre-trained-weights" target="_blank">LUNIT</a> and <a href="https://github.com/facebookresearch/dinov2" target="_blank">DinoV2</a>. Phikon outperformed these models in our local cross-validation tests and showed no improvement when combined in an ensemble. For each input patch, a <code>(3, 224, 224)</code> tensor, Phikon outputs a 768-dimensional embedding vector. Therefore, a WSI or a TMA is represented as a 2D tensor with shape <code>(n_patches, 768)</code>.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F705762%2F13ea1a1f74c768e3f1c5075e25ea282b%2Fbackbones_scores.png?generation=1704734075166073&alt=media" alt=""></p>
<h2>4. Subtypes classification</h2>
<p>In order to predict the cancer subtypes from the extracted features, we considered several Multiple Instance Learning (MIL) models from our public GitHub repository <a href="https://github.com/owkin/HistoSSLscaling" target="_blank">HistoSSLScaling</a>: <a href="https://arxiv.org/pdf/1802.02212.pdf" target="_blank">Chowder</a>, <a href="https://arxiv.org/abs/1802.04712" target="_blank">DeepMIL</a>, <a href="https://arxiv.org/abs/2011.08939" target="_blank">DSMIL</a>, <a href="https://arxiv.org/abs/2106.00908" target="_blank">TransMIL</a> and MeanPool. Chowder outperformed others including DeepMIL, MeanPool, and DSMIL.</p>
<p>Before the competition deadline, we selected two submissions. One submission scored <strong>0.64</strong>/0.64 and consisted of an ensemble of 5 Chowder, 5 DeepMIL, 5 DSMIL and 5 MeanPool trained upon Phikon, CTranspath and LUNIT. We will detail the <em>winning submission</em> which only used Chowder and Phikon.</p>
<h3>4.1. Chowder's architecture</h3>
<p>The input dimension of Chowder was set to 768 (<em>i.e.</em> the dimension of Phikon embeddings) and its output dimension to 5. The first layer of Chowder is a <code>TilesMLP</code> layer with a hidden dimension of 192. The <code>n_top</code> and <code>n_bottom</code> values of its <code>ExtremeLayer</code> layer were both set to 10. The last layer of Chowder is a <code>MLP</code> with hidden dimension 96, a dropout rate of 30% and Sigmoid activation. We refer the reader to <a href="https://github.com/owkin/HistoSSLscaling/blob/main/rl_benchmarks/models/slide_models/chowder.py" target="_blank">this Python file</a> for an implementation of Chowder. These hyperparameters were manually selected (no hyperparameter tuning).</p>
<h3>4.2. Cross-validation</h3>
<p>Locally, we used stratified 5-fold cross-validation (CV) to estimate the predictive performance of our model. For each fold, four-fifths of the data were used to create a train-validation split (75%-25%) and the remaining fifth of the data was used as a test set. This stratified 5-fold CV was repeated 3 times (with different seeds changing how data was shuffled). We designed the stratified 5-fold CV to ensure that each test set would contain at least 60% of TMA. Hence, most of the TMA were used for evaluation and not for training.</p>
<h3>4.3. Training</h3>
<p>Our models were trained for a maximum of 30 epochs. The validation set was used for early stopping (using the validation balanced accuracy as stopping criterion) with a patience parameter of 4 epochs. In addition to this, we used the <a href="https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html" target="_blank">AdamW</a> optimizer with a constant learning rate of 0.0001 and weight decay of 0.001. The loss was the <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html" target="_blank">Cross-Entropy (CE) loss</a>.</p>
<p>Two strategies were implemented to mitigate class imbalance (a sketch follows the list):</p>
<ol>
<li><a href="https://pytorch.org/docs/stable/data.html#torch.utils.data.WeightedRandomSampler" target="_blank">Weighted sampling</a> to create balanced batches for training,</li>
<li>Using class weights in <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss" target="_blank">CE loss</a>.</li>
</ol>
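<p>Both strategies can be written in a few lines of PyTorch (a sketch; <code>train_labels</code> is an integer-encoded label array and <code>train_dataset</code> is assumed):</p>
<pre><code>import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

counts = np.bincount(train_labels)                      # samples per class
class_weights = 1.0 / counts                            # inverse-frequency weights

# 1) balanced batches via weighted sampling
sample_weights = torch.as_tensor(class_weights[train_labels], dtype=torch.double)
sampler = WeightedRandomSampler(sample_weights, num_samples=len(train_labels), replacement=True)
train_loader = DataLoader(train_dataset, batch_size=16, sampler=sampler)

# 2) class weights inside the cross-entropy loss
criterion = torch.nn.CrossEntropyLoss(weight=torch.as_tensor(class_weights, dtype=torch.float32))
</code></pre>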
<h3>4.4. The ensembling trick</h3>
<p>Chowder can be quite sensitive to weight initialization. Instead of training a single Chowder model, we decided to train an ensemble of N=50 Chowder models. The Chowder models in the ensemble only differ by their initialization. We found the ensemble to be more stable during training - and more efficient - than a single Chowder model. The Python code below is copied from our winning submission:</p>
<pre><code># copied from the winning submission; some literals (dims, range) were lost in the post's
# formatting and are reconstructed here
class ModelEnsemble(nn.ModuleList):
    def __init__(self, models) -> None:
        super().__init__(modules=models)

    def forward(self, x, mask=None) -> torch.Tensor:
        predictions, scores = [], []
        for model in self:
            logits_, scores_ = model(x, mask)
            predictions.append(logits_.unsqueeze(-1))
            scores.append(torch.mean(scores_, dim=1, keepdim=True).unsqueeze(-1))
        predictions = torch.cat(predictions, dim=-1)
        scores = torch.cat(scores, dim=-1)
        return predictions, scores

chowder_models = [Chowder(**chowder_kwargs) for _ in range(50)]  # N=50 models, different inits
model = ModelEnsemble(chowder_models)
</code></pre>
<h3>4.5. Submissions</h3>
<p>Three repetitions of stratified 5-fold CV with an ensemble of 50 Chowder lead to a great number of Chowder models! Through several submissions, we noticed that it was more efficient to select specific repetitions and folds rather than ensembling all the 3 x 5 x 50 models. Our winning submission is the <strong>average prediction of 65 Chowder models</strong> trained on different data splits. We calibrated these models using a logistic regression on their internal validation set (CV), which appeared to yield a slight improvement on the public LB. After calibration, we added a “model filtering” step: we selected only a subset of the 50 Chowder models in an ensemble based on the performance of the calibrated models on their internal test set (CV).</p>
<h2>5. Outlier detection</h2>
<p>With balanced accuracy as the metric, correctly predicting the 'Other' category, worth 16.6 points (100/6), was key! Our strategy to identify outliers raised our public leaderboard score from 0.59 to 0.64, highlighting its importance as the most challenging class.</p>
<p>We found that using a threshold on the entropy of predictions, calculated as H = -sum(p*log(p)), was most effective for us, with high entropy indicating uncertainty in predictions. As no outliers were provided, we calibrated this threshold based on the public leaderboard.</p>
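<p>The rule itself fits in a few lines (the threshold is a placeholder here, since the real one was calibrated on the public leaderboard):</p>
<pre><code>import numpy as np

def entropy(p, eps=1e-9):
    """Shannon entropy of each row of class probabilities: H = -sum(p * log(p))."""
    return -np.sum(p * np.log(p + eps), axis=1)

ENTROPY_THRESHOLD = 1.0                                                   # placeholder value
class_names = np.array(['HGSC', 'EC', 'CC', 'LGSC', 'MC'], dtype=object)  # order is illustrative
labels = class_names[probs.argmax(axis=1)]                # probs: (n_images, 5) averaged outputs
labels[entropy(probs) > ENTROPY_THRESHOLD] = 'Other'      # uncertain predictions become outliers
</code></pre>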
<h2>6. Some (un)successful ideas</h2>
<ul>
<li><p>To account for inter-center variability, we tried several color normalization schemes (Vahadane, Reinhard) but did not observe any improvement doing so. This finding is aligned with <a href="https://www.nature.com/articles/s41598-023-46619-6" target="_blank">recent publications</a> suggesting that staining normalization does not improve the performance of models for histopathological classification tasks.</p></li>
<li><p>Increasing the number of patches for TMA using a sliding window (with 30% to 80% overlap between two consecutive patches). This method significantly increased the runtime of our submissions (most failing with <code>Notebook Timeout</code>) and did not provide any performance improvement.</p></li>
<li><p>We explored the idea of identifying normal cases using a tumor detection model trained on the provided annotations. We binarized the annotations (tumor=1, stroma/necrosis=0) and trained a logistic regression on patch features to identify patches containing tumor. The percentage of tumor patches in a WSI/TMA would be used to identify normal cases. This idea did not provide any performance improvement on the public LB.</p></li>
<li><p>We used the <a href="https://docs.ray.io/en/latest/tune/index.html" target="_blank">Ray Tune</a> library to do hyperparameter tuning with <a href="https://arxiv.org/pdf/1802.02212.pdf" target="_blank">Chowder</a> and <a href="https://arxiv.org/abs/1802.04712" target="_blank">DeepMIL</a>. The hyperparameters we optimized for were: batch size, number of training epochs, learning rate, dimensions of hidden layers in MLP, activation functions. The hyperparameter tuning resulted in an increase of our local CV scores, but in a decrease of our submissions scores on the public LB. We hypothesized that hyperparameter tuning led to overfitting the train set and dropped the idea.</p></li>
<li><p>[Successful idea 💡] Fine-tuning Phikon. Here, we’re not talking about fine-tuning in a conventional way; Instead, we mean pretraining a ViT-Base, initialized with Phikon’s weights, using <a href="https://github.com/bytedance/ibot" target="_blank">iBOT</a> on patches from the images in the train set. To do so, we extracted a total of 6.5M patches (224 x 224 px RGB) from the images in the train set. Following the recent paper from <a href="https://arxiv.org/abs/2309.16588" target="_blank">Darcet et al. 2023</a> and the work of <a href="https://github.com/facebookresearch/dinov2" target="_blank">Dino V2</a>, we added 4 register tokens to the ViT-Base. This ViT was trained for a single epoch with an initial learning rate of 0.0005 and batch size (per device) of 32. A single epoch took 2.5 days on 2 NVIDIA P100 GPUs. In a submission, we combined Chowder models trained on features extracted with the <em>original Phikon</em> and Chowder models trained on features extracted with the <em>fine-tuned Phikon</em> (with register tokens). This submission scored 0.62/<strong>0.67</strong> (not selected for the final evaluation).</p></li>
<li><p>[Successful idea 💡] Using the variance of predictions (across models in an ensemble of Chowder) to identify outliers. A submission implementing this idea (along with the entropy of predictions) scored 0.59/<strong>0.67</strong> (not selected for the final evaluation).</p></li>
</ul>
<h2>7. Notes</h2>
<p>The magnification of the images proved to be a key information. We found that resizing the TMA patches to match those from WSI allowed us to have consistent performance over all images in the test set. Keeping the original pyramidal image would have saved the participants from cumbersome image processing and could have led to overall better performances. </p>
<p>For each image (WSI or TMA), we converted the extracted patches to grayscale and applied contrast equalization; Phikon features were then averaged across all available patches. We applied UMAP dimension reduction to these averaged representations and noticed that 38 WSI could be told apart from the remaining 500 images. Red cells can be used as a scale to compare the resolution levels. Comparing red-cells’ size in slide 431 (among the main group of slides) and slide 4 (among the 38 odd slides) shows there is a substantial difference in resolution between the two groups, with an estimated x12 magnification for the odd slides, instead of the normal x20 magnification. This could be the result of a failed conversion from the original .svs (or .tiff) format of the slide to .png, resulting from the selection of the wrong zoom level. </p>
<p>Such resolution variances are significant: 8% of all slides exhibited incorrect zoom levels impacting model performance. Notably, 20% (9 out of 47) of all LGSC slides had an incorrect zoom level. Finally, this UMAP also shows that after downsampling the TMA, their features are mixed with those of the WSI, this confirms our ability to use a single approach for these two data types.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F705762%2Fac94b6191f21b8c8184f40f03800a74f%2Fumap.png?generation=1704734453560448&alt=media" alt=""></p>
<p>The ids of the odd looking slides are the following: [4,970,1080,2097,3222,3511,3881,9509,12159,12244,13364,13387,15583,15871,25604,26124,29888,31300,31793,32035,32192,33839,34688,34720,40079,40639,41099,44432,44700,49995,51215,52308,52784,53402,61100,63298,63836,64629]</p>
<p>jbschiratti, on behalf of Owkin's team</p>
<p>Thanks! Indeed, we are convinced that <em>domain-specific</em> large vision models are the way to go. Andrew Ng shared <a href="https://www.linkedin.com/posts/andrewyng_the-lvm-large-vision-model-revolution-is-activity-7137483177714995200-nxlM?utm_source=share&utm_medium=member_desktop" target="_blank">a post on LinkedIn</a> about a month ago with a similar point of view.</p>
<p>Discussions are still ongoing.</p>
<p>Yes, at some point we developed our own segmentation model (trained on the annotated data) to identify tumor regions on new images and sample patches in them. It turns out that it didn't improve our submissions scores on the public LB. Therefore,we decided not to use this segmentation model to guide the sampling of patches. A posteriori, I'm thinking that non-tumor patches may also matter for subtypes classification (TBD with a pathologist).</p>
<p>"Fine-tuning Phikon" should be understood as: pre-train with iBOT a ViT-Base, initialized with Phikon's weights, on a large dataset of patches (extracted from the images in the train set). To do so, we just used the code from the <a href="https://github.com/bytedance/ibot" target="_blank">iBOT Github repository</a>. Locally (in CV), the results were slightly worse with those "fine-tuned" versions of Phikon (see the fourth figure of our write-up). As a result, the "fine-tuned" version of Phikon only did not improve our scores (public/private). However, in a submission (not selected for final evaluation), we combined Chowder models trained on top of the <em>original</em> Phikon + Chowder models trained on top of the <em>fine-tuned</em> Phikon. This submission scored 0.67 on the private LB. Overall, this idea of "fine-tuning Phikon" is promising but further work/research is needed.</p>
<p>We did some experiments with the annotated data (masks) but it did not increase our score on the public LB. </p>
<p>For instance, we used the annotated data to train a tumor prediction model at the patch level (i.e. a model which predicts if a given patch contains tumor or not). To do so, we binarized the annotations (tumor=1, stroma/necrosis=0). Once trained, we used model (at test time) to quantify the percentage of tumor patches in an image. If this percentage was lower than a given threshold (e.g. 1%), we would predict "Other". Although it seemed like a good idea (especially to identify the "normal cases"), it did not improve (nor decrease) our scores.</p>
<p>Thank you for your interest in our work. <br>
We are currently in the process of discussing this with the Kaggle staff and the organizing team (UBC).</p>
<p>Thank you for your comment.</p>
<p>Indeed, it seems that working at 20x resolution is preferable with Phikon. As explained in the associated <a href="https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2.full.pdf" target="_blank">publication</a>, Phikon was pretrained on tiles at mpp 0.5 (equiv. 20x). </p>
<p>For <em>this</em> specific competition/task, Chowder worked well with Phikon embeddings. </p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 20th Place Solution - UBC-OCEAN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Close! The input_shape during training is <code>batch_size x 8 x 3 x 1280 x 1280</code>. We used a package called <a href="https://einops.rocks/" target="_blank">einops</a> to reshape the tensors in the <code>forward</code> function. Something like this.</p>
<pre><code># attribute names are reconstructed from context; rearrange comes from einops
def forward(self, x):
    # x: (batch_size, n_tiles=8, 3, 1280, 1280)
    b, t = x.size()[:2]
    # Feature extractor
    x = rearrange(x, 'b t c h w -> (b t) c h w')
    x = self.backbone(x)
    x = self.global_pool(x)
    x = rearrange(x, '(b t) f -> b t f', b=b, t=t)
    # Attention Pooling
    a = self.attention(x)
    a = torch.softmax(a, dim=1)
    x = torch.sum(x * a, dim=1)
    x = self.head(x)
    return x
</code></pre>
<p>Thanks <a href="https://www.kaggle.com/jirkaborovec" target="_blank">@jirkaborovec</a> for all your contributions to lightning! It is a great package.</p>
<p>We did select the tiles somewhat sequentially by creating a list of sequential x,y pairs, and then iterating over the list and calling the <code>.crop()</code> function. Not sure how the processing time would change if we shuffled the pairs.. </p>
<p>Sure, I just created a notebook <a href="https://www.kaggle.com/code/brendanartley/ubco-efficient-tiling-code/notebook" target="_blank">here</a> with the tiling code. <a href="https://www.kaggle.com/seeingtimes" target="_blank">@seeingtimes</a>, <a href="https://www.kaggle.com/samu2505" target="_blank">@samu2505</a>, <a href="https://www.kaggle.com/jirkaborovec" target="_blank">@jirkaborovec</a>.</p>
<p>Sure! Here is the pseudocode.</p>
<pre><code>1. Load the thumbnail
2. Select areas (x, y) of the thumbnail that are cell tissue
3. Convert the (x, y) pairs to full-resolution coordinates
4. Load each tile and compute its median pixel value
5. Select the top tiles with the darkest median pixel value
</code></pre>
<p>This process took ~4hrs to save all tiles on submission.</p>
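<p>A simplified version of those steps (the <code>read_region</code> helper and the thumbnail tissue check are hypothetical stand-ins for the actual code):</p>
<pre><code>import numpy as np

def select_tiles(thumbnail, read_region, scale, tile=1280, n_tiles=8, std_thresh=10):
    """Prefilter on the thumbnail, then keep the full-resolution tiles with the darkest median value."""
    th_tile = max(1, int(round(tile / scale)))             # tile size in thumbnail pixels
    coords = []
    h, w = thumbnail.shape[:2]
    for y in range(0, h - th_tile + 1, th_tile):
        for x in range(0, w - th_tile + 1, th_tile):
            patch = thumbnail[y:y + th_tile, x:x + th_tile]
            if patch.std() > std_thresh:                   # crude "is tissue" check on the thumbnail
                coords.append((int(y * scale), int(x * scale)))
    tiles = [read_region(x, y, tile) for y, x in coords]   # full-resolution crops
    tiles.sort(key=lambda t: np.median(t))                 # darkest median pixel value first
    return tiles[:n_tiles]
</code></pre>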
<p>Although I am posting the write-up, this was a great team effort by <a href="https://www.kaggle.com/kevin0912" target="_blank">@kevin0912</a> and me. Also, thanks to UBC for hosting this competition, it was a fun competition, and interesting working with such large images! </p>
<p>Our solution is based on a multiple instance learning (MIL) architecture with attention pooling. We use an ensemble of <code>efficientnet_b2</code>, <code>tf_efficientnetv2_b2.in1k</code> and <code>regnety_016.tv2_in1k</code> backbones trained on sequences of 8 x 1280 x 1280 images, and ignore the <code>other</code> class. We also apply light TTA during inference (rot90, flips, transpose, random image order).</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2Fcd22d830c5e4d19b09cb8af0a71a65be%2Farchitecture.jpg?generation=1704326896955132&alt=media" alt="Cropper"></p>
<h2>Strategies</h2>
<p><strong>Efficient Tiling</strong></p>
<p>We select tiles from WSIs based on the darkest median pixel value. To make the pipeline more efficient, we use multiprocessing on 3 CPU cores, and prefilter crop locations using the smaller thumbnail images. This prefiltering selects the largest area of tissue on the slide and ignores other smaller areas of tissue.</p>
<p>For TMAs, we take 5 central crops of size 2560 x 2560 and resize to 1280 x 1280 to match WSI magnification. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2F983d5d3df388060aaceed59f780c2f1e%2Fsmart_cropper.jpg?generation=1704327003387823&alt=media" alt="Cropper"></p>
<p>Although efficient, a limitation of the pipeline is that it may not extract informative tiles from each image. We also experimented with a lightweight tile classifier trained on the ~150 segmentation masks, but this did not improve tile selection.</p>
<p><strong>Modeling</strong></p>
<p>We trained each model for 20-30 epochs with heavy augmentations and SWA (Stochastic Weight Averaging). Most models were trained on all the WSIs and TMAs, but some were trained using synthetically generated TMAs (aka. TMA Planets) from the <a href="https://www.kaggle.com/datasets/sohier/ubc-ovarian-cancer-competition-supplemental-masks" target="_blank">supplemental masks</a>. We would likely have explored TMA planets further but we were skeptical of the mask quality, and low count relative to the total number of WSIs.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F5570735%2F63447a1f1ace988b300842a49ec40c53%2Ftma_planet.JPG?generation=1704326966847250&alt=media" alt="Cropper"></p>
<p><strong>OOF Relabel + Remove</strong></p>
<p>Based on <a href="https://www.kaggle.com/competitions/UBC-OCEAN/discussion/445804#2559062" target="_blank">Noli Alonso's comments</a>, we removed ~5% of the images and relabelled 8 images. We used a similar denoising method to that in the <a href="https://www.kaggle.com/competitions/prostate-cancer-grade-assessment/discussion/169143" target="_blank">1st place solution</a> of the <a href="https://www.kaggle.com/competitions/prostate-cancer-grade-assessment/overview" target="_blank">PANDA Competition</a>.</p>
<pre><code>relabel_dict = {
'3': 'MC',
'5': 'LGSC',
'2': 'CC',
'8': 'LGSC',
'9': 'MC',
'7': 'EC',
'4': 'CC',
'6': 'LGSC',
}
</code></pre>
<h2>External Data</h2>
<p>The only external dataset we used was the <a href="https://www.medicalimageanalysis.com/data/ovarian-carcinomas-histopathology-dataset" target="_blank">Ovarian Carcinoma Histopathology Dataset (SFU)</a>. This dataset had 80 WSIs at 40x magnification from 6 different pathology centers.</p>
<p>Class distribution: <code>{'HGSC': 30, 'CC': 20, 'EC': 11, 'MC': 10, 'LGSC': 9}</code></p>
<h2>Did not work for us</h2>
<ul>
<li>Larger backbones</li>
<li>Lightweight tile classifier</li>
<li>Stain normalization (staintools, stainnet, etc.)</li>
<li>JPGs</li>
</ul>
<h2>Frameworks</h2>
<ul>
<li><a href="https://lightning.ai/docs/pytorch/stable/" target="_blank">Pytorch Lightning</a> (training)</li>
<li><a href="https://wandb.ai/site" target="_blank">Weights + Biases</a> (logging)</li>
<li><a href="https://huggingface.co./timm" target="_blank">Timm</a> (backbones)</li>
</ul>
<p>Thanks <a href="https://www.kaggle.com/huyduong7101" target="_blank">@huyduong7101</a>! </p>
<p>We had submissions reach 0.54 with and without this. I suspect our ensemble of 16 models helped overcome the noisy labels.</p>
<p>Thanks <a href="https://www.kaggle.com/crsuthikshnkumar" target="_blank">@crsuthikshnkumar</a>! We tried predicting the “other” class using prediction variance and a high threshold, but it did not work well. In the end we just ignored the “other” class.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 28th solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It was an interesting competition and I have learned many techniques and insights from others!<br>
Thanks to the competition host, Kaggle, and <a href="https://www.kaggle.com/jirkaborovec" target="_blank">@jirkaborovec</a> for nice train and inference code.</p>
<h1>Overview of my approach</h1>
<ul>
<li>TMA images : center crop and tiling and inference with TMA model </li>
<li>WSI images : tile the WSI thumbnail images and classify tumor area with tumor classifier model, and extract tiles from original WSI images and classify sub-types with TMA model </li>
<li><a href="https://github.com/jinhopark8345/UBC-OCEAN-30th-place-solution" target="_blank">train/inference code</a> </li>
</ul>
<h1>TMA pipeline</h1>
<ul>
<li>Train <ul>
<li>TMA model : extract tiles from WSI images with supplemental masks (crop size : 1024x1024 -> resize 512x512) and fine-tune <a href="https://huggingface.co./timm/maxvit_tiny_tf_512.in1k" target="_blank">maxvit_tiny_tf_512.in1k</a><br>
(For TMA model training, I used tiles with more than 70% cancerous tumor pixels and for validation, 30% ~ 70% tumor pixels)</li></ul></li>
<li>Inference <ul>
<li>cropped and resized TMA tiles (extract tiles from TMA images with 2048x2048 resolution, and resize them to 512x512, stride 256, zoom : x40->x10)</li>
<li>inference with TMA model -> each tile with predicted ovarian sub type</li>
<li>majority votes and make final prediction</li></ul></li>
</ul>
<h1>WSI pipeline detail</h1>
<ul>
<li><p>Train</p>
<ul>
<li>Tumor classifier : TMA model but with WSI thumbnails and compressed WSI supplemental masks</li>
<li>(TMA model : used the same TMA model from TMA pipeline)</li></ul></li>
<li><p>Inference</p>
<ul>
<li>tile WSI thumbnail image</li>
<li>inference with Tumor classifier -> each thumbnail tile with tumor or non-tumor result<ul>
<li>no tumor tiles -> 'Other'</li>
<li>tumor tiles -> center crop and pass it to TMA model and do majority votes and make final prediction</li></ul></li></ul></li>
</ul>
<h1>Tried but didn't work</h1>
<ul>
<li><a href="https://github.com/khtao/StainNet" target="_blank">StainNet</a> did not give better results than simple normalization</li>
</ul> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 2nd Place Solution - UBC-OCEAN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Just manually. So in the end, I didn't get too many samples from here.😂</p>
<p>As shown in my code above, we first select the coordinates of valid patches using the thumbnails, then crop those patches from the full-resolution images and extract features, and finally train the MIL model.</p>
<p>Thanks🥳. This is indeed due to the time limit. A TMA has fewer than a hundred patches, so of course all of them can be used for prediction. Therefore, discarding patches only from some large images is better than discarding patches from all images at a fixed ratio. Moreover, TMAs account for a considerable proportion of the private data.</p>
<p>It really has some data that does not belong to the five classes. However, there is almost no significant improvement compared to not using the Other class.😂</p>
<p>I'm sorry for the mistake I made when inserting the website address before. It has been corrected and all can be opened.🙂</p>
<p>Thank you for your attention. We will release all the code and data in a few days. And all links have been corrected.</p>
<h1>Preface</h1>
<p>The most significant difficulty in whole slide image (WSI) classification is the extremely high resolution, which all competitors will have experienced. Although the competition organizers provided a data format that is difficult to process, fortunately the resolution of the data is much lower than that of typical WSIs. In this discussion, we will provide a detailed introduction to our method.</p>
<h1>Overview</h1>
<p>Following the methods commonly used in academia, we took the following steps:</p>
<ol>
<li><strong>Crop</strong> an entire WSI into thousands of <strong>patches</strong>;</li>
<li>Use extractors to <strong>extract the features</strong>;</li>
<li>Train the <strong>MIL</strong> models.</li>
</ol>
<h1>External Data</h1>
<p>We used two external datasets with labels, both of which all competitors could download free of charge. We found that although we used more external data and an Other class for training, there was no significant improvement in score. We believe this is due to quality issues with the external data or significant differences from the competition data. Just as some competitors achieved high scores without using any external data, we believe external data is not necessary in this competition.</p>
<ul>
<li><a href="https://wirtualnymikroskop.mostwiedzy.pl/list/" target="_blank">https://wirtualnymikroskop.mostwiedzy.pl/list/</a></li>
<li><a href="https://www.cancerimagingarchive.net/collection/ptrc-hgsoc/" target="_blank">https://www.cancerimagingarchive.net/collection/ptrc-hgsoc/</a></li>
</ul>
<h1>Crop Patches and Extract Features</h1>
<p>We create one Dataset for one WSI. Code is here:</p>
<pre><code> ():
():
().__init__()
self.data_path = data_path
self.wsi_name = wsi_name
self.ratio = ratio
mode [, ]
self.mode = mode
self.wsi = pyvips.Image.new_from_file(os.path.join(data_path, , wsi_name + ))
self.is_tma = self.wsi.height < self.wsi.width <
self.patch_size = patch_size
self.transform = T.Compose([T.ToTensor(), T.Resize((, ), antialias=), T.Normalize(mean=[, , ], std=[, , ])])
self.cor_list = self.get_patch()
():
cor_list = []
self.is_tma:
thumbnail = self.wsi
:
thumbnail = pyvips.Image.new_from_file(os.path.join(self.data_path, , self.wsi_name + ))
wsi_width, wsi_height = self.wsi.width, self.wsi.height
thu_width, thu_height = thumbnail.width, thumbnail.height
h_r, w_r = wsi_height / thu_height, wsi_width / thu_width
down_h, down_w = (self.patch_size / h_r), (self.patch_size / w_r)
cors = [(x, y) y (, thu_height, down_h) x (, thu_width, down_w)]
x, y cors:
tile = thumbnail.crop(x, y, (down_w, thu_width - x), (down_h, thu_height - y)).numpy()[..., :]
black_bg = np.mean(tile, axis=) <
tile[black_bg, :] =
mask_bg = np.mean(tile, axis=) >
np.(mask_bg) < (down_h, thu_height - y) * (down_w, thu_width - x) * (cor_list) == self.is_tma:
cor_list.append(((x * w_r), (y * h_r)))
self.is_tma:
cor_list
self.wsi.height < self.wsi.width < :
R_ratio =
self.wsi.height < self.wsi.width < :
R_ratio =
:
R_ratio =
random.shuffle(cor_list)
cor_list = cor_list[:(((cor_list) * R_ratio), )]
cor_list
():
(self.cor_list)
():
x, y = self.cor_list[idx]
tile = self.wsi.crop(x, y, (self.patch_size, self.wsi.width - x), (self.patch_size, self.wsi.height - y)).numpy()[..., :]
tile = self.transform(tile)
tile
</code></pre>
<h1>Feature Extraction Model</h1>
<p>We used <strong>dino_vit_small_patch16_200ep.torch</strong> and <strong>dino_vit_small_patch8_200ep.torch</strong>.</p>
<ul>
<li><a href="https://github.com/lunit-io/benchmark-ssl-pathology/releases/tag/pretrained-weights" target="_blank">https://github.com/lunit-io/benchmark-ssl-pathology/releases/tag/pretrained-weights</a></li>
</ul>
<h1>MIL Model</h1>
<ul>
<li>ABMIL</li>
<li>DSMIL</li>
<li>TransMIL</li>
</ul>
<h1>Codes</h1>
<p>Simplified Version</p>
<ul>
<li><a href="https://www.kaggle.com/code/zznznb/wsi-train" target="_blank">https://www.kaggle.com/code/zznznb/wsi-train</a></li>
<li><a href="https://www.kaggle.com/code/zznznb/wsi-inference-public-0-6-private-0-58" target="_blank">https://www.kaggle.com/code/zznznb/wsi-inference-public-0-6-private-0-58</a></li>
</ul>
<p>Final Version</p>
<ul>
<li><a href="https://www.kaggle.com/code/hustzx/2nd-0-61-train-abmil-dsmil-transmil" target="_blank">https://www.kaggle.com/code/hustzx/2nd-0-61-train-abmil-dsmil-transmil</a></li>
<li><a href="https://www.kaggle.com/code/hustzx/2nd-0-61-infernece-abmil-dsmil-transmil" target="_blank">https://www.kaggle.com/code/hustzx/2nd-0-61-infernece-abmil-dsmil-transmil</a></li>
</ul>
<p>Feature Extraction Codes</p>
<ul>
<li><a href="https://github.com/z1186464862/UBC-OCEAN" target="_blank">https://github.com/z1186464862/UBC-OCEAN</a></li>
</ul>
<p>We collected some samples that did not belong to the original five classes as an independent Other class, so we simply selected the class with the highest score as the prediction result. As for multiple classes appearing in one image, I think it depends on which patches the model considers to have more prominent features.</p>
<p>Thank you all for your attention. I provide a simplified version of the codes that is close to our final score. I hope they are helpful to you.</p>
<p>Thank you for your information. We have tried this method before, but it cannot be determined whether it is effective in public score during the competition😄</p>
<p>Please see <strong>"beifen.csv"</strong> at <a href="https://www.kaggle.com/datasets/zznznb/checkpoints" target="_blank">https://www.kaggle.com/datasets/zznznb/checkpoints</a>. "extrain.csv" is a subset of "beifen.csv". It contains the WSI information, including external data. Generating codes can be found at <a href="https://github.com/z1186464862/UBC-OCEAN" target="_blank">https://github.com/z1186464862/UBC-OCEAN</a>.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 30th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was my first Kaggle competition and it was a fun ride. Many thanks for the organizers and everyone involved. Special thanks to <a href="https://www.kaggle.com/jirkaborovec" target="_blank">@jirkaborovec</a> whose public notebooks helped me a lot at the beginning of this competition to kickstart my approach. </p>
<h1>Best model</h1>
<p>I used a simple tile-based approach where I trained a ConvNeXt_Tiny model to classify each tile as one of the 5 categories:</p>
<ul>
<li>ConvNeXt_Tiny, pretrained</li>
<li>512x512 px image size</li>
<li>batch size: 32</li>
<li>random horizontal/vertical flip augmentation</li>
<li>color augmentation (brightness, contrast, saturation, hue)</li>
<li>color normalization</li>
<li>Cross-Entropy Loss</li>
<li>AdamW optimizer, StepLR (step_size=2, gamma=0.1), LR=1e-4</li>
<li>1 + 5 epochs (convolutional layers frozen for the initial epoch)</li>
<li>CV5, Stratified Group KFold</li>
</ul>
<h1>WSI tiling</h1>
<p>Using the supplemental masks, I trained an EfficientNet_B0 (pretrained) model with similar training parameters as the main model to detect tiles with tumor (0.97 CV5 balanced accuracy). As training data, I selected tiles with tumor label > 95% for the tumor class and tiles with stroma+necrosis label > 50% and tumor label < 5% for the no-tumor class.</p>
<p>I cut each WSI into 1024x1024 px tiles (dropping tiles with > 60 % black background) and used 32 random tumor tiles (based on the EfficientNet_B0 model with 0.5 threshold) from each image for both training and inference. Predictions were averaged between tiles during inference.</p>
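<p>A minimal sketch of the inference path just described (keep tiles the tumor detector scores above 0.5, take up to 32 of them at random, and average the subtype predictions); the model interfaces are assumptions for illustration.</p>
<pre><code>import torch

@torch.no_grad()
def predict_wsi(tiles: torch.Tensor, tumor_model, subtype_model, k: int = 32, thresh: float = 0.5):
    """tiles: (N, 3, H, W) tensor of preprocessed 1024x1024 tiles from one WSI."""
    tumor_prob = torch.sigmoid(tumor_model(tiles)).reshape(-1)   # (N,) tumor probability per tile
    keep = torch.nonzero(tumor_prob > thresh).reshape(-1)
    if keep.numel() == 0:                                        # fall back to all tiles
        keep = torch.arange(tiles.size(0))
    keep = keep[torch.randperm(keep.numel())[:k]]                # up to k random tumor tiles
    logits = subtype_model(tiles[keep])                          # (k, 5) class logits
    return logits.softmax(dim=1).mean(dim=0)                     # averaged class probabilities
</code></pre>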
<h1>TMA “tiling”</h1>
<p>One 2048x2048 px (to compensate for the x2 magnification) tile was cut from the center of the image.</p>
<h1>Other class</h1>
<p>Using a sigmoid activation function, an image was marked as Other if the largest activation was smaller than 0.8.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F8375965%2F3753ea56085656582000e41a1a4666aa%2Fubc.png?generation=1704405042031267&alt=media" alt=""></p>
<p>(As an extra: some accidental augmented images with questionable artistic value.)</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 38th solution (Private 0.52, Hight score 0.55) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, applying argmax to the BCE model reduced the positive detection ability in LGSC. I used two models</p>
<ol>
<li>Model trained with BCE for outlier detection</li>
<li>This is a model learned with CE for class classification.</li>
</ol>
<p>If there were no classes exceeding the threshold (0.4~0.5) in the BCE model, it was passed on to the CE model to classify the correct subclass.</p>
<p>(This didn't work for pb, but it worked for lb)</p>
<p>The label was used by one-hot encoding.</p>
<p>ex) HGSC -> [1, 0, 0, 0, 0], EC -> [0, 0, 1, 0, 0] -> training with BCEWithLogitsLoss</p>
<p>By doing this, unlike Cross Entropy, I thought I would be able to detect outliers because I could find the probability of each class alone, and there was an increase of about 0.1 points.</p>
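<p>A minimal sketch of the one-hot + BCEWithLogitsLoss setup described above; the model and variable names are placeholders.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.BCEWithLogitsLoss()

def training_step(model, images, label_idx, n_classes=5):
    # one-hot targets, e.g. HGSC -> [1, 0, 0, 0, 0]
    targets = F.one_hot(label_idx, num_classes=n_classes).float()
    logits = model(images)                  # (B, 5)
    return criterion(logits, targets)

def is_outlier(logits, thresh=0.5):
    # with sigmoid, every class gets an independent probability, so
    # "no class above the threshold" can be read as an outlier ("Other")
    return torch.sigmoid(logits).max(dim=1).values < thresh
</code></pre>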
<p>Since I first started learning about data analytics, I've heard about Kaggle from many people and have come to admire them. If this competition ends successfully, I will become a competition master two years after starting Kaggle! Thanks everyone!</p>
<p>And Gunes Evitan's pyvips code was very helpful during the competition. Thank you.<br>
<a href="https://www.kaggle.com/code/gunesevitan/libvips-pyvips-installation-and-getting-started" target="_blank">https://www.kaggle.com/code/gunesevitan/libvips-pyvips-installation-and-getting-started</a></p>
<h3>Preprocessing</h3>
<p>After using the background provided by the competition, we applied the Otsu threshold. Then I cut the images into 512 x 512 tiles and saved them.</p>
<h3>Training</h3>
<ul>
<li>Model : VIT-s + TransMIL</li>
<li>Augmentation : VerticalFlip, HorizontalFlip, CLAHE, RandomGamma, GridDistortion, ShiftScaleRotate</li>
<li>Optimizer & learning rate: Since ViT-S was already pre-trained and the MIL head was prone to overfitting, ViT-S was trained with a learning rate of 1e-6 and the MIL head with a learning rate of 1e-5, with EMA applied to each. AdamW and CE were used.</li>
</ul>
<blockquote>
<p>optimizer = torch.optim.AdamW([{'params': model.image_extractor.parameters(),'lr':1e-6}, {'params': model.mil.parameters()}], lr=1e-5, weight_decay=1e-3)<br>
extractor_ema = ModelEma(model.image_extractor, decay=ema_decay, device=None, resume='')<br>
mil_ema = ModelEma(model.mil, decay=ema_decay, device=None, resume='')</p>
</blockquote>
<p>I experimented with two methods.</p>
<ol>
<li>Training only MIL: a weakly supervised method that extracts features from patch images using the ViT-S model and then trains only on those features.</li>
<li>Training the image encoder (ViT-S) together with MIL: we randomly selected 100 of the 512x512 patches per image for training and evaluation.</li>
</ol>
<p>Of the two, method 2 showed better pb score.</p>
<h3>Tried(helpful)</h3>
<ul>
<li>Pseudo-labeling 1536x1536: after pseudo-labeling the 1536x1536 images using the MIL model trained at 512x512, we trained a model for TMA prediction on images with a predicted probability of 0.5 or higher. Although it was not good on the public score, it reached 0.55 in private.</li>
<li>Outlier detect: Each class was learned using binary cross entropy. After applying sigmoid, if all class predictions were less than 0.5, it was predicted as 'Other'. It's not exact, but there was an increase of about 0.1.</li>
<li>Upscaling : It was better than applying weights to cross entropy.</li>
</ul>
<h3>Tried(but didn't help)</h3>
<ul>
<li>staintools: augmentation with staintools. But it didn't help much.</li>
<li>Other dataset (external): <a href="https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/" target="_blank">https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/</a>. Using this dataset, I trained a model with the class corresponding to 'UC' as Other, but it did not help at all.</li>
</ul>
<pre><code># class and method names restored from context; the ViT size argument was elided in the original post
class MILModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_extractor = ViT(size=...)   # ViT-S patch feature extractor (size value elided in the original)
        self.mil = TransMIL()

    def forward(self, x):
        x = self.image_extractor(x)   # per-patch features
        x = self.mil(x)               # slide-level prediction
        return x
</code></pre>
<p>This is a simple code for my model. </p>
<p>The backpropagation of the loss is transmitted to both ViT, which extracted the image, and MIL, which makes the final prediction.</p>
<p>However, it requires a lot of VRAM and there is an overfitting problem, so different learning rates and ema are essential.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 3rd Place Solution for the UBC-OCEAN UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Context</h2>
<p>Business context: <a href="https://www.kaggle.com/competitions/UBC-OCEAN" target="_blank">UBC Ovarian Cancer Subtype Classification and Outlier Detection</a></p>
<p>Data context: [The] challenge in this competition is to classify the type of ovarian cancer from microscopy scans of biopsy samples. <a href="https://www.kaggle.com/competitions/UBC-OCEAN/data" target="_blank">Link to data description</a></p>
<h2>Overview of the Approach</h2>
<ul>
<li><p>Finding <strong>more public external data was key</strong> for me. Overfitting was a big problem due to the small number of examples. Initially I hoped the <a href="https://github.com/mahmoodlab/CLAM" target="_blank">CLAM</a> or a multiple instance learning (MIL) approach could remedy that, because many images are so large they can be split into tens of thousands of tiles. But my models were still overfitting a lot. I assume the tiles of the same patient are similar in important ways and the models can use that as shortcuts that don't generalize well. Or there is just so much heterogeneity between cancers that the number of samples is simply not enough to capture all the different variants of the subtypes.</p></li>
<li><p>I used the <strong>segmentation data provided to create synthetic tumor micro array (TMA) images</strong>, basically just cropping tiny images from the segmented regions of the large image. I did this for the cancer tissue and generated some "Other" synthetic images by cropping small tiles that were marked as healthy or as stroma.</p></li>
<li><p>Following the paper "<a href="https://arxiv.org/pdf/2311.11772.pdf" target="_blank">A Good Feature Extractor Is All You Need for Weakly Supervised Learning in Histopathology</a>" I used the <strong>pretrained model Lunit-DINO to extract smaller-size features</strong> to handle the massive image size. I ran the feature extraction in 16-bit so that it runs faster. I didn't see much negative impact on feature quality.</p></li>
<li><p><strong>I filtered the tiles containing tissue using the thumbnails and then cropped the tissue tiles using PyVips</strong>. I lost so much time on this one, because I first tried to re-write the feature extraction code in CLAM, but couldn't make it work with the resource limits on Kaggle. Then I tried using the <a href="https://github.com/girder/large_image" target="_blank">large_image</a> library that is made for histopathology images, but could not make it work with the Kaggle resource limits. I alternated between out-of-memory, out-of-disk, and timeout errors. Finally I made it work using PyVips and asynchronous data loading in PyTorch. This part of the competition was frustrating, as I wanted to spend my time on deep learning but spent weeks on image processing and guessing at Kaggle errors.</p></li>
<li><p><strong>On the extracted features I trained the <a href="https://github.com/mahmoodlab/CLAM" target="_blank">CLAM</a> model</strong>, which is similar to MIL but calculates an attention matrix to weight the tiles. I made some changes to the instance-level loss function for the "Other" label, as it's semantically different from the other labels. A tile of a slide containing a cancer subtype could still have the label "Other" if the tile only shows healthy tissue.</p></li>
</ul>
<h2><strong>Details of the submission</strong></h2>
<h3>CLAM Model</h3>
<p>The following diagram shows the CLAM model from <a href="https://faisal.ai/" target="_blank">Mahmood Lab @ Harvard/BWH & MGH </a>[1]. This model takes as input concatenated features that have been extracted from all tiles of a Whole Slide Image containing tissue. </p>
<p>The top part of the diagram calculates the attention scores A, a vector with one entry per tile. The bottom part creates an A-weighted sum of the transformed input features h and feeds it into a multi-class classification head.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F408221%2Ff3052ba2667ca9ef1ae1459c52026861%2FScreenshot%202024-01-04%20at%2018.46.17.png?generation=1704390874024392&alt=media" alt="CLAM diagram"></p>
<p>Diagram from Paul Pham [2]</p>
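<p>As a readable companion to the adapted model below, here is a minimal sketch of the gated-attention pooling idea that CLAM builds on: per-tile attention scores A are computed, softmaxed over tiles, and used to form a weighted sum of the transformed tile features before classification. The dimensions are illustrative; this is not the adapted CLAM code, which follows in full.</p>
<pre><code>import torch.nn as nn
import torch.nn.functional as F

class GatedAttentionPooling(nn.Module):
    def __init__(self, in_dim=384, hid_dim=256, n_classes=6):
        super().__init__()
        self.attn_a = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh())
        self.attn_b = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.attn_c = nn.Linear(hid_dim, 1)
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, h):                        # h: (n_tiles, in_dim) features of one slide
        a = self.attn_a(h) * self.attn_b(h)      # gated attention
        A = F.softmax(self.attn_c(a).T, dim=1)   # (1, n_tiles) attention weights
        slide_feat = A @ h                       # (1, in_dim) attention-weighted sum
        return self.classifier(slide_feat), A
</code></pre>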
<p>PyTorch Code for my adapted CLAM model:</p>
<pre><code> (nn.Module):
():
(Attn_Net_Gated, self).__init__()
self.attention_a = [
nn.Linear(L, D),
nn.Tanh()
]
self.attention_b = [
nn.Linear(L, D),
nn.Sigmoid()
]
dropout > :
self.attention_a.append(nn.Dropout(dropout))
self.attention_b.append(nn.Dropout(dropout))
self.attention_a = nn.Sequential(*self.attention_a)
self.attention_b = nn.Sequential(*self.attention_b)
self.attention_c = nn.Linear(D, n_classes)
():
a = self.attention_a(x)
b = self.attention_b(x)
A = a.mul(b)
A = self.attention_c(A)
A, x
(nn.Module):
():
().__init__()
self.size_dict = {
: [feature_dim, , ],
: [feature_dim, , ],
: [feature_dim, , ],
: [feature_dim, , ]
}
size = self.size_dict[size_arg]
fc = [nn.Linear(size[], size[]), nn.ReLU()]
dropout > :
fc.append(nn.Dropout(dropout))
gate:
attention_net = Attn_Net_Gated(L = size[], D = size[], dropout = dropout, n_classes = )
:
attention_net = Attn_Net(L = size[], D = size[], dropout = dropout, n_classes = )
fc.append(attention_net)
self.attention_net = nn.Sequential(*fc)
self.classifiers = nn.Linear(size[], n_classes)
instance_classifiers = []
class_idx (n_classes):
layers = []
depth_idx (inst_class_depth-):
divisor = ** depth_idx
layers.append(nn.Linear(size[] // divisor, size[] // (divisor * )))
layers.append(nn.ReLU())
inst_dropout :
layers.append(nn.Dropout(inst_dropout))
layers.append(nn.Linear(size[] // **(inst_class_depth-), ))
instance_classifiers.append(nn.Sequential(*layers))
self.instance_classifiers = nn.ModuleList(instance_classifiers)
self.k_sample = k_sample
self.instance_loss_fn = instance_loss_fn
self.n_classes = n_classes
self.subtyping = subtyping
self.use_inst_predictions = use_inst_predictions
self.other_idx = label_mapping[]
self.class_weights = class_weights
initialize_weights(self)
self.to()
():
torch.full((length, ), , device=device).()
():
torch.full((length, ), , device=device).()
():
device=h.device
(A.shape) == :
A = A.view(, -)
is_tma:
k_sample = self.k_sample //
:
k_sample = self.k_sample
k_sample <= math.ceil(A.shape[] / ):
top_p_ids = torch.topk(A, k_sample)[][-]
:
top_p_ids = torch.topk(A, math.ceil(A.shape[] / ))[][-]
top_p_ids = top_p_ids.repeat(k_sample)[:k_sample]
top_p = torch.index_select(h, dim=, index=top_p_ids)
k_sample <= math.ceil(A.shape[] / ):
top_n_ids = torch.topk(-A, k_sample, dim=)[][-]
:
top_n_ids = torch.topk(-A, math.ceil(A.shape[] / ))[][-]
top_n_ids = top_n_ids.repeat(k_sample)[:k_sample]
top_n = torch.index_select(h, dim=, index=top_n_ids)
p_targets = self.create_positive_targets(k_sample, device)
n_targets = self.create_negative_targets(k_sample, device)
p_logits = classifier(top_p)
n_logits = classifier(top_n)
inst_preds = (p_logits.squeeze() > ).long()
p_loss = self.instance_loss_fn(p_logits.squeeze(), p_targets) * (self.n_classes -)
n_loss = self.instance_loss_fn(n_logits.squeeze(), n_targets)
is_tma is_other_class:
loss = p_loss + n_loss
: loss = p_loss
loss, inst_preds, p_targets, p_logits
():
device=h.device
(A.shape) == :
A = A.view(, -)
is_tma:
k_sample = self.k_sample //
:
k_sample = self.k_sample
k_sample <= math.ceil(A.shape[] / ):
top_ids = torch.topk(A, k_sample)[][-]
:
top_ids = torch.topk(A, math.ceil(A.shape[] / ))[][-]
top_ids = top_ids.repeat(k_sample)[:k_sample]
top_inst = torch.index_select(h, dim=, index=top_ids)
top_targets = self.create_negative_targets(k_sample, device)
logits = classifier(top_inst)
inst_preds = (logits.squeeze() > ).long()
instance_loss = self.instance_loss_fn(logits.squeeze(), top_targets)
instance_loss, inst_preds, top_targets, logits
():
A, h = self.attention_net(h)
A = torch.transpose(A, , )
attention_only:
A
A_raw = A
A = F.softmax(A, dim=)
M = torch.mm(A, h)
logits = self.classifiers(M)
bag_Y_prob = F.softmax(logits.squeeze(), dim=)
is_tma:
k_sample = self.k_sample //
:
k_sample = self.k_sample
all_inst_logits = []
top_p_ids =
bag_pred_weight < label :
total_inst_loss =
all_inst_preds = []
all_targets = []
i ((self.instance_classifiers)):
classifier = self.instance_classifiers[i]
i == label.item():
is_other_class = (label.item() == self.other_idx)
instance_loss, inst_preds, targets, inst_logits = self.inst_eval(A, h, classifier, is_tma, is_other_class)
all_inst_preds.extend(inst_preds.cpu().numpy())
all_targets.extend(targets.cpu().numpy())
all_inst_logits.append(inst_logits)
self.class_weights :
instance_loss *= self.class_weights[i]
:
self.subtyping:
instance_loss, inst_preds, targets, inst_logits = self.inst_eval_out(A, h, classifier, is_tma)
all_inst_preds.extend(inst_preds.cpu().numpy())
all_targets.extend(targets.cpu().numpy())
all_inst_logits.append(inst_logits)
:
total_inst_loss += instance_loss
self.subtyping:
total_inst_loss /= * (self.instance_classifiers)
:
self.k_sample <= math.ceil(A.shape[] / ):
top_p_ids = torch.topk(A, k_sample)[][-]
:
top_p_ids = torch.topk(A, math.ceil(A.shape[] / ))[][-]
top_p_ids = top_p_ids.repeat(k_sample)[:k_sample]
top_p = torch.index_select(h, dim=, index=top_p_ids)
classifier self.instance_classifiers:
class_logits = classifier(top_p)
all_inst_logits.append(class_logits)
self.use_inst_predictions:
all_inst_logits = torch.concatenate(all_inst_logits, axis=)
self.k_sample <= math.ceil(A.shape[] / ):
top_p_ids = torch.topk(A, k_sample)[][-]
:
top_p_ids = torch.topk(A, math.ceil(A.shape[] / ))[][-]
top_p_ids = top_p_ids.repeat(k_sample)[:k_sample]
all_inst_logits =A_raw[, top_p_ids].reshape(-, ) * all_inst_logits
softmax_inst_probs = torch.softmax(all_inst_logits, dim=)
agg_inst_probs = softmax_inst_probs
agg_inst_probs = torch.mean(agg_inst_probs, dim=)
Y_probs = bag_Y_prob * bag_pred_weight + agg_inst_probs * ( - bag_pred_weight)
Y_hat = torch.topk(Y_probs, , dim=)[]
results_dict = {}
bag_pred_weight < :
results_dict.update({
: all_inst_logits.detach().cpu().numpy(),
: agg_inst_probs.detach().cpu().numpy()
})
self.use_inst_predictions:
results_dict.update({
: softmax_inst_probs.detach().cpu().numpy()
})
label :
results_dict.update({
: np.array(all_targets),
: np.array(all_inst_preds).flatten(),
: total_inst_loss
})
logits, Y_probs, Y_hat, A_raw, results_dict
</code></pre>
<h3>Description of the Data Used</h3>
<p>From the <a href="https://www.cancerimagingarchive.net/" target="_blank">Cancer Imaging Archive</a> I used the <a href="https://doi.org/10.7937/TCIA.985G-EY35" target="_blank">Ovarian Bevacizumab Response</a> and the <a href="https://doi.org/10.7937/TCIA.ZS4A-JD58" target="_blank">CPTAC-OV</a> data. The labels didn't map perfectly to the labels of the competition. For example, a label would be Papillary Serous Carcinoma, which I assumed meant either HGSC or LGSC. I just used a model trained on my other data to decide which of the two to select.</p>
<p>I used the Ovarian <a href="https://www.medicalimageanalysis.com/data/ovarian-carcinomas-histopathology-dataset" target="_blank">Carcinoma Histopathology Dataset</a> by the <a href="https://www.medicalimageanalysis.com/home" target="_blank">Hamarneh Lab</a>.</p>
<p>I also used data from the <a href="https://tma.im/cgi-bin/home.pl" target="_blank">Stanford Tissue Microarray Database</a>, most of which didn't have H&E staining, but at least the cell shapes looked similar and I wanted to also have some TMA data.</p>
<p>I also got in touch with the <a href="https://chtn.cancer.gov" target="_blank">Cooperative Human Tissue Network</a>, who were so nice as to allow me to use the data that was publicly available on their website. I thank them for the time they took to discuss this topic, even though I didn't use their data in the end, as they took the data down from their website after my outreach. As the data wasn't available to other participants anymore, I assumed I wasn't allowed to use it to train my model.</p>
<h3>Validation Setup</h3>
<p>For a long time I pooled all my data and used 5-fold cross-validation, making sure multiple images from the same patient would all be in the same fold. Still, this led to inflated validation scores. It seems my models were overfitting by using shortcuts from my datasets that do not generalize. Later I excluded the data from the Hamarneh lab completely for validation, which led to much more reliable cross-validation scores.</p>
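<p>A minimal sketch of the patient-grouped split described above, using scikit-learn's StratifiedGroupKFold; the file and column names are assumptions.</p>
<pre><code>import pandas as pd
from sklearn.model_selection import StratifiedGroupKFold

df = pd.read_csv("train_meta.csv")        # assumed columns: image_id, label, patient_id
sgkf = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(
        sgkf.split(df, y=df["label"], groups=df["patient_id"])):
    # all images (and therefore all tiles) of a patient stay in the same fold,
    # so tiles of one patient never leak between train and validation
    df.loc[val_idx, "fold"] = fold
</code></pre>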
<h3>Technical Setup</h3>
<p>As the data size of the competition was so large, I trained my models locally on my desktop computer with an RTX 4090 card. Feature extraction took around 6 hours for all my datasets, and training my model took another hour.</p>
<h3>Data sources</h3>
<p>A big thank you to everyone who made their pathological image data publicly available. This is a tremendous help for anyone who wants to build deep learning models to improve digital pathology, but especially for individuals without access to close source clinical data.</p>
<p>Bevacizumab: Wang et al. <em>Weakly Supervised Deep Learning for Prediction of Treatment Effectiveness on Ovarian Cancer from Histopathology Images.</em> Computerized Medical Imaging and Graphics. <a href="https://doi.org/10.1016/j.compmedimag.2022.102093" target="_blank">https://doi.org/10.1016/j.compmedimag.2022.102093</a></p>
<p>CPTAC-OV: National Cancer Institute Clinical Proteomic Tumor Analysis Consortium (CPTAC). (2020). <strong>The Clinical Proteomic Tumor Analysis Consortium Ovarian Serous Cystadenocarcinoma Collection (CPTAC-OV)</strong> (Version 3) [Data set]. The Cancer Imaging Archive. <a href="https://doi.org/10.7937/TCIA.ZS4A-JD58" target="_blank">https://doi.org/10.7937/TCIA.ZS4A-JD58</a></p>
<p>Hamarneh Data: Köbel, Martin; Kalloger, Steve E.; Baker, Patricia M.; Ewanowich, Carol A.; Arseneau, Jocelyne; Zherebitskiy, Viktor; Abdulkarim, Soran; Leung, Samuel; Duggan, Máire A.; Fontaine, Dan; et al. (2010). "Diagnosis of ovarian carcinoma cell type is highly reproducible: a transcanadian study". <em>The American Journal of Surgical Pathology</em>, 34(7), 984–993. LWW.</p>
<p>I used some screenshots of healthy tissue from <a href="https://www.proteinatlas.org/learn/dictionary/normal/ovary" target="_blank">The Human Protein Atlas</a> and the <a href="https://histology.medicine.umich.edu/resources/female-reproductive-system#ovary-oviduct-suggested-readings" target="_blank">University of Michigan Histology and Virtual Microscopy</a> page. I opened their full-section image of a healthy ovary, zoomed in to the maximum, and took screenshots of many different-looking regions.</p>
<h3>Open Source Code</h3>
<p>A second thank you to everyone who made their code or their model weights openly available. This greatly improves innovation and allows individual contributors to stand on the shoulders of giants.</p>
<p><a href="https://github.com/mahmoodlab/CLAM" target="_blank">CLAM</a>: Lu, M.Y., Williamson, D.F.K., Chen, T.Y. et al. Data-efficient and weakly supervised computational pathology on whole-slide images. Nat Biomed Eng 5, 555–570 (2021). <a href="https://doi.org/10.1038/s41551-020-00682-w" target="_blank">https://doi.org/10.1038/s41551-020-00682-w</a></p>
<p><a href="https://libvips.github.io/pyvips" target="_blank">PyVips</a></p>
<p><a href="https://github.com/DigitalSlideArchive/HistomicsTK" target="_blank">HistomicsTK</a></p>
<p><a href="https://github.com/girder/large_image" target="_blank">large_image</a></p>
<h3>Research Papers</h3>
<p>Wölflein, Georg; Ferber, Dyke; Meneghetti, Asier Rabasco; El Nahhas, Omar S. M.; Truhn, Daniel; Carrero, Zunamys I.; Harrison, David J.; Arandjelović, Ognjen; Kather, Jakob N. (2023). "A Good Feature Extractor Is All You Need for Weakly Supervised Learning in Histopathology". arXiv:2311.11772.</p>
<p><strong>Thanks for your interest in my solution; you can find me on <a href="https://twitter.com/swanint" target="_blank">Twitter here</a>.</strong></p>
<p>Yes, the resource limits were quite a struggle for me too. For the feature extraction step I'm using a PyTorch DataLoader with num_workers = 4 and do the image loading and cropping of tiles with PyVips inside the DataLoader. This way the tiles get prepared asynchronously on multiple threads while the feature extractor is doing its work. This approach is more memory hungry, but PyVips is very frugal with memory if you load the images in "sequential" mode. Sequential mode only allows you to read the image top to bottom, so I always load all tiles in one row at once.</p>
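<p>A minimal sketch of that asynchronous tile loading, simplified to random-access reads instead of the row-wise sequential reading described above; the coordinates are assumed to be precomputed from the thumbnail, and the file path is a placeholder.</p>
<pre><code>import pyvips
import torch
from torch.utils.data import Dataset, DataLoader

class TileDataset(Dataset):
    def __init__(self, wsi_path, coords, tile_size=224):
        self.wsi_path, self.coords, self.tile_size = wsi_path, coords, tile_size

    def __len__(self):
        return len(self.coords)

    def __getitem__(self, idx):
        x, y = self.coords[idx]
        wsi = pyvips.Image.new_from_file(self.wsi_path)                    # opened inside the worker
        tile = wsi.crop(x, y, self.tile_size, self.tile_size).numpy()[..., :3]
        return torch.from_numpy(tile).permute(2, 0, 1).float() / 255.0

# tiles are prepared asynchronously on 4 workers while the GPU extracts features
loader = DataLoader(TileDataset("image.png", coords=[(0, 0), (224, 0)]),
                    batch_size=64, num_workers=4)
</code></pre>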
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 3th on Public and 6th on Private, A Very Simple Solution: Big Pretrained-Model is All You Need! | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, we selected the patches from the original scale rather than the thumbnail image.</p>
<p>Except for the pre-trained weights, we didn't use any extra data.😉</p>
<p>The Chowder MIL model we used is totally from the public implementation in <a href="https://github.com/owkin/HistoSSLscaling/blob/main/rl_benchmarks/models/slide_models/chowder.py" target="_blank">this repo</a>. What we do is model ensembling through adopting different training loss and multi-folds cross-validation.</p>
<p>I think about 30 seconds per WSI.</p>
<p>haha! XDU YYDS!</p>
<p>Hello, everyone, I'm here to share our solution! We only used the simplest pre-trained weights from iBOT-ViT-Base. Thanks a lot for this excellent work! Here is the project: <a href="https://github.com/owkin/HistoSSLscaling" target="_blank">iBOT-ViT</a>. Specifically, our algorithm consists of five steps:<br>
1、Tiling the WSI image (or TMA image): we randomly select 1000 patches (tiles) per image; if there are not enough, we duplicate them;<br>
2、Using the pre-trained model to extract features, dimension: 1000x768 per image;<br>
3、Training a MIL (Multi-Instance Learning) model, we use the recommended chowder model <a href="https://arxiv.org/pdf/1802.02212.pdf" target="_blank">Chowder</a> mentioned in the above iBOT-ViT method;<br>
4、Model ensemble, (we use 7 different trained chowder models) and use the average entropy ( E=-sum(p*logp) ) for detecting "other";<br>
5、 Adjust the threshold of "other";</p>
<p>Some tips:<br>
1、We found that patch selection has an important impact on performance. Still, we just use the simplest random selection. The recent work <a href="https://rhazeslab.github.io/PathDino-Page/" target="_blank">PathDINO</a> proposed a fast patch-selection method, but we didn't get any improvement from it.<br>
2、Deep ensembling and uncertainty estimation through entropy helped us go from 0.59 to 0.65 on the public data, but it doesn't seem to have earned me a bonus😔<br>
3、Here is our source code: <a href="https://github.com/yangzhou321/UBC_Challenge/blob/main/ubc_ours.ipynb" target="_blank">UBC_Challenge</a></p>
<p>Any idea or discussion is highly welcomed!</p>
<p>Yes! We take the softmax scores of each model's prediction and average them to get the average prediction <em>p</em> (dimension Bx5). Then, calculating the entropy as <em>e = -sum(p * log p)</em>, we get the uncertainty score <em>e</em>. Finally, we adjust the threshold between the normal classes and "other".</p>
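<p>A minimal sketch of the deep-ensemble averaging plus entropy thresholding described above; the threshold value is a placeholder.</p>
<pre><code>import numpy as np

def predict_with_other(model_probs, labels, entropy_thresh):
    """model_probs: list of per-model softmax vectors for one image, each of shape (5,)."""
    p = np.mean(model_probs, axis=0)              # average ensemble probabilities
    entropy = -np.sum(p * np.log(p + 1e-9))       # E = -sum(p * log p); high means uncertain
    if entropy > entropy_thresh:
        return "Other"
    return labels[int(np.argmax(p))]
</code></pre>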
<p>You mean resize? Actually no. In both TMA and WSI, we select patches (size: 256x256) from the original image (not the thumbnails). That is, we read the image in the "test images" folder, whether it belongs to TMA or WSI, and select patches from it. </p>
<p>Actually, 256x256 per patch.</p>
<p>We tried to use LoRA to fine-tune the pre-trained model, but the score didn't change much. So we chose to just use the pre-trained model to extract features for better generalization.</p>
<p>I think patch size is not very important; what matters most is the content inside. The focus should be on the selection of patches, but we haven't found a better solution for it. From the extra mask data, we found that the tumor area seemed to take up a significant proportion of the whole image, so we selected patches randomly. In a real scenario, it would be best to segment the tumor area first.</p>
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 4th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>TMA</h2>
<p>TMA images are centre cropped (eg. 3000 -> 2500) and resized to 768x768 pixels.</p>
<h2>WSI</h2>
<p>A single segmentation model is used for tile selection. Segmentation is trained on thumbnail images and supplemental mask data. Tiles from thumbnails are used for mask generation. The location of the pixel with the highest probability of being cancerous is selected on the WSI, and a region of 1536x1536 pixels around it is cropped and resized to 768x768 pixels, which is then used to predict scores.</p>
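<p>A minimal sketch of that step: take the most-cancerous pixel of the thumbnail heatmap, rescale its location to WSI coordinates, and crop a 1536x1536 region around it. The helper name and pyvips usage are illustrative.</p>
<pre><code>import numpy as np
import pyvips

def crop_most_cancerous_region(heatmap: np.ndarray, wsi_path: str, crop: int = 1536):
    """heatmap: per-pixel cancer probability predicted on the thumbnail."""
    wsi = pyvips.Image.new_from_file(wsi_path)
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # rescale thumbnail coordinates to full-resolution WSI coordinates
    cx = int(x * wsi.width / heatmap.shape[1])
    cy = int(y * wsi.height / heatmap.shape[0])
    left = min(max(cx - crop // 2, 0), wsi.width - crop)
    top = min(max(cy - crop // 2, 0), wsi.height - crop)
    region = wsi.crop(left, top, crop, crop).numpy()[..., :3]
    return region        # later resized to 768x768 before classification
</code></pre>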
<h2>Models</h2>
<ul>
<li>A total of 16 models based on Convnext, Hornet, Efficientnetv1, Efficientnetv2 are used. Instead of softmax,<br>
sigmoid activation is used for predicting scores.</li>
<li>Models are trained on 5 labels + non-cancerous label (using supplemental mask data).</li>
<li>Loss used: Binary crossentropy.</li>
<li>Augmentations used: Stain augmentation, scaling, rotation, flipud, fliplr, random contrast, random brightness, and random hue (thought it might work as stain augmentation).</li>
<li>Median averaging is used for generating score.</li>
<li>A single classifier model along with the segmentation model gives a score of 0.49, 0.55 for public and private leaderboard respectively. </li>
<li>Models are divided among the two gpus (T4 x 2) for memory efficiency.</li>
</ul>
<h2>External Data</h2>
<p>No external data was used.</p>
<h2>Outliers</h2>
<p>Prediction with low scores (< 0.05) can be labelled as <em>Others</em>. Another method is to predict bottom 5 or 10 percentile scores as <em>Others</em>.</p>
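<p>A minimal sketch of those two rules, assuming a vector holding the maximum (median-averaged) score for each test image.</p>
<pre><code>import numpy as np

scores = np.array([0.91, 0.03, 0.44, 0.72])      # illustrative max score per image

is_other_fixed = scores < 0.05                   # rule 1: fixed low-score threshold
is_other_pct = scores < np.percentile(scores, 5) # rule 2: bottom percentile of the test distribution
</code></pre>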
<h2>Code</h2>
<p><a href="https://www.kaggle.com/code/mmelahi/ubc-ocean-final-inference/notebook" target="_blank">Submission notebook</a><br>
<a href="https://www.kaggle.com/mmelahi/ubc-ocean-final-single-model-inference" target="_blank">Submission notebook - single model</a><br>
<a href="https://github.com/ManzoorElahi/UBC-Ovarian-Cancer-Subtype-Classification-and-Outlier-Detection" target="_blank">Github</a></p>
<h2>Problematic thumbnail images</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2168426%2Fa993388b8035435ca1ec54ca830059fd%2F5251_thumbnail.png?generation=1704476177150516&alt=media" alt=""><br>
<strong>5251_thumbnail.png</strong></p>
<p>When two or more slices are added side by side as in the above image, the height of the thumbnail becomes smaller, making it harder for the model to predict accurately. For such thumbnails, new thumbnails using WSI are generated - this boosted my score significantly. </p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 50th Place Solution (Simple Solution) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am happy to achieve my first solo silver competition medal. It was a very good experience to deal with a medical problem like this. </p>
<h2>My Solution</h2>
<p>I utilized the public solution provided <a href="https://www.kaggle.com/code/jirkaborovec/cancer-subtype-tiles-masks-w-lightning-timm?scriptVersionId=153762364" target="_blank">here</a> with two modifications : <br>
1- In training, I used Maxvit Base (achieved 0.48 in both public and private testing)<br>
2- In Inference, I applied a Softmax threshold technique such that : </p>
<pre><code># the probability threshold and the row key were elided in the original post;
# THRESH is a placeholder and "Other" is the outlier label described in the text
if max_mean_prob < THRESH:
    row["label"] = "Other"
else:
    lb = np.argmax(mean_probs)
    row["label"] = labels[lb]
</code></pre>
<p>The inference notebook is <a href="https://www.kaggle.com/code/mohammedessam97/top50-v3-cancer-subtype-inference?scriptVersionId=154793672" target="_blank">here</a></p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 5th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you!</p>
<ol>
<li>Stain normalization: <a href="https://github.com/EIDOSLAB/torchstain" target="_blank">https://github.com/EIDOSLAB/torchstain</a></li>
<li>We only use the tumor tiles provided by the tumor masks from Camelyon as 'Other'</li>
</ol>
<p>First of all, I would like to express my gratitude to the Kaggle staff for hosting this fantastic competition, as well as to the organizers at UBC. I also want to extend my appreciation to all the hardworking participants who dedicated themselves to this competition. Special thanks go to my teammates <a href="https://www.kaggle.com/kapenon" target="_blank">@kapenon</a> and <a href="https://www.kaggle.com/tascj0" target="_blank">@tascj0</a> for their tireless efforts and insightful discussions.</p>
<h1>Solution Overview</h1>
<h2>Summary of submitted final solution:</h2>
<ol>
<li>Inference contains two stages:<ol>
<li>Tile selection model -> Only WSI</li>
<li>Classification model -> WSI and TMA</li></ol></li>
</ol>
<h2>Training details of segmentation model (mainly WSI, TMA is simply center-cropped):</h2>
<ol>
<li>Tile classification helper model (Not used for inference)<ol>
<li>Random cropped tiles at 1536x1536, background excluded</li>
<li>Augmentations: Random horizontal and vertical flips, RandomRotation, RandAugment, RandomGrayscale, RandomErasing</li>
<li>WSI label as tile label</li>
<li>ConvNeXt-base</li>
<li>6-class classification (Hubmap external data as “Other”)</li></ol></li>
<li>Segmentation helper model (Not used for inference)<ol>
<li>Use host provided mask</li>
<li>Tumor binary classification</li>
<li>2x magnification</li>
<li>SEResNeXt101 UNet</li></ol></li>
<li>Tile selection segmentation model (Used for inference)<ol>
<li>Label generation<ol>
<li>Inference all tiles using 1st step models and save predicted probabilities of WSI class</li>
<li>Inference all WSIs using 2nd step models</li>
<li>Create a heatmap at 2x magnification</li>
<li>Heatmap GT is 0.5 classification confidence + 0.5 tumor confidence</li></ol></li>
<li>Training<ol>
<li>SEResNeXt101 UNet</li></ol></li></ol></li>
</ol>
<p>Generated heatmap example:<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2Fcd379f02a28d0f794dedc644678c0d7f%2F61823.png?generation=1704564181303725&alt=media" alt="heatmaplabel"><br>
Inference example of segmentation model:<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F2369671%2F6e02f20e8a358c0af557f97e2af934fa%2F61823compressed.jpeg?generation=1704564312351461&alt=media"></p>
<h2>Training details of tile classification model (Used for inference)</h2>
<ol>
<li>Random cropped tiles at 1536x1536, background excluded</li>
<li>Augmentations: Random horizontal and vertical flips, RandomRotation, RandAugment, RandomGrayscale, RandomErasing + StainNorm</li>
<li>WSI label as tile label</li>
<li>ConvNeXt-base, ConvNeXt-large, EVA (at 448x448)</li>
<li>6-class classification (Hubmap, Camelyon16, Camelyon17, etc. tiles as “Other”)</li>
<li>Data mining:<ol>
<li>Train 1st round</li>
<li>Predict all foreground tiles<ol>
<li>Confidence <0.3 tiles are pseudo labeled “Other” in 2nd round</li>
<li>Confidence 0.3-0.6 tiles are ignored</li>
<li>Confidence >0.6 tiles are pseudo labeled as WSI label</li></ol></li>
<li>Train 2nd round</li></ol></li>
</ol>
<h2>Inference details:</h2>
<h3>WSI</h3>
<ol>
<li>Tile selection<ol>
<li>Predict heatmap at 2x magnification, top-5 confidence tiles are selected for classification</li></ol></li>
<li>Classification<ol>
<li>Predict the 5 selected tiles and average the predictions as the WSI prediction</li>
<li>StainNorm as TTA</li></ol></li>
<li>There is some special handling of large WSI images; refer to the submission notebook</li>
</ol>
<h3>TMA</h3>
<ol>
<li>Center crop 1 3072x3072 tile</li>
<li>Resize to 1536x1536</li>
<li>Predict using the classification model</li>
<li>StainNorm as TTA</li>
</ol>
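<p>A minimal sketch of the TMA path above (center crop 3072, resize to 1536, average the plain and stain-normalized predictions); the model and stain-normalization interfaces are assumptions.</p>
<pre><code>import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_tma(image: torch.Tensor, model, stain_norm, crop: int = 3072, size: int = 1536):
    """image: (3, H, W) float tensor of the full TMA."""
    _, h, w = image.shape
    top, left = max((h - crop) // 2, 0), max((w - crop) // 2, 0)
    tile = image[:, top:top + crop, left:left + crop]
    tile = F.interpolate(tile[None], size=(size, size), mode="bilinear", align_corners=False)
    probs = model(tile).softmax(dim=1)
    probs = probs + model(stain_norm(tile)).softmax(dim=1)   # stain normalization as TTA
    return (probs / 2).squeeze(0)
</code></pre>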
<h2>Used External Datasets and their license:</h2>
<ol>
<li>Hubmap: HuBMAP + HPA - Hacking the Human Body (<a href="https://www.kaggle.com/competitions/hubmap-organ-segmentation/overview" target="_blank">https://www.kaggle.com/competitions/hubmap-organ-segmentation/overview</a>)</li>
<li>Camelyon16, 17: CC0 (<a href="https://camelyon17.grand-challenge.org/Data/" target="_blank">https://camelyon17.grand-challenge.org/Data/</a>)</li>
<li>ovarian-bevacizumab-response (only for WSI labeled as 'Other'): CC BY 4.0 (<a href="https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/" target="_blank">https://www.cancerimagingarchive.net/collection/ovarian-bevacizumab-response/</a>)</li>
</ol>
<h2>What worked</h2>
<ol>
<li>Stain normalization.</li>
<li>Data mining (pseudo labeling).</li>
<li>Segmentation model for identifying valuable patches.</li>
<li>Using external datasets as ‘Other’.</li>
<li>ConvNext and EVA02 models.</li>
<li>Multiscale ensemble.</li>
</ol>
<h2>Codes</h2>
<ol>
<li>Train: <a href="https://github.com/ShuzhiLiu/UBC-OCEAN_5th_solution" target="_blank">https://github.com/ShuzhiLiu/UBC-OCEAN_5th_solution</a></li>
<li>Submission Notebook: <a href="https://www.kaggle.com/code/liushuzhi/5thplacesolutionsubnotebook?scriptVersionId=158180788" target="_blank">https://www.kaggle.com/code/liushuzhi/5thplacesolutionsubnotebook?scriptVersionId=158180788</a><ol>
<li>Without EVA02: Public=0.61, Private=0.61 -> Selected for final sub</li>
<li>With EVA02: Public=0.60, Private=0.63 -> Not selected due to the lack of local CV</li></ol></li>
</ol>
<h2>Acknowledgement</h2>
<p>We would like to express our gratitude to the Kaggle support system and the emotional support of Rist inc.</p>
<p>Thank you! I forgot to set it to public, it should be accessible now.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Training code has been released to <a href="https://github.com/m1dsolo/UBC-OCEAN-7th" target="_blank">github</a>.</p>
<p>I will start cleaning up the training code tomorrow, I hope it can be helpful to everyone.</p>
<p>Thanks to kaggle and UBC for hosting this interesting competition and congrats to all the winners for their hard work! I would also like to thank my teammates and everyone in the discussion forum for their help!</p>
<h1>Method</h1>
<h2>Summary</h2>
<p>Our final solution is based on multiple instance learning (MIL) for <strong>ovarian cancer subtype classification</strong> and uses <code>sigmoid</code> outputs with thresholding for <strong>outlier detection</strong>.<br>
We did not use the mask annotations or additional datasets in the final submission.</p>
<h3>1. preprocess</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15788664%2F16e938db8ccddff048fef4f4b9f306a3%2FUBC-OCEAN-1.jpeg?generation=1704447344840948&alt=media"></p>
<p>1. Use <code>pyvips</code> to speed up png reading speed. (Thanks for <a href="https://www.kaggle.com/code/gunesevitan/libvips-pyvips-installation-and-getting-started" target="_blank">GUNES EVITAN's pyvips notebook</a>.)</p>
<pre><code># the access mode and the TMA size threshold were elided in the original post
image = pyvips.Image.new_from_file(image_id, access=ACCESS_MODE).numpy()
is_tma = image.shape[0] <= TMA_MAX_SIZE and image.shape[1] <= TMA_MAX_SIZE
</code></pre>
<p>2. Downsample WSI and TMA from x20 and x40 to x10 respectively. (Maybe x20 results will be better, but I can't submit due to resource constraints.)</p>
<pre><code>if is_tma:
    # TMA is scanned at x40, so downsample by 4 to reach x10 (factor taken from the text above)
    resize = A.Resize(image.shape[0] // 4, image.shape[1] // 4)
else:
    # WSI is scanned at x20, so downsample by 2 to reach x10
    resize = A.Resize(image.shape[0] // 2, image.shape[1] // 2)
image = resize(image=image)["image"]
</code></pre>
<p>3. Deduplicate the identical tissue areas for WSI. (I'm not sure if this contributed to the results, but it saved me a lot of local memory.)</p>
<pre><code>def rgb2gray(image):
    image = image.astype(np.float16)
    # the channel weights and divisor were elided in the original post
    image = (image[..., 0] * W_R + image[..., 1] * W_G + image[..., 2] * W_B) / W_SUM
    return image.astype(np.uint8)

if not is_tma:  # the author confirms below that this branch is `if not is_tma`
    # find the largest tissue component on a small thumbnail, then crop the matching
    # region from the full-resolution image (DOWN and GRAY_THRESH were elided in the original)
    resize = A.Resize(image.shape[0] // DOWN, image.shape[1] // DOWN)
    thumbnail = resize(image=image)["image"].astype(np.float16)
    mask = rgb2gray(thumbnail) > GRAY_THRESH
    x0, y0, x1, y1 = get_biggest_component_box(mask)
    scale_h = image.shape[0] / thumbnail.shape[0]
    scale_w = image.shape[1] / thumbnail.shape[1]
    x0 = max(0, math.floor(x0 * scale_w))
    y0 = max(0, math.floor(y0 * scale_h))
    x1 = min(image.shape[1] - 1, math.ceil(x1 * scale_w))
    y1 = min(image.shape[0] - 1, math.ceil(y1 * scale_h))
    image = image[y0: y1 + 1, x0: x1 + 1]
</code></pre>
<p>4. Use the non-overlapping sliding window method to tile the tissue area into 256x256 patches. (For TMA I used overlap, but not sure if that would have an impact on the results.)</p>
<pre><code>def image2patches(image, patch_size, step, ratio, transform, is_tma):
    patches = []
    for i in range(0, image.shape[0], step):
        for j in range(0, image.shape[1], step):
            patch = image[i: i + patch_size, j: j + patch_size, :]
            if patch.shape != (patch_size, patch_size, 3):
                patch = np.pad(patch, ((0, patch_size - patch.shape[0]), (0, patch_size - patch.shape[1]), (0, 0)))
            if is_tma:
                patch = transform(image=patch)["image"]
                patches.append(patch)
            else:
                # keep a WSI patch only if enough of it looks like tissue
                patch_gray = rgb2gray(patch)
                patch_binary = (patch_gray <= GRAY_HIGH) & (patch_gray > GRAY_LOW)  # gray thresholds elided in the original post
                if np.count_nonzero(patch_binary) / patch_binary.size >= ratio:
                    patch = transform(image=patch)["image"]
                    patches.append(patch)
    if len(patches) != 0:
        patches = torch.stack(patches, dim=0)
    else:
        patches = torch.zeros(0, dtype=torch.uint8)
    return patches

# patch size is 256 (stated above); the step values and tissue ratio were elided in the original post
patches = image2patches(image, 256, [STEP_WSI, STEP_TMA][is_tma], RATIO, transform, is_tma)
</code></pre>
<h3>2. Subtype classification</h3>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F15788664%2Ff7021178fffcbd920a2bbf69c47f4cf2%2FUBC-OCEAN-2.jpeg?generation=1704447379688941&alt=media"></p>
<p>Our cancer subtype classification method is mainly based on multiple instance learning (MIL).<br>
After trying various backbone and MIL methods, <code>CTransPath</code> and <code>LunitDINO</code> were finally selected as the backbone, <code>DSMIL</code> and <code>Perceiver</code> were selected as the MIL classifier. For their specific information, please refer to:</p>
<ol>
<li><a href="https://github.com/Xiyue-Wang/TransPath" target="_blank">CTransPath, MIA2022</a></li>
<li><a href="https://github.com/lunit-io/benchmark-ssl-pathology" target="_blank">LunitDINO, CVPR2023</a></li>
<li><a href="https://github.com/binli123/dsmil-wsi" target="_blank">DSMIL, CVPR2021</a></li>
<li><a href="https://github.com/cgtuebingen/DualQueryMIL" target="_blank">Perceiver, BMVA2023</a></li>
</ol>
<p>Local CV results:</p>
<table>
<thead>
<tr>
<th>exp</th>
<th>CC</th>
<th>EC</th>
<th>HGSC</th>
<th>LGSC</th>
<th>MC</th>
<th>mean</th>
</tr>
</thead>
<tbody>
<tr>
<td>CTransPath + DSMIL</td>
<td>0.9300</td>
<td>0.7657</td>
<td>0.8909</td>
<td>0.7822</td>
<td>0.7911</td>
<td>0.8320</td>
</tr>
<tr>
<td>CTransPath + Perceiver</td>
<td>0.9695</td>
<td>0.8147</td>
<td>0.8818</td>
<td>0.8044</td>
<td>0.9156</td>
<td>0.8772</td>
</tr>
<tr>
<td>LunitDINO + DSMIL</td>
<td>0.9400</td>
<td>0.7240</td>
<td>0.8864</td>
<td>0.8244</td>
<td>0.9356</td>
<td>0.8621</td>
</tr>
<tr>
<td>LunitDINO + Perceiver</td>
<td>0.9300</td>
<td>0.7983</td>
<td>0.8591</td>
<td>0.8711</td>
<td>0.8933</td>
<td>0.8704</td>
</tr>
</tbody>
</table>
<p>Leaderboard results:</p>
<table>
<thead>
<tr>
<th>exp</th>
<th>public</th>
<th>private</th>
</tr>
</thead>
<tbody>
<tr>
<td>CTransPath + LunitDINO + DSMIL</td>
<td>0.57</td>
<td>0.54</td>
</tr>
<tr>
<td>CTransPath + LunitDINO + Perceiver</td>
<td>0.58</td>
<td>0.57</td>
</tr>
<tr>
<td>CTransPath + LunitDINO + DSMIL + Perceiver</td>
<td>0.6</td>
<td>0.58</td>
</tr>
</tbody>
</table>
<p>I almost didn't adjust the MIL hyperparameters because I found that high CV score tended to be low public score.</p>
<ol>
<li>For <code>DSMIL</code>, we use <code>nn.CrossEntropyLoss</code> as loss function.</li>
<li>For <code>Perceiver</code>, we use <code>nn.BCEWithLogitsLoss</code> as loss function and use <code>mixup</code>, <code>label smoothing</code> to alleviate overfitting.</li>
</ol>
<h3>3. Outlier detection</h3>
<p>We tried many methods, two of which can get a private score of 0.6. (Private score 0.58 if not use outlier detection.)</p>
<h4>1. BCE + Thresholding</h4>
<p>Score: public 0.6 and private 0.6.</p>
<p>This method is very simple. Use <code>nn.BCEWithLogitsLoss</code> as the loss function to train the model, and then for the maximum prediction probability, if it is less than 0.4, it is considered an outlier.</p>
<pre><code>logits = self.model(x)
probs = F.sigmoid(logits)
pred = probs.argmax(dim=-1).item()
if probs.max() < PROB_THRESH:  # PROB_THRESH = 0.4
    pred = OTHER_LABEL  # placeholder for the "Other" class label/index
</code></pre>
<h4>2. Probability entropy</h4>
<p>Score: public 0.54 and private 0.6.</p>
<p>This method is also very simple. Compared to setting a probability threshold, this method detects outliers by calculating the entropy of the probability.</p>
<pre><code>logits = self.model(x)
probs = F.sigmoid(logits)
pred = probs.argmax(dim=-1).item()
entropy = (probs * torch.log2(probs)).mean(dim=-1)
if entropy > ENTROPY_THRESH:
    pred = OTHER_LABEL  # placeholder for the "Other" class label/index
</code></pre>
<h1>Summary</h1>
<h2>which didn't work</h2>
<ol>
<li>Extra dataset: ATEC, PTRC-HGSOC, CPTAC-OV, TCGA-OV, Bevacizumab.</li>
<li>End-to-end finetune the backbone and MIL together by selecting cancer areas through attention or mask.</li>
<li>Select only patches in cancer areas for MIL.</li>
<li>Detect outliers based on patch prediction probability entropy. (<a href="https://www.sciencedirect.com/science/article/pii/S1361841522002833" target="_blank">MIA2023</a>)</li>
<li>Detect outliers based on KNN classifier. (<a href="https://arxiv.org/abs/2309.05528" target="_blank">Arxiv2023</a>)</li>
</ol>
<h1>Supplementary</h1>
<p>All pytorch codes(include submission notebook) are built based on <a href="https://github.com/m1dsolo/yangdl" target="_blank">a simple pytorch-based deep learning framework</a>.<br>
This framework has only a few hundred lines of code and I think it is very suitable for beginners to learn.</p>
<ol>
<li><a href="https://www.kaggle.com/code/m1dsolo/ubc-ocean-7th-submission" target="_blank">submission notebook</a></li>
<li><a href="https://github.com/m1dsolo/UBC-OCEAN-7th" target="_blank">Training code</a></li>
</ol>
<p>Yes, it is <code>if not is_tma</code>, I made a mistake when organizing the code. I've corrected it. The purpose of <code>thumbnail</code> is only to speed up the process of finding the bounding box <code>[x0, y0, x1, y1]</code>, the bounding box will be scaled in <code>x0 = max(0, math.floor(x0 * scale_w)) ...</code>. Selecting ROI through bounding boxes can achieve the purpose of deduplication.</p>
<p>Thanks! Happy new year to you too!</p>
<p>The <code>Ctranspath</code> weight I used is the <a href="https://drive.google.com/file/d/1DoDx_70_TLj98gTf6YTXnu4tFhsFocDX/view" target="_blank">official weight</a>. I only changed the file name.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | 9th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, the 25 TMAs were not in the training data.<br>
In testing, I could correctly predict 21-23 out of 25.<br>
In all 25 TMA cases, at least one of my three models predicted correctly, but I couldn't ensemble them effectively.</p>
<h3>Only use Competition Data, no External Data</h3>
<h2><strong>Split WSI and TMA:</strong></h2>
<p>WSI images have black pixels (all zeros in all three channels), while TMA images do not. Therefore, if both the image width and height are less than 6000, but the area of black pixels is greater than 5% of the image (all WSI images in the training data have more than 10% black pixels), it is classified as WSI; otherwise, it is classified as TMA.</p>
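<p>For illustration, a minimal sketch of this split rule (my own reading of it; the function name is mine and the 6000 / 5% values simply restate the description above):</p>
<pre><code>import numpy as np

def is_wsi(image: np.ndarray) -> bool:
    # Large images are WSIs; small images are WSIs only if they contain a
    # noticeable amount of pure-black padding (TMAs have no black pixels).
    h, w = image.shape[:2]
    if h >= 6000 or w >= 6000:
        return True
    black_ratio = np.mean(np.all(image == 0, axis=2))
    return black_ratio > 0.05
</code></pre>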
<h2><strong>Make tile :</strong></h2>
<p>First, downscale the WSI to 0.33x its original size, and then divide it into 512*512 tiles. Subsequently, categorize these tiles into three levels based on the amount of "bad" (low-contrast) pixels, identified by the condition "np.sum(np.ptp(tile, axis=2) < 20)".<br>
Inference tiling code:</p>
<pre><code># NOTE: file paths, output names and the exact tile-count limits were lost when this post
# was formatted; those values below are placeholders, the logic follows the description above.
def make_tile(image_id, scale=0.33, tile_size=512, max_tiles=50):
    path = f"{IMG_DIR}/{image_id}.png"
    p_mask = f"{MASK_DIR}/{image_id}.npy"
    image = cv2.imread(path)
    image = cv2.resize(image, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    mask = np.load(p_mask)  # tissue mask (not used in this snippet)
    os.makedirs(f"{OUT_DIR}/{image_id}", exist_ok=True)
    count = 0
    if count < max_tiles:
        # pass 1: tiles with at most 50% "bad" (low-contrast) pixels
        idxs = [(y, x) for y in range(0, image.shape[0] // tile_size) for x in range(0, image.shape[1] // tile_size)]
        random.shuffle(idxs)
        for k, (y, x) in enumerate(idxs):
            tile = image[y * tile_size:(y + 1) * tile_size, x * tile_size:(x + 1) * tile_size, :]
            bg_count = np.sum(np.ptp(tile, axis=2) < 20)
            if (bg_count / (tile_size * tile_size)) <= 0.5:
                cv2.imwrite(f"{OUT_DIR}/{image_id}/{k}_1.png", tile)
                count += 1
                if count >= max_tiles:
                    break
    if count < max_tiles:
        # pass 2: allow tiles with 50%-65% bad pixels
        idxs = [(y, x) for y in range(0, image.shape[0] // tile_size) for x in range(0, image.shape[1] // tile_size)]
        random.shuffle(idxs)
        for k, (y, x) in enumerate(idxs):
            tile = image[y * tile_size:(y + 1) * tile_size, x * tile_size:(x + 1) * tile_size, :]
            bg_count = np.sum(np.ptp(tile, axis=2) < 20)
            if ((bg_count / (tile_size * tile_size)) <= 0.65) & ((bg_count / (tile_size * tile_size)) > 0.5):
                cv2.imwrite(f"{OUT_DIR}/{image_id}/{k}_2.png", tile)
                count += 1
                if count >= max_tiles:
                    break
    if count < max_tiles:
        # pass 3: allow tiles with 65%-75% bad pixels
        idxs = [(y, x) for y in range(0, image.shape[0] // tile_size) for x in range(0, image.shape[1] // tile_size)]
        random.shuffle(idxs)
        for k, (y, x) in enumerate(idxs):
            tile = image[y * tile_size:(y + 1) * tile_size, x * tile_size:(x + 1) * tile_size, :]
            bg_count = np.sum(np.ptp(tile, axis=2) < 20)
            if ((bg_count / (tile_size * tile_size)) <= 0.75) & ((bg_count / (tile_size * tile_size)) > 0.65):
                cv2.imwrite(f"{OUT_DIR}/{image_id}/{k}_3.png", tile)
                count += 1
                if count >= max_tiles:
                    break
</code></pre>
<h4>Training tile:</h4>
<p><strong>Step 1:</strong> Use all tiles with bg_count/area less than 0.5.<br>
<strong>Step 2:</strong> If a WSI has fewer than 50 tiles, add tiles with bg_count/area between 0.5 and 0.65 until there are 50 tiles.<br>
<strong>Step 3:</strong> If a WSI still has fewer than 20 tiles, add tiles with bg_count/area between 0.65 and 0.75 until there are 20 tiles.</p>
<h2>Model Training:</h2>
<p>Only use WSI tiles. Randomly select 6 tiles from each image for training in every batch.<br>
Loss Function: Binary Cross-Entropy (BCE)<br>
<strong>Step 1:</strong> Normal Training<br>
<strong>Step 2:</strong><br>
Utilize the results from Step 1 to generate auxiliary labels. If the predicted probability of the true label is greater than 0.3, set the auxiliary label to 1; otherwise, set it to 0.<br>
Re-train the model without using the weights from Step 1.<br>
Loss function: Label loss (BCE) + 0.3 * Auxiliary Label loss (BCE)<br>
Learning rate: 2e-4<br>
<strong>Step 3:</strong> Fine-tuning with Step 2 Weights<br>
Further refine the model using the weights obtained from Step 2.<br>
Loss function: Label loss (BCE) + 0.15 * Auxiliary Label loss (BCE)<br>
Learning rate: 5e-5</p>
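<p>A hedged sketch of how the auxiliary-label construction and the combined loss in Steps 2-3 could look (the 0.3 cut-off and the 0.3 / 0.15 weights come from the description above; everything else, including names, is an assumption):</p>
<pre><code>import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def make_aux_labels(step1_probs, onehot_labels):
    # aux label = 1 if Step 1 predicted the true class with probability > 0.3, else 0
    true_class_prob = (step1_probs * onehot_labels).sum(dim=1)
    return (true_class_prob > 0.3).float()

def combined_loss(label_logits, aux_logits, onehot_labels, aux_labels, aux_weight=0.3):
    # Step 2 uses aux_weight=0.3, Step 3 fine-tunes with aux_weight=0.15
    return bce(label_logits, onehot_labels) + aux_weight * bce(aux_logits.squeeze(-1), aux_labels)
</code></pre>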
<h4>Models with different backbone:</h4>
<p>efficientnetb4, efficientnet_v2s, maxvit_tiny (the model settings of the different backbones are slightly different).</p>
<h3>WSI</h3>
<p>Use model to predict tiles.</p>
<h3>Wsi tile ensemble:</h3>
<pre><code># column and class names were lost when this post was formatted; the names below are placeholders
CLS = ["CC", "EC", "HGSC", "LGSC", "MC"]
tile_df["max_prob"] = np.max(tile_df[CLS], axis=1)
tile_df["label"] = np.argmax(tile_df[CLS].values, axis=1)
tile_df = tile_df[["image_id", "label", "max_prob", "aux_label"]].groupby(["image_id", "label"])[["max_prob", "aux_label"]].mean().reset_index()
idx = tile_df.groupby(["image_id"])["max_prob"].idxmax()
wsi_df = tile_df.loc[idx].reset_index(drop=True)
</code></pre>
<h3>Outliers(WSI):</h3>
<p>A WSI is predicted as "Other" when the mean predicted aux_label is < 0.5. (The score is almost the same as not predicting "Other", maybe +0.01.)</p>
<h2>TMA:</h2>
<p><strong>Step 1.</strong> Crop the TMA</p>
<pre><code>def crop_tma(img):
    # NOTE: literal values were lost when this post was formatted; the numbers below are placeholders
    ks = max(min(img.shape[0], img.shape[1]) // 20, 1)
    mask = (img.max(axis=2) - img.min(axis=2)) > 20          # colourful (tissue) pixels
    kernel = np.ones((ks, ks), np.uint8)
    mask = cv2.erode(mask.astype(np.uint8), kernel)
    nonzero_pixels = np.column_stack(np.where(mask > 0))
    if (nonzero_pixels.size) < (img.size // 100):
        return img
    else:
        min_y, min_x = np.min(nonzero_pixels, axis=0)
        max_y, max_x = np.max(nonzero_pixels, axis=0)
        return img[max(0, min_y - ks):max_y + ks + 1, max(0, min_x - ks):max_x + ks + 1, :]
</code></pre>
<p><strong>Step 2.</strong> Resize to 512*512 (the TMA size * 0.33 * 0.5 is roughly 512, so we can directly resize to 512 for prediction).<br>
<strong>Step 3.</strong> Predict with the models trained on WSI tiles.</p>
<h3>Outliers (TMA):</h3>
<p>A TMA is predicted as "Other" when the predicted aux_label is < 0.5 (compared to not predicting "Other" for TMA: public score +0.03, private score +0.06).</p>
<h2>Ensemble different models:</h2>
<p>Voting (compared with a single model, maybe only +0.01).</p>
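<p>For reference, hard voting across the three backbones can be as simple as the following sketch (my own minimal version; ties fall back to the first model's prediction):</p>
<pre><code>from collections import Counter

def vote(preds):
    # preds: per-model predicted labels for one image, e.g. ["HGSC", "HGSC", "LGSC"]
    label, _ = Counter(preds).most_common(1)[0]
    return label

print(vote(["HGSC", "HGSC", "LGSC"]))  # -> "HGSC"
</code></pre>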
<h2>What may not have worked:</h2>
<p>Segmentation</p>
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | A Bronze Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey.<br>
First, I want to thank the competition hosts for this challenge - I learnt a lot and it was nice to use actual histological data. Even though I "only" have a bronze medal, I thought it might be interesting for some people to see my approach.<br>
I have written a more detailed report on my website if somebody wants all the details (it's in German but Google Translate does a good job): <a href="https://manuelk-net.translate.goog/portfolio/UBC_KI.html?_x_tr_sl=de&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp" target="_blank">Complete case study</a></p>
<h2>Overview of the approach: The 2 Stage Model</h2>
<h3>Data extraction</h3>
<p>I tried a lot of different things, but in the end my own prerequisites I set for the solution were as follows:</p>
<ul>
<li>The classification can only work on a cellular scale. A macroscopic image of the whole slide won't provide nearly as much information as needed.</li>
<li>There are cancerous and non-cancerous areas in most of the slides. I have to extract relevant patches and train a model only on these.</li>
</ul>
<p>I guess most of the other competitors would agree when I say, that a big challenge was the data preprocessing. The method I used is based on the masks provided, so I wasn't able to use all the images. The process is as follows:</p>
<ul>
<li>The original image is divided into N x M patches of size 512px. This is done without any spacing, meaning the patches are right next to each other.</li>
<li>The tissue type is read from the mask and the patch is sorted into a corresponding folder</li>
</ul>
<p>The procedure becomes clear if you look at the following example:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F14931205%2F420d56830587ac5694d68037acb741c2%2Fimage11.png?generation=1704451956255218&alt=media" alt=""></p>
<h3>Detector and Classifier</h3>
<p>Now that we have clear information about which patches show healthy tissue and which show malignant tissue, my approach is a 2 stage model. So two stages that carry out the following classifications one after the other:</p>
<ol>
<li>Detector: A neural network detects patches that show malignant tissue. This is then a binary classification of all patches.</li>
<li>Classifier: A second neural network determines the subtype (HGSC, MC, …) of all patches detected as malignant.</li>
</ol>
<p>Number 1 is of course the easier task, but the above requirement should be met here: The classifier only sees actually malignant tissue and not healthy patches.</p>
<p>EdgeNext is used for both stages, as it already delivered very good results in earlier experiments.</p>
<p>Approximately 75% accuracy was achieved on the validation data set. Interestingly, this is only slightly better than my first model (~72%), which used randomly selected patches.</p>
<p>The following procedure was then used for inference:</p>
<ol>
<li>M x N patches with a size of 512px are extracted from each slice image.</li>
<li>Completely dark or light patches are sorted out</li>
<li>Detection: The detection model determines whether all relevant patches are cancerous tissue.</li>
<li>Classification: The Classifier Model uses all cancerous patches and determines the subtypes.</li>
<li>The result is a list of predictions of all malignant patches. The average value is determined from this and written into the output table as the final result.</li>
</ol>
<p>The final result on the private test set is <strong>0.44</strong>. Unfortunately, the model I chose based on the public LB wasn't the best on the private LB. Theoretically, my best score was 0.48.</p>
<h2>What didn't work</h2>
<ul>
<li><p>I tried Multiple Instance Learning with a neural network as the feature extractor and boosting, an SVM, or an RNN as the classifier. All of these experiments gave good results but were always a few percent worse than the 2 Stage approach.</p></li>
<li><p>I used external datasets (e.g. the <a href="https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=83593077" target="_blank">Bevacizumab Response Dataset</a>), but I never got any improvement in score. Maybe these images differ too much from the ones we had here.</p></li>
<li><p>Simple novelty detection. Based on the feature extractor, I tried a One-Class SVM and Isolation Forest to detect the "Other" class but there was no improvement.</p></li>
</ul>
<h2>How to improve</h2>
<p>Two things: outlier detection and generalization. I didn't spend much time on the outlier detection part, but from the solutions I've seen so far it can improve the results a lot.</p>
<p>In all my experiments the difference between train, validation and LB score was quite high. So there was always a mild form of overfitting, which could be reduced with more (labeled and segmented) data.</p>
<p>In the end, I am very happy that I won my first bronze medal and congratulations to all other participants! The solutions I looked over from you guys were amazing.</p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | [14th Place Notes] Image Agumentation + Domain Adaptation + ABMIL | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This was my first Kaggle competition, an enjoyable and educational journey. <br>
Here are some key takeaways from my experience:</p>
<h1>==================================================</h1>
<h1>Basic structure of my model</h1>
<p>1 backbone <strong>(densenet201)</strong> for instance-level feature extraction >> 2 <strong>ABMIL</strong> models (TMA and WSI separated) for bag-level classification</p>
<h1>==================================================</h1>
<h1>Training procedure</h1>
<ul>
<li>Use the <strong>152 WSI masks</strong> to extract tiles whose types I'm certain of for <strong>backbone training.</strong></li>
<li><strong>Lock the backbone</strong> for feature extraction, applying <strong>MIL training</strong> on the <strong>513-WSI dataset</strong> with <strong>a light attention model that has its own classifier</strong>. The bag classifier's params are inherited from the backbone and are trained with a low learning rate during MIL training.</li>
<li>Use TMA samples for <strong>unsupervised domain adaptation</strong> training and monitor the model's performance on TMA during the whole training process.</li>
</ul>
<h1>==================================================</h1>
<h1>Inference procedure</h1>
<ul>
<li>All images are patched into 224×224 tiles: <br>
WSI is scaled down by 0.5, TMA is scaled down by 0.25;<br>
WSI is patched in grid of 224, TMA is patched in grid of 120;<br>
the maximum number of tiles for one bag is set at 512 (this is for large WSIs);</li>
<li>A single backbone is shared for feature extraction of both WSIs and TMAs. It also performs instance-level prediction, and only tiles that are classified as cancer types will be sent to attention models (healthy or dead tiles are eliminated).</li>
<li>Two attention models dedicated to WSI and TMA separately transform features of tiles into one bag label; a softmax confidence threshold of 0.4 is used to re-label low-confidence predictions as "Other". </li>
</ul>
<h1>==================================================</h1>
<h1>Breakthroughs during exploration</h1>
<h2>Image Augmentation</h2>
<p>Learning that both TMA and WSI can vary significantly in staining, color, and clarity, I implemented extensive augmentation techniques. This included custom tools like using a circular mask to make WSI tiles resemble TMA more closely. Based on my LB performance, I think image augmentation is a critical step for improving the model's generalization ability on TMA. <br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9629127%2F8da84bf07bd9986b2f1594b7aa2e81f9%2F2024-01-04%20211824.png?generation=1704374341582625&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9629127%2F3a7515caf75739270a9242fdbbea5937%2F2024-01-04%20212152.png?generation=1704374567405661&alt=media" alt=""></p>
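<p>As an illustration of the circular-mask idea (making a square WSI tile look like a round TMA core), a minimal sketch is shown below; this is my own simplified version, not the exact augmentation used:</p>
<pre><code>import numpy as np

def circular_mask(tile: np.ndarray, fill_value: int = 255) -> np.ndarray:
    # keep pixels inside the inscribed circle, fill the corners with a background value
    h, w = tile.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx, r = h / 2, w / 2, min(h, w) / 2
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    out = np.full_like(tile, fill_value)
    out[inside] = tile[inside]
    return out
</code></pre>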
<h2>Domain Adaption</h2>
<p>Still, the model trained on WSI tiles performed worse than I expected on TMA. So I used <strong>domain adversarial training techniques</strong>, from classic <strong>DANN</strong> and <strong>heuristic domain adaptation</strong> to <strong>ToAlign</strong>. The key idea is to <strong>use the limited TMA images to help the model extract more task-related and less domain-related features without revealing their labels</strong>. This is the second and most critical step for my score boost on the LB.</p>
<p>During training, I used TMA accuracy to actively monitor the model's transfer ability on TMA:<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F9629127%2Fcd8350f60477009890400cb196944b47%2F2024-01-04%20200547.png?generation=1704369980249966&alt=media" alt=""></p>
<p><strong><em>related resources:</em></strong><br>
Domain-Adversarial Training of Neural Networks: <a href="https://arxiv.org/abs/1505.07818" target="_blank">https://arxiv.org/abs/1505.07818</a><br>
Heuristic Domain Adaptation: <a href="https://arxiv.org/abs/2011.14540" target="_blank">https://arxiv.org/abs/2011.14540</a><br>
ToAlign: Task-oriented Alignment for Unsupervised Domain Adaptation: <a href="https://arxiv.org/abs/2106.10812" target="_blank">https://arxiv.org/abs/2106.10812</a></p>
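<p>The core building block shared by these methods is the gradient reversal layer from DANN; a standard PyTorch implementation looks roughly like this (a sketch, not my exact training code):</p>
<pre><code>import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradient flowing back into the feature extractor
        return -ctx.alpha * grad_output, None

class DomainHead(nn.Module):
    # binary WSI-vs-TMA discriminator applied to gradient-reversed features
    def __init__(self, feat_dim, alpha=1.0):
        super().__init__()
        self.alpha = alpha
        self.fc = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feats):
        return self.fc(GradReverse.apply(feats, self.alpha))
</code></pre>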
<h2>AB-MIL (Attention-based Deep Multiple Instance Learning)</h2>
<p>I used <strong>the most basic attention-based MIL model</strong> with <strong>a self-attention kernel</strong> whose impact I'm unsure of. Based on my observations, if the feature extractor is trained well, the model performs adequately with basic MIL (max/mean scoring) on the public LB. However, AB-MIL offered much better accuracy on my local validation dataset, and therefore theoretically more stable and superior performance.</p>
<p><strong><em>related resources:</em></strong><br>
Attention-based Deep Multiple Instance Learning: <a href="https://arxiv.org/abs/1802.04712" target="_blank">https://arxiv.org/abs/1802.04712</a><br>
Kernel Self-Attention in Deep Multiple Instance Learning: <a href="https://arxiv.org/abs/2005.12991" target="_blank">https://arxiv.org/abs/2005.12991</a></p>
<h1>==================================================</h1>
<h1>Approaches that I found not quite useful</h1>
<ul>
<li><strong>Feature augmentation</strong><br>
I aimed to further narrow the gap between WSI and TMA after applying various image augmentation methods. Attempting to add noise directly to the extracted features, however, proved ineffective.<br>
<strong><em>related resources:</em></strong><br>
A Simple Feature Augmentation for Domain Generalization: <a href="https://openaccess.thecvf.com/content/ICCV2021/papers/Li_A_Simple_Feature_Augmentation_for_Domain_Generalization_ICCV_2021_paper.pdf" target="_blank">https://openaccess.thecvf.com/content/ICCV2021/papers/Li_A_Simple_Feature_Augmentation_for_Domain_Generalization_ICCV_2021_paper.pdf</a></li>
<li><strong>Switch backbones</strong><br>
I tried different types of backbones, from classic ResNet to popular EfficientNet; none of them performed better than DenseNet with my pipeline. </li>
<li><strong>Traditional anomaly detection techniques for outlier detection</strong><br>
I tried Isolation Forest and DBSCAN on the features I extracted and found that these methods couldn't even tell the existing cancer types apart. I soon realized that there was no way these methods could surpass my specially trained classifiers. To me this was absolutely the wrong path.</li>
</ul>
<h1>==================================================</h1>
<h1>Potential improvements in the future</h1>
<ul>
<li>Fundamental training techniques:<br>
Label Smooth<br>
Mixup and CutMix<br>
CV for the best backbone+ABMIL combination<br>
Median averaging for basic MIL approach instead of max/mean<br>
Use BCEWithLogitsLoss. Unlike CrossEntropy, it computes independently for each label.<br>
Train in 16-bit float to increase speed, usually doesn't hurt performance</li>
<li>Smarter ways to distinguish outliers:<br>
Use Entropy to thresholding<br>
Predict bottom 5 or 10 percentile scores as Others<br>
Train directly with external other cancer types<br>
Synthesize images from existing types as "Other" data for training (I doubt its validity, but it seems to work as well)</li>
<li>I didn't use Model Ensemble at all. There are multiple ways to use ensemble:<br>
Ensemble of backbones of the same structure but trained on different scales<br>
Ensemble of backbones of different structures<br>
Ensemble of backbones (like ConvNext, HoRNet, EfficientNetV1, and EfficientNetV2) predicts different sets of labels for the same input, using sigmoid activations to combine / compare independent probabilities across models<br>
Ensemble of attention models of different structures</li>
<li>Despite my unsuccessful tests with different backbones, many teams with top LB scores credited models specifically trained on Pathology dataset, which I think should be of vital help:<br>
CTransPath ( <a href="https://github.com/Xiyue-Wang/TransPath" target="_blank">https://github.com/Xiyue-Wang/TransPath</a> )<br>
LunitDINO ( <a href="https://github.com/lunit-io/benchmark-ssl-pathology" target="_blank">https://github.com/lunit-io/benchmark-ssl-pathology</a> )<br>
iBOT-ViT ( <a href="https://github.com/owkin/HistoSSLscaling" target="_blank">https://github.com/owkin/HistoSSLscaling</a> )</li>
<li>Try more sophisticated MIL attention models:<br>
DTFD-MIL ( <a href="https://arxiv.org/abs/2203.12081" target="_blank">https://arxiv.org/abs/2203.12081</a> )<br>
TransMIL ( <a href="https://arxiv.org/abs/2106.00908" target="_blank">https://arxiv.org/abs/2106.00908</a> )<br>
CLAM ( <a href="https://github.com/mahmoodlab/CLAM" target="_blank">https://github.com/mahmoodlab/CLAM</a> )<br>
DSMIL ( <a href="https://github.com/binli123/dsmil-wsi" target="_blank">https://github.com/binli123/dsmil-wsi</a> )<br>
Perceiver ( <a href="https://github.com/cgtuebingen/DualQueryMIL" target="_blank">https://github.com/cgtuebingen/DualQueryMIL</a> )</li>
</ul>
<h1>==================================================</h1>
<p>My notebook link: <br>
<a href="https://www.kaggle.com/code/yannan90/ubc-submit-att" target="_blank">https://www.kaggle.com/code/yannan90/ubc-submit-att</a></p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | [8th Place Solution] Understanding Data Before Designing Methods | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi <a href="https://www.kaggle.com/lizzylee1111111111" target="_blank">@lizzylee1111111111</a> ,</p>
<ol>
<li><p><code>May I ask how you came up with the idea of "rough detection on outliers" with Arcface on medical images?</code> . Arcface is usually used in Image Retrieval task. To detect outliers, tackling with image retrieval instead of image classification is better.</p></li>
<li><p>I fine-tuned the arcface with the patch cropped on the masks provided by the host.</p></li>
</ol>
<p>depends on the backbone and the resolution. For all my backbones and all the resolution, BS is from 16 to 64.</p>
<p>Thank you! There is nothing special in my code. I may make it available after I review it. But you reminded me of an important point that I forgot to write in my post: <strong>sorting the WSI DataFrame by pixel count first and processing the WSIs in that order with multiple threads can greatly speed up the WSI processing procedure.</strong></p>
<p>I only use the GPU on Kaggle for inference debugging. I finish all my training on 1 A100 .</p>
<p>Yes, it's a 5-folds cross validation.</p>
<p>The architecture of my methods is updated. </p>
<h1>Acknowledgments</h1>
<p>I would like to express my gratitude to Kaggle, the organizers, and other participants in the community. I have learned a great deal from this competition and hope that it will promote the advancement of research in MIL, classification and outlier detection for human tissues, and the study of women's health.</p>
<h1>Introduction</h1>
<p>The first and most crucial step in this competition is to familiarize oneself with the feature of Whole Slide Images (WSI) and Tissue Microarray (TMA) images. A simple visual examination reveals that <strong>the features of TMA and WSI are presented at vastly different scales.</strong> TMA features are at the cellular level, while WSI features are several to tens of times larger than cell clusters. Hence, from the onset of the competition, I decided to tackle WSI and TMA with two completely distinct approaches. </p>
<h2>Key points for the global approach:</h2>
<ol>
<li><strong>Address WSI and TMA separately</strong> .</li>
<li><strong>Align the magnification of TMA and WSI</strong> to enable the reuse of TMA training code for optimizing the WSI feature extractor. <ul>
<li>WSI images are 20x magnified, while TMA images are 40x. Thus, TMA images need to be downsampled by a factor of 2 to align with the physical scale of WSI, which can also be observed with a visual inspection of the training data. </li></ul></li>
</ol>
<h2>Architecture of the solution</h2>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F7285387%2F2ad9ed3ef7a2b0b9d8f8f13e1f30bc00%2FArch.png?generation=1704357989295647&alt=media" alt=""></p>
<h2>TMA Approach</h2>
<h3>Summary of Key Points:</h3>
<ol>
<li><strong>Train with patches tiled with official masks</strong> (most important globally). </li>
<li>Train the model with <strong>healthy and death patches</strong> to predict some outliers. </li>
<li>Employ <strong>arcface</strong> to retrieve some outliers and part of the 5-class classification. </li>
<li>Heavy ensemble of 6 classification models to predict samples that arcface retrieval did not cover. </li>
</ol>
<h3>Inference:</h3>
<ol>
<li><p>Use arcface retrieval with the training set's TMA as templates. Output 5-class results for samples with a close cosine distance, and consider those with a far cosine distance as outliers. Samples that are uncertain in the arcface stage are left for the subsequent phase. </p>
<ul>
<li>Models: 5-fold effv2s + 5-fold convnext small with dynamic margin, subcenter=3. </li>
<li>Top1 threshold at 0.05, Other threshold at 0.2. </li></ul></li>
<li><p>The 6-class model uses 2-fold effv2s + 5-fold effv2l + 4-fold convnext small + 3-fold convnext large. </p></li>
</ol>
<table>
<thead>
<tr>
<th>Model</th>
<th>Resolution</th>
</tr>
</thead>
<tbody>
<tr>
<td>Effv2s</td>
<td>1280</td>
</tr>
<tr>
<td>Effv2l</td>
<td>1280</td>
</tr>
<tr>
<td>Convnext Small</td>
<td>1024</td>
</tr>
<tr>
<td>Convnext Large</td>
<td>1024</td>
</tr>
</tbody>
</table>
<p>Note: The arcface stage can handle about 60% of the TMAs, so even though the second stage is heavy, it doesn't cause a timeout.</p>
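<p>One possible reading of this retrieval logic, as a hedged sketch (embedding extraction and template construction are assumed; the 0.05 / 0.2 values are the thresholds listed above):</p>
<pre><code>import torch
import torch.nn.functional as F

def arcface_retrieve(query_emb, template_embs, template_labels,
                     top1_thresh=0.05, other_thresh=0.2):
    # cosine distance to every training-set TMA template
    sims = F.cosine_similarity(query_emb[None, :], template_embs, dim=1)
    dists = 1.0 - sims
    best = torch.argmin(dists).item()
    if dists[best] < top1_thresh:
        return template_labels[best]   # confident: use the retrieved subtype
    if dists[best] > other_thresh:
        return "Other"                 # far from every template: treat as outlier
    return None                        # uncertain: defer to the 6-class ensemble
</code></pre>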
<h3>Training:</h3>
<ol>
<li>Tile patches using the official segmentation masks. In addition to the official 5 categories, classify <code>healthy</code> and <code>dead</code> patches as the <code>Other</code> class. Instead of training with the official TMAs, which are limited in number, use them for subsequent validation and retrieval. </li>
<li>Use the first-phase models to generate pseudo-label patches on the remaining 300+ WSIs without mask. </li>
<li>Inherit the weights from the first or second step, training only the last layer of the backbone and the arcface head. </li>
</ol>
<h2>WSI Approach</h2>
<h3>Summary of Key Points:</h3>
<ol>
<li><strong>Train the feature extractor using the TMA pipeline</strong>. </li>
<li><strong>Synthesize <code>Other</code> WSI</strong> during the training process. </li>
<li><strong>Integrate different magnification scales.</strong> </li>
<li>Sorting the WSI DataFrame by pixel count and processing the WSIs with multiple threads greatly speeds up the WSI processing procedure.</li>
</ol>
<h3>Inference:</h3>
<ol>
<li>Ensemble of two resolutions of feature extractor: 3072 resized to 768, and 1024 without down-sampling. After extracting features, apply DTFD-MIL. </li>
<li>To speed up the image processing procedure, I only use the center region of each 3072 tile, so that I only need to crop patches from the WSI once. </li>
</ol>
<table>
<thead>
<tr>
<th>Model</th>
<th>Resolution</th>
<th>Number folds</th>
</tr>
</thead>
<tbody>
<tr>
<td>Convnext Small</td>
<td>3072 resize 768</td>
<td>2</td>
</tr>
<tr>
<td>Effv2s</td>
<td>3072 resize 768</td>
<td>3</td>
</tr>
<tr>
<td>Effv2s</td>
<td>1024</td>
<td>4</td>
</tr>
</tbody>
</table>
<h3>Training:</h3>
<ol>
<li>Train the feature extractor using the TMA pipeline. </li>
<li>Extract features using the feature extractor. </li>
<li>Predict the probability of 'Other' on all patches using the feature extractor, and <strong>create an <code>Other pool</code></strong> with patches that have a high probability of 'Other'. </li>
<li>During the training of DTFD-MIL, dynamically synthesize some <code>WSI</code> from the <code>Other pool</code> each epoch. <ul>
<li>Training DTFD-MIL serves as a validation for whether the TMA pipeline models truly learned useful features. If we use the pretrained weights from ImageNet, DTFD-MIL requires up to 200 epochs to converge. In contrast, using models trained on TMA, the MIL Head may take as little as 1 epoch and at most 20 epochs to converge.</li></ul></li>
</ol>
<h2>Something I don't have time to try but I think may work</h2>
<ol>
<li>Arcface for WSI. </li>
<li>Large transformer pretrained on Large set of Slides. In fact I tried PLIP at the early stage of this competition, But I didn't dig deeper.</li>
<li>Better retrieve strategy for arcface.</li>
<li>More resolutions for WSI. I tried to add <code>6144 resize to 1024</code> to my final pipeline, but the notebook crashed.</li>
<li>Ensemble of MIL head</li>
<li>External data.</li>
</ol>
<h2>Todo:</h2>
<p>This post is almost complete. If possible, I may add more ablation studies in two weeks.</p>
<h2>Discussion and Citation</h2>
<ol>
<li><a href="https://arxiv.org/abs/2203.12081" target="_blank">DTFD-MIL</a> is a strong and robust baseline for MIL, which follows ABMIL. </li>
<li>I think MIL methods are not sensitive to the position and the number of patches, which I observed in my experiments. That's why I randomly synthesized some <code>WSI</code> from the <code>Other pool</code> each epoch, and reduced the number of patches for the <code>1024 resolution w/o down-sample</code> models at submission time. This is also shown in <a href="https://ieeexplore.ieee.org/document/10219719/authors#authors" target="_blank">this paper</a>. </li>
<li>The reason why I ensemble multiple resolution for WSI: if you check the <a href="https://www.pathologyoutlines.com/topic/ovarytumorwhoclassif.html" target="_blank">clarification on WHO</a>. You will find that Pathologists distinguish the subtypes of Ovarian cancer on different magnification. So I believe ensemble multiple resolution for classifying a WSI should be very important. </li>
<li>About larger transformer-based models: I believe the competition needs models with better generalization, and larger models are often more robust. At the early stage of this competition, I took some time to test PLIP offline, but the CV was not good. Also, since I decided to go solo this time, I had to spend my limited time on the direction I was most confident about: solving this task in a more traditional and solid way. </li>
</ol>
<h2>Closing Thoughts: My Journey to Grandmaster</h2>
<p>The path to becoming a Grandmaster has been lengthy, filled with challenges, and an experience I will cherish for life. I am incredibly thankful for the support from my teammates in past competitions, as well as the unwavering encouragement from my family and girlfriend throughout my journey. The final step to becoming a Grandmaster, achieving a solo gold medal, has been particularly solitary and tough. This was my fourth attempt at a solo gold medal. if I failed this time, it might have been three years, ten years, or perhaps never before I'd have the chance again, as I am about to graduate with my master's degree and start my career in a busy company. Fortunately, I have realized the dream I had three years ago and have now brought my student years to a close with the title of Grandmaster. Wishing everyone a Happy New Year!</p>
<p>Thank you! </p>
<p>Thank you LTY, working with you in previous competition is enjoyable. </p>
<p><a href="https://www.kaggle.com/kevin1742064161" target="_blank">@kevin1742064161</a> Thank you! You are an extraordinary teammate. Hope to have more cooperation with you in the future. Also, wish you to reach a grandmaster soon.</p>
<p>Congratulations! Wish you become a gm as you wish in the future. </p> |
UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN) | [Our Solution] UBC-OCEAN | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UBC Ovarian Cancer Subtype Classification and Outlier Detection (UBC-OCEAN)
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thank you! And congratulations for your becoming a GM!</p>
<p>Of course! we just adopted the official implementation of CTransPath at <a href="https://github.com/Xiyue-Wang/TransPath/blob/main/ctran.py" target="_blank">https://github.com/Xiyue-Wang/TransPath/blob/main/ctran.py</a>.</p>
<p>For ABMIL, its <a href="https://github.com/AMLab-Amsterdam/AttentionDeepMIL/blob/master/model.py" target="_blank">official implementation</a> is patch-based (i.e., the input are RGB images rather than features), we made a slight modification on it.</p>
<pre><code>class ABMILConv(nn.Module):
    def __init__(self, input_dim, n_classes, kernel_size=3, dropout=0.25):
        # NOTE: the hidden sizes and default arguments were lost when this post was formatted;
        # the values below follow common ABMIL settings and are placeholders
        super(ABMILConv, self).__init__()
        self.L, self.D = 512, 128
        self.K = 1
        fc = [nn.Linear(input_dim, self.L), nn.ReLU()]
        if dropout:
            fc.append(nn.Dropout(dropout))
        self.fc = nn.Sequential(*fc)
        # 1D convolution over the patch sequence to model dependencies between patches
        self.conv = nn.Conv1d(in_channels=self.L, out_channels=self.L, kernel_size=kernel_size, padding=kernel_size // 2)
        self.attention = nn.Sequential(
            nn.Linear(self.L, self.D),
            nn.Tanh(),
            nn.Dropout(dropout) if dropout else nn.Identity(),
            nn.Linear(self.D, self.K)
        )
        self.classifier = nn.Linear(self.L * self.K, n_classes)

    def forward(self, x):
        ...
</code></pre>
<p>Besides, thank you for the excellent training and inference code, helped us a lot during the competition! :)</p>
<h2>TL;DR</h2>
<p>We modified a multiple instance learning (MIL) framework, ABMIL[1], with 224x224-shaped patches under 20x magnification to predict the subtypes. The patch features are extracted by CTransPath [2]. To predict the outliers, synthesized WSIs and TMAs are generated using patches in the stroma and necrosis category.</p>
<h2>Preprocessing for MIL</h2>
<ul>
<li>WSIs and TMAs are split into 224x224 patches at 20x (which means the TMAs need to be 2x downsampled as they are 40x). </li>
<li>Patches with large backgrounds are discarded.</li>
<li>The pretrained model CTransPath is utilized to extract instance (patch) features.</li>
<li>During feature extraction, data augmentation including brightness, contrast, color, and sharpness adjustments is applied to each WSI/TMA. Data augmentation is not used in testing.</li>
</ul>
<h2>MIL model for ID</h2>
<ul>
<li>We chose the ABMIL framework as the baseline MIL model. To model the dependencies between different patches, a 1D convolution block is added before the attention block, which contributes a ~0.04 improvement on the public LB.</li>
<li>Other MIL frameworks such as CLAM, TransMIL, and DTFD-MIL were tried, but with no performance improvement.</li>
</ul>
<h2>OOD synthesis for WSI</h2>
<ul>
<li>We randomly choose patches masked as stroma and necrosis to synthesize the Other category, and train another ABMIL model for six-class classification (5 ID + 1 OOD).</li>
<li>~0.04 improvement compared with no 'Other'</li>
</ul>
<h2>OOD classification for TMA</h2>
<ul>
<li>We split the masked patches in WSIs under 10x magnification and produce 512x512 patches and train a ViT-S for TMA classification. For tumor patches, we assign them subtype labels. For stroma and necrosis patches, we treat them as Others. For TMAs, one 2048x2048 patch is center cropped, and downsample to 512x512 to produce 10x patches.</li>
<li>~0.01 improvement on public LB</li>
</ul>
<h2>Producing the final submission</h2>
<p>The probabilities produced by the two MIL models and the ViT-S are nan-averaged to get the final prediction labels.</p>
<h2>References</h2>
<p>[1] M. Ilse, J. M. Tomczak, and M. Welling, ‘Attention-based Deep Multiple Instance Learning’, in International Conference on Machine Learning, PMLR, Jul. 2018, pp. 2127–2136.<br>
[2] X. Wang et al., ‘Transformer-based unsupervised contrastive learning for histopathological image classification’, Medical Image Analysis, vol. 81, p. 102559, Oct. 2022, doi: 10.1016/j.media.2022.102559.</p> |
UPenn and Mayo Clinic's Seizure Detection Challenge | Required model documentation and code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UPenn and Mayo Clinic's Seizure Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey everyone,</p>
<p>My code and documentation are now ready. It was great fun competing with you all, that final week was pretty intense!</p>
<p>https://github.com/MichaelHills/seizure-detection/raw/master/seizure-detection.pdf</p>
<p>https://github.com/MichaelHills/seizure-detection</p>
<p>Quickly summarising my model, for feature selection I used FFT 1-47Hz, concatenated with correlation coefficients (and their eigenvalues) of both the FFT output data, as well as the input time data. The data was then trained on per-patient Random Forest classifiers (3000 trees).</p> |
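<p>A hedged sketch of that feature pipeline (shapes, the 1-second segment assumption and helper names are mine; see the linked repo for the real implementation):</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment_features(data):
    # data: (n_channels, n_samples) one 1-second EEG segment, so FFT bin k ~ k Hz
    spec = np.log10(np.abs(np.fft.rfft(data, axis=1))[:, 1:48] + 1e-12)  # FFT magnitudes, 1-47 Hz

    def corr_feats(x):
        c = np.corrcoef(x)
        upper = c[np.triu_indices_from(c, k=1)]
        eig = np.sort(np.abs(np.linalg.eigvals(c)))
        return np.concatenate([upper, eig])

    return np.concatenate([spec.ravel(),
                           corr_feats(spec),    # correlations + eigenvalues of the FFT output
                           corr_feats(data)])   # correlations + eigenvalues of the time data

# one classifier per patient, as described above
clf = RandomForestClassifier(n_estimators=3000, n_jobs=-1)
</code></pre>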
UPenn and Mayo Clinic's Seizure Detection Challenge | Universal model vs. subject-specific models | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UPenn and Mayo Clinic's Seizure Detection Challenge
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Did the best scoring teams use a single model for all subjects, or a collection of models--one for each dog/patient?</p>
<p>I initially tried individual models, which typically scored very well using cross-validation but barely got above 0.90 AUC on the leaderboard 15% test set. Using a universal model I got a significant boost. I believe this is due to increased amount of data that could be used for training, but also because it better allowed me to hone the regularization (getting good regularization was difficult for single-subject models because the data was too easy to discriminate).</p>
<p>A quick summary of my model:</p>
<p>Resample to 500 sps. Extract 0.5 second windows from the beginning, middle, and end of each segment. Apply Hanning windows and compute DFTs. Sum the power in bands 4-8, 8-13, 13-30, and 30-100 Hz and convert to log scale. Discard all but 16 channels (I did this because I was short on time and didn't want to search for a better way to incorporate additional channels). The channels that provided the greatest d-prime discrimination of ictal vs. interictal were retained and ordered by their d-prime values. This resulted in a feature vector of length 3x4x16 = 192 for each segment. I used SVMs with RBFs and gamma = 1.58. One SVM for each of the predictions we needed to make.</p>
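<p>A hedged sketch of the per-window feature computation described above (0.5 s at 500 sps = 250 samples; band edges as listed; the remaining details are my assumptions):</p>
<pre><code>import numpy as np

BANDS = [(4, 8), (8, 13), (13, 30), (30, 100)]  # Hz

def window_features(window, fs=500):
    # window: (n_channels, n_samples) 0.5-second chunk already resampled to 500 sps
    n = window.shape[1]
    spec = np.fft.rfft(window * np.hanning(n), axis=1)
    power = np.abs(spec) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    feats = []
    for lo, hi in BANDS:
        band = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(power[:, band].sum(axis=1) + 1e-12))  # log band power per channel
    return np.concatenate(feats)
</code></pre>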
<p>Matt</p> |
UW-Madison GI Tract Image Segmentation | 10-th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>here's code of nnunet, you can refer the instructions in the github to use the nnUNet. <a href="https://github.com/MIC-DKFZ/nnunet" target="_blank">https://github.com/MIC-DKFZ/nnunet</a></p>
<p>Main idea:</p>
<p>nnUNet framework trained with all data with different patch size.</p>
<p>At the beginning of the competition, we used a 2.5D U-Net but only got 0.877 and ranked 70+ on the public LB. This was not satisfactory, so we switched to a 3D U-Net.</p>
<p>We tried the 3D MONAI framework and the nnUNet framework, but the 3D MONAI framework could not achieve comparable results, so we used nnUNet.</p>
<p>nnUNet:<br>
Remove the mislabeled data: case7_day0, case81_day30<br>
Trained fold 1 for 500 epochs. Patch size=160,160,80: LB 0.878 w/o TTA, 0.881 w/ TTA.</p>
<p>Training a 3D model takes too long, so we decided to train the model on all the data and always select the final saved checkpoint.</p>
<p>Train with all data, Patch size=160,160,80 LB:0.881 w/o TTA. 0.884 w TTA.<br>
Train with all data and load the Patch size=160,160,80 pretrained model, Patch size =192, 192, 96. LB:0.882 w/o TTA. 0.886 w TTA. private LB: 0.877.<br>
Train with all data and load the Patch size=160,160,80 pretrained model, Patch size = 224, 224, 112: LB 0.885 w/ TTA, private LB 0.880 (we did not use this as our final submission).<br>
Ensemble of the 3 trained models: LB 0.885, private LB 0.880 (we did not use this as our final submission).<br>
Ensemble of all 6 trained models (including some models from the 5-fold cross validation): public LB 0.886, private LB 0.879 (our final submission).</p>
<p>Training time:<br>
Patch 160 160 80, 1000 epoch, about 24h in a single RTX3090.<br>
Patch 160 160 80, 1000 epoch, about 120h in 3 RTX6000(trained by my teammates <a href="https://www.kaggle.com/harwawang" target="_blank">@harwawang</a> )<br>
Patch 224 224 112, 1000 epoch, about 83h in a single RTX3090.</p>
<p>summary:<br>
nnUNet is all you need.</p>
<p><a href="https://www.kaggle.com/yeqingshen" target="_blank">@yeqingshen</a> is an experienced computer vision engineer. He joined at the beginning of the competition, and when I was struggling with the 2.5D U-Net, he gave me many useful suggestions and reproduced some ideas. <a href="https://www.kaggle.com/dengchy728" target="_blank">@dengchy728</a> has 2×RTX6000 GPUs; he trained lots of models with the nnUNet framework (including some of the 5-fold cross-validation models we used in the ensemble), which sped up the iteration of our methods. <br>
I don't think contribution is only about being active online; they ran many experiments offline and worked hard on the competition.</p>
<p>We used the entire dataset for training and simply kept the final epoch of the model. We didn't try multi-fold cross-validation training because of the time limit (training one model took 24/83 h with patch size 160,160,80 / 224,224,112), and at inference time, multi-fold models would have taken too long to ensemble together with the multi-patch-size models within the 9-hour GPU limit (ensembling 3 models w/o TTA took ~4 h), so we just trained the models on all the data.</p>
UW-Madison GI Tract Image Segmentation | 11th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congrats to all the prize winners and gold medalists, looking forward to reading your solutions.<br>
Also, thanks to my team mates <a href="https://www.kaggle.com/hoanganhpham" target="_blank">@hoanganhpham</a> <a href="https://www.kaggle.com/tuanho27" target="_blank">@tuanho27</a> for their contribution.</p>
<p>Our final submission consisted of 7 2.5D models trained on the full data, with different inputs, training schedules, and encoder + decoder combinations (<a href="https://www.kaggle.com/code/andy2709/fork-of-best-public-kernel-ef10e0/notebook" target="_blank">https://www.kaggle.com/code/andy2709/fork-of-best-public-kernel-ef10e0/notebook</a>). </p>
<p>Our highest-scoring kernel (<a href="https://www.kaggle.com/andy2709/3d-and-2-5d" target="_blank">https://www.kaggle.com/andy2709/3d-and-2-5d</a>) simply replaced 2 2.5D models with 2 3D models; we didn't have the nerve to pick this one 😄.</p>
<p>I put some single models' stats here.</p>
<table>
<thead>
<tr>
<th>Encoder</th>
<th>Decoder</th>
<th>Num slices</th>
<th>Stride</th>
<th>Size</th>
<th>Type</th>
<th>Epoch</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>convnext_xlarge</td>
<td>deeplabv3+</td>
<td>17</td>
<td>2</td>
<td>608x608</td>
<td>2.5d</td>
<td>30</td>
<td>0.88</td>
<td>0.868</td>
</tr>
<tr>
<td>v2l</td>
<td>unet++</td>
<td>9</td>
<td>3</td>
<td>608x608</td>
<td>2.5d</td>
<td>30</td>
<td>0.88</td>
<td>0.868</td>
</tr>
<tr>
<td>convnext_xlarge</td>
<td>unet++</td>
<td>9</td>
<td>2</td>
<td>608x608</td>
<td>2.5d</td>
<td>30</td>
<td>0.882</td>
<td>0.867</td>
</tr>
<tr>
<td>convnext_base</td>
<td>unet</td>
<td>5</td>
<td>1</td>
<td>512x512</td>
<td>2.5d</td>
<td>30</td>
<td>0.879</td>
<td>0.866</td>
</tr>
<tr>
<td>convnext_base</td>
<td>unet</td>
<td>80</td>
<td>1</td>
<td>224x224</td>
<td>3d</td>
<td>2000</td>
<td>0.878</td>
<td>0.866</td>
</tr>
</tbody>
</table>
<ul>
<li><p>What worked for us</p>
<ul>
<li>Convnext backbones are much stronger than others.</li>
<li>ACS Conv (<a href="https://github.com/M3DV/ACSConv/" target="_blank">https://github.com/M3DV/ACSConv/</a>): nice idea to replace Conv3d with 3 Conv2d on xy, yz, and xz planes, which allows us to utilize pre-trained ImageNet backbones for medical segmentation tasks. </li>
<li>Mask thresholding by class</li></ul></li>
<li><p>What didn't work</p>
<ul>
<li>Negative slice removal by heuristics</li>
<li>UNet3d, UNetr, Swin UNetr</li>
<li>Weighted ensemble</li></ul></li>
</ul>
<p>Congrats to your team for the first prize !</p>
<p>thanks man. Your shake-up game is really strong !</p>
<p>We trained all models with either Dice + PolyLoss + Hausdorff or Dice + PolyLoss. PolyLoss was slightly better than Focal and BCE in our experiments.<br>
I also think that the arbitrary annotation truncation by the hosts confused the 3D nets. In one of our last experiments, I tried to increase the depth to 144 (using the MONAI ResizeWithPadOrCrop transform) but it didn't bring any improvement. </p>
<p>We tried the resnet200 weights from that repo with UNet3D, but the score was very bad (836 public, 828 private).</p>
UW-Madison GI Tract Image Segmentation | 14th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>sure, like this</p>
<pre><code>def strict_ensemble(row):
pred_25d = row['predicted_25d']
pred_monai = row['predicted_monai']
flg_25d = (pred_25d != '')
flg_monai = (pred_monai != '')
width = row['width']
height = row['height']
if (flg_25d) & (flg_monai):
msk_25d = rle_decode(pred_25d, (height,width))
msk_monai = rle_decode(pred_monai, (height,width))
msk = msk_25d + msk_monai
msk[msk>0] = 1
pred = rle_encode(msk)
else:
pred = ''
return pred
df['predicted'] = df.apply(strict_ensemble,axis=1)
</code></pre>
<p>It is really interesting to see the solutions</p>
<p>the 2.5d training image is built like this.<br>
Channel 1: the image to predict(ex:slice 001)<br>
Channel 2: slice 003<br>
Channel 3: slice 005</p>
<p>Thus, the image to predict is in channel 1, while channels 2 and 3 provide additional depth information. We can randomly drop them as an augmentation so that the model learns to extract information from each channel better, especially channel 1.</p>
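<p>A hedged sketch of that channel-drop augmentation (the 30% probability is the one mentioned below; the rest is an assumed minimal implementation):</p>
<pre><code>import numpy as np

def drop_depth_channels(img, p=0.3):
    # img: (H, W, 3) 2.5D stack; channel 0 is the slice to predict,
    # channels 1 and 2 are the neighbouring slices that only add depth context
    if np.random.rand() < p:
        choice = np.random.choice(["ch1", "ch2", "both"])
        if choice in ("ch1", "both"):
            img[..., 1] = 0
        if choice in ("ch2", "both"):
            img[..., 2] = 0
    return img
</code></pre>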
<p>(That boosted my score from 0.877 to 0.878, actually I had 0.879 in my submissions T T)</p>
<p>Congrats to all prize and medal winners!<br>
And thanks all the competitors because I have learned so much from this comp.</p>
<p>My brief solution summary:</p>
<p>2~2.5D models (trained with BCE+Dice first, then switched to Dice only)<br>
・backbone: eca_nfnet_l2 (5 folds)<br>
・augmentations: </p>
<ul>
<li>hflip</li>
<li>vflip</li>
<li>RandomBrightnessContrast</li>
<li>ShiftScaleRotate</li>
<li>OneOf(GridDistortion,ElasticTransform)</li>
<li>CoarseDropout</li>
</ul>
<p>Custom trick: 2~2.5D augmentation. I randomly (with 30% probability) set the image in channel 2, channel 3, or both to 0, giving the model less depth information so it learns to work well with limited information. This boosted my score by about 0.01.</p>
<p>At first, I had trouble training because the training and validation losses became NaN. This was caused by the learning rate; a proper rate is under 6e-4.</p>
<p>3D UNet (MONAI)<br>
Thanks this discussion.<br>
<a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/325646" target="_blank">monai solution</a></p>
<p>Ensemble<br>
The union of pixels is taken only when both models' predictions are non-empty; otherwise the prediction is left empty.</p>
<p>postprocess</p>
<ol>
<li>remove predicted masks in the bottom area of the voxel if the predictions of the two models are not similar.</li>
<li>if the prediction for a nearby image is not null, then trust whichever model's prediction is not null.</li>
</ol>
<p>What did not work for me<br>
・motionblur, RandomResizedCrop<br>
・postprocess 2 (I don't know why T T; it was good in OOF)</p>
UW-Madison GI Tract Image Segmentation | 15th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We can easily create submissions with different LB scores by truncating predictions with different thresholds, even with the same model. Thus, we can probe test dataset statistics by changing the truncation threshold according to the value of probing target.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2F6c9a13e900dd873d008b0b0102ce8b4a%2Fprob.png?generation=1657961486710351&alt=media" alt=""></p>
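<p>In code, the probing trick amounts to conditionally degrading the prediction depending on the statistic being tested, e.g. (a schematic sketch, not the actual probe):</p>
<pre><code>import numpy as np

def probed_mask(probability: np.ndarray, probe_hit: bool) -> np.ndarray:
    # Truncate predictions more aggressively only when the probed condition holds.
    # If this submission's LB score drops relative to the baseline, cases matching
    # the condition must be present in the (public) test set.
    threshold = 0.9 if probe_hit else 0.5
    return probability > threshold
</code></pre>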
<p>I think fixing training samples' size does not make sense because truncation points (depths) are quite different from one to another.<br>
In fact, models (in all public notebooks) "implicitly" learn truncation points.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2Fd4a2b53ee802ec2cb5f16096bd4c2b27%2F2022-07-16%2015.05.37.png?generation=1657951549599417&alt=media" alt=""></p>
<p>Congrats to all prize and medal winners!<br>
I'd like to thank to my teammate <a href="https://www.kaggle.com/inoueu1" target="_blank">@inoueu1</a> for strong mask prediction models!</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F745525%2Fcc6f62967aa387e2155a60cbecbfdb7e%2Fsummary.png?generation=1657844743336671&alt=media" alt=""></p>
<p>Our brief solution summary:</p>
<ul>
<li><p><strong>key ideas</strong></p>
<ul>
<li>As noted in some discussions, ground-truth labels were truncated in the bottom area of each voxel, and the truncation criterion is not clear. Thus we try to predict the truncation area (depth) using label truncation prediction models, and remove bottom predicted masks in postprocessing.</li>
<li>In model training, we remove slices where the ground-truth labels seem to be truncated (so as not to give negative signals to the models). By doing so, the mask prediction models do not have to "implicitly" predict label truncation and can predict masks as much as possible.</li>
<li><p>preprocess</p>
<ul>
<li>crop low intensity area in order to accelerate prediction</li></ul></li>
<li><p>2.5D models</p>
<ul>
<li>UNet or UNetPlusPlus + EfficientNet backbones (smp)</li></ul></li>
<li><p>3D models</p>
<ul>
<li>3D UNet (MONAI)</li></ul></li>
<li><p>label truncation prediction models</p>
<ul>
<li>3D UNet (MONAI) + spatial pooling (d, h, w) -> (d,)</li>
<li><p>postprocess</p>
<ul>
<li>remove predicted masks in the bottom area of the voxel using label truncation prediction</li>
<li>remove top and bottom two pixels</li></ul></li>
<li><p>some probing results about test dataset</p>
<ul>
<li>the numbers of slices are 144 and 80, same as train dataset</li>
<li>pixel spacings are 1.5 and 1.63, same as train dataset</li>
<li>image sizes are ('266', '266'), ('276', '276'), ('360', '310'), the size ('234', '234') is in the train set but not in the test set</li>
<li>53%-55% of data come from the same case in the train dataset</li>
<li>at most, 64% of the above known case data are included in public test set (thus, public LB tends to reflect the known case results)</li></ul></li>
</ul>
<p>Yes. The score would be improved as long as mask prediction models are trained without truncated mask slices. Simply using label truncation prediction models with mask prediction models trained using all data does not work.</p> |
UW-Madison GI Tract Image Segmentation | 17th Solution [3d part] and Kaggle Issue Report | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, thanks to my teammates <a href="https://www.kaggle.com/ammarali32" target="_blank">@ammarali32</a> and <a href="https://www.kaggle.com/jaafarmahmoud1" target="_blank">@jaafarmahmoud1</a> , they are pretty amazing guys and keep us stay in the top zone in public lb until the end. I believe we just lack a bit luck to win the gold.</p>
<p>Thank you for Kaggle and host for hosting this great competition, I will summarize my part of our solution and report a potential Kaggle Issue for notebook competitions in this discussion.</p>
<h3>Summary:</h3>
<p>nnunet trained on all data and 5-fold cross-validation.</p>
<h3>Framework:</h3>
<p><a href="https://github.com/MIC-DKFZ/nnUNet" target="_blank">nnunet</a></p>
<h3>Preprocessing:</h3>
<p>refer to <a href="https://www.kaggle.com/code/yiheng/3d-solution-with-monai-produce-3d-data" target="_blank">3D Solution with MONAI [produce 3d data]
</a> and thanks to <a href="https://www.kaggle.com/yiheng" target="_blank">@yiheng</a> </p>
<h3>Training</h3>
<p>Follow the <a href="https://github.com/MIC-DKFZ/nnUNet" target="_blank">instructions</a>, but modify batchsize=4, epochs=1500 and make some LR adjustments. With these settings the model trained on all data achieved 0.881, and a 5-fold ensemble achieved 0.885 without TTA or any post-processing. Ensembling all 6 models (all + 5 folds) achieved 0.887 (all numbers are based on the public LB).</p>
<h3>Inference [Only 3d part]:</h3>
<p><a href="https://www.kaggle.com/code/leonshangguan/nnunet-inference?scriptVersionId=100497471" target="_blank">nnunet inference</a></p>
<h3>Ensemble:</h3>
<p>we use the union of 2.5d and 3d predictions.</p>
<h2>Potential Kaggle Code Competition Notebook Timeout Issue <a href="https://www.kaggle.com/addisonhoward" target="_blank">@addisonhoward</a> :</h2>
<p>The issue is: even using the same code, some submissions will pass and some may not according to your submission time. This issue is mentioned in <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/335456" target="_blank">here</a> and <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/336677" target="_blank">here</a>. </p>
<p>Based on our experience (after tens of Notebook Timeouts), we found 10 PM to 6 AM EST to be a good time to make submissions. We didn't face any issues when we submitted during this window (though it is not guaranteed). We think this is because submission traffic has peaks and valleys during the day. We wish we could always submit during this window, but that period is late at night for me. </p>
<p>On the last day of the competition, we made some modifications to our code and submitted at 10 AM EST, hoping to select it as one of our final results. It should have finished in about 8 hours 20 minutes. However, it still hit Notebook Timeout.</p>
<p>For code competitions and with the 9hour time limit, I believe all submissions regardless of submission time should be treated as the same. I think this is a critical issue for all code competitions and looking forward to getting a response from Kaggle Team <a href="https://www.kaggle.com/addisonhoward" target="_blank">@addisonhoward</a> .</p> |
UW-Madison GI Tract Image Segmentation | 1st place solution for 2.5d parts | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><ol>
<li>The inference phase only resizes the image to the maximum size.</li>
<li>The classification model judges whether any organs are present; if so, the image is sent to the segmentation phase.</li>
</ol>
<p>BCELoss focuses more on pixel classification accuracy, and DiceLoss focuses more on the dice value between the predicted mask and the ground-truth. The purpose of setting the loss function ratio to 1:3 is to lead the model to learn features that are more like the ground-truth without forgetting pixel classification accuracy.</p>
<h1>2.5D method Overall pipeline</h1>
<p>The overall pipeline of our 2.5D solution is divided into two stages: classification and segmentation. The former determines whether there are targets in the image, while the latter segments them. The classification output controls which images are segmented, and both stages use weighted model fusion to enhance the robustness of our 2.5D pipeline; the pure 2.5D pipeline can achieve a score of 0.889 on the public leaderboard.<br>
<strong>The details are presented below ↓</strong></p>
<h2>Data</h2>
<p>For data production, we follow <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a>'s method: we set the stride to 2 and take a total of three slices to produce the 2.5D data. <a href="https://www.kaggle.com/code/awsaf49/uwmgi-2-5d-stride-2-data" target="_blank">https://www.kaggle.com/code/awsaf49/uwmgi-2-5d-stride-2-data</a></p>
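<p>A minimal sketch of this kind of stride-2 stacking (assuming the scan is already loaded as a (D, H, W) NumPy array; the exact notebook code may differ):</p>
<pre><code>import numpy as np

def make_25d_slice(volume, idx, stride=2):
    """Stack slices (idx - stride, idx, idx + stride) into one 3-channel image,
    clipping indices at the volume boundaries."""
    depth = volume.shape[0]
    ids = [int(np.clip(idx + k * stride, 0, depth - 1)) for k in (-1, 0, 1)]
    return np.stack([volume[i] for i in ids], axis=-1)  # (H, W, 3)
</code></pre>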
<h2>Training & test time augmentation</h2>
<p>We tried 640*640 and 512*512 image resize resolution. where in the case of 640*640 resolution, we used RandomCrop method to randomly crop down the target of 448*448 resolution during training. We also used random flip, elastic transformation, grid distortion and optical distortion, the specific parameters are</p>
<pre><code>dict(type='RandomFlip', direction='horizontal', p=0.5)
dict(type='ElasticTransform', alpha=120, sigma=6.0, alpha_affine=3.6, p=1)
dict(type='GridDistortion', p=1)
dict(type='OpticalDistortion', distort_limit=2, shift_limit=0.5, p=1)
</code></pre>
<p>At test time, we apply a horizontal flip to produce a second image and use weighted fusion to get the output mask. TTA boosts our score by approximately 0.001 ~ 0.002.</p>
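<p>A small PyTorch-style sketch of this horizontal-flip TTA (assuming <code>model</code> returns per-pixel logits; here the two predictions are weighted equally):</p>
<pre><code>import torch

def hflip_tta(model, img):
    # Average the prediction on the original image with the un-flipped
    # prediction on its horizontal mirror.
    with torch.no_grad():
        p = torch.sigmoid(model(img))
        p_flip = torch.sigmoid(model(torch.flip(img, dims=[-1])))
    return 0.5 * (p + torch.flip(p_flip, dims=[-1]))
</code></pre>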
<h2>Backbone</h2>
<p>The architecture is based on Unet; for the backbone network we tried EfficientNet B4 ~ B7. A single model can reach about 0.883 on the public leaderboard, and model fusion at submission can reach about 0.889. For the deep learning framework, we use our teammate <a href="https://www.kaggle.com/CarnoZhao" target="_blank">@CarnoZhao</a>'s work <a href="https://www.kaggle.com/code/carnozhao/uwmgit-mmsegmentation-end-to-end-submission" target="_blank">https://www.kaggle.com/code/carnozhao/uwmgit-mmsegmentation-end-to-end-submission</a></p>
<h2>Training and inference tips & loss function</h2>
<p>The classification network is trained on the full data, which contains empty images as well as images with masks. The segmentation network is only responsible for segmenting the target images, so it is trained only on the images that contain masks. The classification network is trained with a single BCELoss, and the segmentation network is trained with a weighted BCE and Dice loss at a 1:3 ratio. <br>
Besides, we use fp16 training to make training faster; it saves about 50% of GPU memory, which allowed us to use a bigger batch size.</p>
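<p>A simplified sketch of the 1:3 BCE:Dice weighting (our actual training code in mmsegmentation is more involved, but the idea is this):</p>
<pre><code>import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + targets.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def seg_loss(logits, targets):
    # BCE : Dice = 1 : 3 (weights normalized to sum to 1)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    return 0.25 * bce + 0.75 * soft_dice_loss(logits, targets)
</code></pre>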
<h2>Future work</h2>
<p>When analyzing the data, I found that there is some margin that contains no useful information. I tried using a CenterCrop method to remove this (maybe useless, I'm not sure) margin, but it had little influence on the score.</p>
<pre><code>import numpy as np

def CenterCrop(image, crop_ratio=0.9):
    '''
    input: numpy image of shape (height, width, channel)
            width
     h ###################
     e #                 #
     i #                 #
     g #                 #
     h #                 #
     t ###################
    crop_ratio -> ratio of the image to keep
    return: crop_image, extra_info = [height, width, xmin, ymin, xmax, ymax]
    '''
    height, width, channel = image.shape
    # keep only the central crop_ratio portion of the image
    xmin, ymin, xmax, ymax = int(width * (1 - crop_ratio) / 2), \
                             int(height * (1 - crop_ratio) / 2), \
                             int(width * (1 + crop_ratio) / 2), \
                             int(height * (1 + crop_ratio) / 2)
    crop_image = image[ymin:ymax, xmin:xmax, ...]
    extra_info = [height, width, xmin, ymin, xmax, ymax]
    return crop_image, extra_info

def PaddingCrop(crop_image, extra_info):
    '''
    input crop_image -> the CenterCrop output
    input extra_info -> original image size and crop coordinates
    return: the crop padded back to the original image size
    '''
    height, width, xmin, ymin, xmax, ymax = extra_info
    # pad the crop back to its original position inside the full-size image
    pady = [ymin, height - ymax]
    padx = [xmin, width - xmax]
    original_image = np.pad(crop_image, [pady, padx, [0, 0]])
    return original_image
</code></pre>
<h2>Insights from a year of cv competition on kaggle</h2>
<p>Kaggle is a talent-packed data science platform that has taught me many algorithms, techniques, and ways of thinking about problems that I had not been exposed to before. I would like to thank my teammates <a href="https://www.kaggle.com/namgalielei" target="_blank">@namgalielei</a> <a href="https://www.kaggle.com/cgq081616" target="_blank">@cgq081616</a> <a href="https://www.kaggle.com/CarnoZhao" target="_blank">@CarnoZhao</a> <a href="https://www.kaggle.com/shajiayu" target="_blank">@shajiayu</a> who supported me immensely during the competition. After this year of competition, I successfully reached the Kaggle Master title. Next stage goal is Kaggle Grand Master (which I don't think I will ever reach).😂</p> |
UW-Madison GI Tract Image Segmentation | 22th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I trained with Colab (TPU). From the logs, 1 fold was completed in about 10-11h.</p>
<p>Thank you all for your hard work.<br>
And congratulations to all the gold medal winners.</p>
<p>To summarize our solution, it is a 3D UNet + 2.5D UNet/FPN/UPerNet ensemble.</p>
<p><strong>Pre-processing;</strong><br>
Training was performed with two normalizations: dividing by the 5-channel stack maximum and dividing by the case_day maximum.</p>
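<p>A small sketch of what the two normalizations could look like (an assumption about the exact implementation; <code>stack</code> is a 5-channel slice stack and <code>volume</code> a whole case_day scan):</p>
<pre><code>import numpy as np

def normalize_by_stack_max(stack):
    # divide a (H, W, 5) slice stack by its own maximum
    return stack / (stack.max() + 1e-6)

def normalize_by_case_day_max(volume):
    # divide every slice of a (D, H, W) case_day volume by the volume maximum
    return volume / (volume.max() + 1e-6)
</code></pre>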
<p><strong>Models;</strong><br>
2.5D (by hirune924)</p>
<ul>
<li>FPN backbone efficientnet-b2</li>
<li>input size: 512x512, 3ch</li>
<li>loss: bce+dice</li>
<li>augmentation: <a href="https://github.com/hirune924/imgaug-tf" target="_blank">imgaug-tf</a></li>
</ul>
<p>2.5D (by SiNpcw)</p>
<ul>
<li>FPN backbone efficientnet-b6</li>
<li>input size: 512x512, 5ch</li>
<li>loss: bce</li>
<li>augmentation: <a href="https://github.com/hirune924/imgaug-tf" target="_blank">imgaug-tf</a></li>
</ul>
<p>2.5D (by SiNpcw)</p>
<ul>
<li>UNet backbone efficientnet-b5</li>
<li>input size: 512x512, 5ch</li>
<li>loss: bce</li>
<li>augmentation: <a href="https://github.com/hirune924/imgaug-tf" target="_blank">imgaug-tf</a></li>
</ul>
<p>2.5D (by anonamename)</p>
<ul>
<li>UPerNet (<a href="https://github.com/sithu31296/semantic-segmentation" target="_blank">semantic-segmentation</a>) backbone convnext_large_384_in22ft1k (from timm)</li>
<li>input size: 320x384, 5ch (padding to keep the aspect ratios)</li>
<li>loss: dice+tversky</li>
<li>augmentation: Hflip, ShiftScaleRotate, GridDistortion, RandomBrightnessContrast, RandomGamma, GaussNoise, CoarseDropout</li>
</ul>
<p>3D (by hirune924)</p>
<ul>
<li>3D UNet backnone resnet34</li>
<li>input size: 384x384xD</li>
<li>train patch: 192x192x64</li>
<li>loss: bce+dice</li>
<li>augmentation: <a href="https://github.com/ZFTurbo/volumentations" target="_blank">volumentations</a></li>
</ul>
<p><strong>Post-processing;</strong><br>
No special processing was done, but the threshold was lowered. we seem stomach threshold sensitivity is not high, but small bowel is sensitve. we believe this is due to the fact that GT mask censoring is different for train and test (public).</p>
<p><strong>Supplementary information;</strong><br>
Our goal in this competition was to improve our TensorFlow & TPU skills. When we started using TensorFlow, we found that there was no augmentation library for TensorFlow that works with TPU. So <a href="https://www.kaggle.com/hirune924" target="_blank">@hirune924</a> created an augmentation library for TensorFlow that works with TPU: <a href="https://github.com/hirune924/imgaug-tf" target="_blank">imgaugtf</a>. We think it is easy to use because it can be used like the albumentations package.</p>
UW-Madison GI Tract Image Segmentation | 23rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I want to start this post by thanking my teammates <a href="https://www.kaggle.com/kolyaforrat" target="_blank">@kolyaforrat</a> and <a href="https://www.kaggle.com/artemtprv" target="_blank">@artemtprv</a>. They were really helpful, friendly, and I have learned from them a lot in the short period of time since we have merged.</p>
<p>Our final solution was a blend of 2.5D models and 3D models. </p>
<h2>2.5D approach</h2>
<p>5 models trained on 5 folds of the training set split using group stratified split using mixed precision training.</p>
<ul>
<li>architecture: Unet++</li>
<li>backbone: regnety_160 with imagenet weights</li>
<li>data: (-2, 0,+2) 2.5D Slices - (192, 192) random crops of (320, 384) padded slices with MixUp</li>
<li>augmentations: HorizontalFlips, ElasticTransform, GridDistortion, ShiftScaleRotate, CoarseDropout, RandomBrightnessContrast, RandomGamma, Sharpen</li>
<li>loss function: deep supervision loss of 2*Dice + 3*Focal + BCE for the upper four layers of the Unet++ architecture</li>
<li>epochs: 100</li>
<li>optimizer: Adam</li>
<li>learning rate: 6e-3 with one cycle policy</li>
<li>inference: tta (horizontal flips only) and minimal post processing removing any predictions from slices that didn’t contain any annotations</li>
</ul>
<p>I didn't focus much on post-processing or training sampling strategies, and in hindsight it looks like I should have. Anyway, my pipeline was as simple as this; a single model achieved 0.883 on the public LB and 5 folds achieved 0.884.</p>
<h3>Caveats</h3>
<ol>
<li>Training with larger crops didn’t improve performance and only increased training time</li>
<li>Deep supervision loss was instrumental in the pipeline and significantly improved both local CV and public LB score (0.865 → 0.878)</li>
<li>MixUp enabled breaking a certain barrier in local CV and public LB metric score (0.878 → 0.883)</li>
<li>RandomResizedCrops didn't work as well as plain RandomCrops</li>
<li>The final pipeline took around 12 hours to train on one fold using RTX 5000.</li>
</ol>
<h2>3d pipeline</h2>
<p>We used MONAI Unet multilabel. 3 folds + full train</p>
<ul>
<li>channels=(56, 112, 224, 448, 896)</li>
<li>loss function DiceLoss</li>
<li>optimizer AdamW</li>
<li>lr_mode warmup_restart every 100 epochs<br>
Trained on raw data with random crops of 160, 160, 80</li>
</ul>
<p>Inference is run on the whole case_day for depth 80; for depth 144 we crop it to 124 and run 3 separate inferences with step 20, using a sliding window with roi_size=(288, 288, 80) and overlap=0.9.</p>
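<p>A hedged sketch of that sliding-window call with MONAI (the model and input tensor below are placeholders for illustration only; the window parameters follow the description above):</p>
<pre><code>import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

model = UNet(spatial_dims=3, in_channels=1, out_channels=3,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2))
volume = torch.zeros(1, 1, 288, 288, 80)  # placeholder (B, C, H, W, D) input

with torch.no_grad():
    logits = sliding_window_inference(
        inputs=volume,
        roi_size=(288, 288, 80),
        sw_batch_size=1,
        predictor=model,
        overlap=0.9,
    )
</code></pre>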
<p>Augmentations<br>
RandFlipd, RandAffined, RandGridDistortiond, RandScaleIntensityd, RandShiftIntensity</p>
<h3>Didn't work for 3d:</h3>
<ul>
<li>deep supervision loss</li>
<li>mixup</li>
<li>data cleansing</li>
<li>pseudo labeling on samples with low metric</li>
<li>training only on the areas where masks are present in the train set, for each volume size</li>
</ul>
<p>As postprocessing we used few technics:</p>
<ul>
<li>Delete impossible - delete predictions on all slice positions that never have any masks in the train set</li>
<li>Fill missing - fill an empty slice from its nearest neighbour if slices s-1 and s+1 are both non-empty (sketched below)</li>
<li>Clustering - in the train set every mask appears on some slice and expires on some other. We found the biggest cluster for every case_day and deleted everything not in that cluster</li>
<li>Delete last N - delete the last n and m mask slices for case_days with Z-dim = 144 and 80 respectively. Best values m = n = 2</li>
</ul>
<p>It added + ~0.004 to LB</p>
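<p>For example, the "Fill missing" rule could look roughly like this (a simplified sketch assuming one (D, H, W) binary mask volume per class):</p>
<pre><code>import numpy as np

def fill_missing(masks):
    # If slice s is empty but slices s-1 and s+1 both contain mask,
    # copy the previous (nearest) slice into s.
    out = masks.copy()
    for s in range(1, masks.shape[0] - 1):
        if out[s].sum() == 0 and masks[s - 1].sum() > 0 and masks[s + 1].sum() > 0:
            out[s] = masks[s - 1]
    return out
</code></pre>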
<h2>2.5D and 3D Ensemble</h2>
<p>Ensembling of 2.5d * 0.7 + 3d * 0.3 added ~0.004</p>
<p>The pleasure has been mine <a href="https://www.kaggle.com/artemtprv" target="_blank">@artemtprv</a> 👐</p> |
UW-Madison GI Tract Image Segmentation | 27th place solution , with potential gold | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Glad to know!<br>
And congrats on the Grandmaster !</p>
<p>Hi everyone, and congrats to all the winners!<br>
My solution is pretty much out-of-the-box nnU-Net, which carried me.</p>
<p>I trained 3 models with nnUNet, using the nnUNet TrainerV2 fp32 with different DA (data augmentation) settings:</p>
<p><strong>Public LB -</strong> </p>
<table>
<thead>
<tr>
<th>default</th>
<th>insaneDA</th>
<th>noDA</th>
<th>weighted ensemble (0.6 , 0.25, 0.15)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.881, 0.883 with TTA</td>
<td>0.878 , 0.881 with TTA</td>
<td>0.875</td>
<td>0.885</td>
</tr>
</tbody>
</table>
<p>I added a simple postprocessing step that Darien had posted: remove predictions for slices 138-144 for all cases.<br>
This postprocessing made no difference on the public LB for me, but gave +0.001 on the private LB.</p>
<p><strong>Private LB:</strong></p>
<table>
<thead>
<tr>
<th>default</th>
<th>insaneDA</th>
<th>noDA</th>
<th>weighted ensemble (0.6 , 0.25, 0.15)</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.872</td>
<td>0.872 , 0.877(!) with TTA</td>
<td>0.865</td>
<td>0.877</td>
</tr>
</tbody>
</table>
<p>Some insights -</p>
<p>insaneDA seems to work really well on the private LB but didn't perform well compared to other experiments on the public LB.<br>
I also gave that model a rather low weight of 0.25 in the ensemble, and did not do TTA on it.<br>
0.88 on private would have been pretty feasible with these models, had I gone ahead with the insaneDA model as my main model.<br>
I also wanted to ensemble the 3D nnUNet with a 2.5D model, however I could not cross 0.87 with 2.5D and hence decided not to go for it. However, as per Carno's experiments, ensembling 2.5D and 3D seems to give a pretty good boost (0.003) even if one of them has a relatively low score! I was also able to shake up from 45th on public to 27th on the private LB.</p>
<p>Thanks to everyone who shared their valuable insights , I was able to learn a lot from this competition!</p> |
UW-Madison GI Tract Image Segmentation | 28th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, thanks all organizers for this comp. And congrats to all prize and medal winners!</p>
<p>Here is my brief solution summary:</p>
<h1>Framework</h1>
<p>3D UNet (MONAI). Thanks Wang's discussion: <a href="url" target="_blank">https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/325646</a></p>
<h1>Training Tricks</h1>
<p>All models are trained on the full data. Each model is trained with a large LR for more than 1000 epochs and then fine-tuned for more than 100 epochs. Loss: BCE + Dice.</p>
<h1>Countermeasure for overfitting</h1>
<p>Basic augmentation is the same as Wang's setting. The default dropout in DynUNet (MONAI) is not applied even if the dropout parameter is set &gt; 0, so I use strong augmentation instead (strong aug = basic aug + the following augs).</p>
<ul>
<li>RandAdjustContrastd</li>
<li>RandBiasFieldd</li>
<li>RandGaussianNoised</li>
</ul>
<h1>Detail</h1>
<table>
<thead>
<tr>
<th>Model</th>
<th>encoder / channel</th>
<th>input size</th>
<th>augmentation</th>
<th>supervision num</th>
<th>Public</th>
<th>Private</th>
</tr>
</thead>
<tbody>
<tr>
<td>DynUNet</td>
<td>48,96,192,384,480,480</td>
<td>160 * 160 * 80</td>
<td>strong aug</td>
<td>4</td>
<td>0.878</td>
<td>0.865</td>
</tr>
<tr>
<td>Unet_large</td>
<td>64,128,256,512,1024</td>
<td>224 * 224 * 80</td>
<td>basic aug (dropout=0.2)</td>
<td>0</td>
<td>0.879</td>
<td>0.868</td>
</tr>
<tr>
<td>MSNet</td>
<td>res2net50_v1b_26w_4s</td>
<td>192 * 192 * 80</td>
<td>strong aug</td>
<td>0</td>
<td>0.877</td>
<td>0.870</td>
</tr>
<tr>
<td>DynUNet</td>
<td>32,64,128,256,320,320</td>
<td>192 * 192 * 80</td>
<td>strong aug</td>
<td>4</td>
<td>0.880</td>
<td>0.871</td>
</tr>
<tr>
<td>Ensemble</td>
<td>/</td>
<td>/</td>
<td>/</td>
<td>/</td>
<td>0.882</td>
<td>0.876</td>
</tr>
</tbody>
</table>
<p>Ensemble weight: 0.1, 0.2, 0.35, 0.35<br>
MSNet: Multi-scale Subtraction Network: <a href="url" target="_blank">https://github.com/Xiaoqi-Zhao-DLUT/MSNet</a></p>
<h1>Postprocess</h1>
<p>scipy.ndimage.binary_fill_holes</p>
<h1>What didn't work</h1>
<ul>
<li>Larger model, e.g. res50unet, utnetv2 (CNN combined with transformer techniques), nnFormer, etc. These models may work better with pretrained weights.</li>
<li>RF-Net: A model learning residuals for each layer: <a href="url" target="_blank">https://github.com/mniwk/RF-Net</a>.</li>
<li>3D CapsulesNet: <a href="url" target="_blank">https://github.com/UARK-AICV/3DConvCaps</a>.</li>
<li>Anisotropic Encoder: <a href="url" target="_blank">https://github.com/Shanghai-Aitrox-Technology/EfficientSegmentation</a>.</li>
<li>BPR: Boundary refinement method modified into 3D: <a href="https://github.com/tinyalpha/BPR" target="_blank">https://github.com/tinyalpha/BPR</a>.</li>
<li>Removing small connected components in postprocessing. Even when applied alone to each channel, results were always worse than without it; I guess its threshold should be carefully tuned.</li>
<li>Two stage segmentation: Coarse model to get rough location and a fine model for final seg (not enough time to fix some problems in it)</li>
</ul> |
UW-Madison GI Tract Image Segmentation | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I manually annotated about 200 samples.</p>
<p>Thank you to the competition host, Kaggle, and all participants. And congrats to the winners.</p>
<h1>Overview</h1>
<p>Our solution is a 2-stage pipeline. The first stage model (stage1) classifies slices as positive or negative, and the second stage model actually performs segmentation on the slices that stage1 predicts to be positive.<br>
The key points of our solution are:</p>
<ul>
<li>2 stage pipeline : positive - negative detection (stage1) → segmentation (stage2)</li>
<li>For the stage2, increasing the backbone size improves val_dice</li>
<li>2.5D - 3D ensemble</li>
</ul>
<h1>Single model</h1>
<h2>2.5D model</h2>
<h3>Input</h3>
<ul>
<li>5slice 2.5D data (s-2, s-1, s, s+1, s+2) / 512 x 512</li>
<li>Crop by YOLOv5<ul>
<li>This step has two purposes<ul>
<li><strong>Background removal</strong>: Although the rule-based method can remove some of the background, there are some patterns that cannot be cropped properly due to body motion artifacts and other factors. In such cases, the YOLO-based method works well.</li>
<li><strong>Remove signal from the arms</strong>: Especially in abdominal MRI imaging, the signal around the arms may be unnaturally high due to RF field (B1) inhomogeneity, resulting in a hot spot. When this happens, the min-max normalization of the image does not work well, so the arm is excluded from the annotation phase for YOLO training.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1739577%2Fbd65a6503be956ba834dd98ac0d3a61d%2Fcrop4.png?generation=1657927039321062&alt=media" alt=""><br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F1739577%2F1e2b697b5cfdd80876d89711845f885d%2Fcrop2.png?generation=1657928440501748&alt=media" alt=""></li></ul></li></ul></li>
</ul>
<h3>Model</h3>
<ul>
<li>Backbone<ul>
<li>stage1 = Efficientnet B4, Swin Base</li>
<li>stage2 = Efficientnet L2, ConvNeXt XL, Swin Large<ul>
<li>For the stage2, increasing the backbone size improves val_dice</li></ul></li></ul></li>
<li>Decoder = UperNet</li>
<li>CE / CE and Dice Loss (1:1)</li>
<li>training 20epoch and SWA range(11, 21, 1)</li>
</ul>
<p>Below are the dice scores for each step in stage2, showing that yolo_crop, large_input, and large_backbone are working.</p>
<p>stage2 (training with gt_positive slices) / Baseline = EfficientNet B4, 256x256 </p>
<table>
<thead>
<tr>
<th></th>
<th>Dice @ positive slice</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline</td>
<td>0.8011</td>
</tr>
<tr>
<td>+ YOLOv5 crop</td>
<td>0.8162</td>
</tr>
<tr>
<td>+ 512x512</td>
<td>0.8277</td>
</tr>
<tr>
<td>+ backbone=L2</td>
<td>0.8349</td>
</tr>
</tbody>
</table>
<h2>3D model</h2>
<h3>Model</h3>
<ul>
<li>stage1 = Unet3d, DynUnet (1000epochs)</li>
<li>stage2 = Unet3d (500epochs)</li>
<li>CE and Dice Loss (1:1)</li>
<li>SWA range(300, 501, 10)</li>
</ul>
<p>stage2 (training with gt_positive slices)</p>
<table>
<thead>
<tr>
<th></th>
<th>Dice @ positive slice</th>
</tr>
</thead>
<tbody>
<tr>
<td>Unet3d + SWA</td>
<td>0.8271</td>
</tr>
<tr>
<td>+flip TTA (HW)</td>
<td>0.8307</td>
</tr>
</tbody>
</table>
<h2>Each stage model training</h2>
<ul>
<li>The stage1 model was trained using all slices of the patient.</li>
<li>The stage2 model was trained using only the positive slices. That is, for each slice in the train data we compute <code>is_positive = np.max(mask) > 0</code>, and only these slices were used.</li>
</ul>
<h2>Inferece: 2 Stage Pipeline</h2>
<p>First, a segmentation mask is created for all slices of the patient using stage1, and positive slices are determined using threshold values for each channel. Next, the slices predicted as positive by stage1 are segmented by stage2.</p>
<p>Positives are determined for each slice and channel, as shown in the pseudo code below.</p>
<pre><code>import numpy as np

def is_positive(result, thr):
    # a channel is "positive" if any pixel probability exceeds its threshold
    return np.max(result) > thr

for slice_img in slices:
    result_stage1 = inference_stage1(slice_img)  # (h, w, 3), type = float
    result_stage2 = inference_stage2(slice_img)  # (h, w, 3), type = float
    for i in range(3):
        # use stage2's finer mask only where stage1 says the class is present
        if is_positive(result_stage1[..., i], pos_thrs[i]):
            result_stage1[..., i] = result_stage2[..., i]
</code></pre>
<h2>Acknowledge</h2>
<p>takuoko is a member of Z by HP Data Science Global Ambassadors. Special Thanks to Z by HP for sponsoring me a Z8G4 Workstation with dual A6000 GPU and a ZBook with RTX5000 GPU.</p> |
UW-Madison GI Tract Image Segmentation | 31st solution (could be in gold) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks to Kaggle and Univ of Wisconsin for such a nice competition. Though I still think that they should have covered this discussion in a more serious manner: <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/327350" target="_blank">https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/327350</a></p>
<p>Congratulate to all of the participants who either won a prize or medal, or who learned new things in this competition.</p>
<p>We learned a lot from the public notebooks.</p>
<p>Here is the summary of our method (thank you <a href="https://www.kaggle.com/raabidhussain" target="_blank">@raabidhussain</a> you are a very good teammate):</p>
<p>1) Building a UNET 2.5D model with -2 to +2 slices (thanks to <a href="https://www.kaggle.com/awsaf49" target="_blank">@awsaf49</a>) and using all data to train it. Horizontal flip was used during both training and inference.<br>
2) Building UNET 3D models in MONAI (thanks to <a href="https://www.kaggle.com/yiheng" target="_blank">@yiheng</a>) on 5 different folds. Horizontal and vertical flips were used during training (5000 epochs), but no flip was used during inference. Here was our mistake (we were so biased by the public score that we did not include any flip TTA in the 3D solution, otherwise we would have been in gold and our private score could have increased from 0.876 to 0.879).<br>
3) Ensembling the two models' outputs (half-half).<br>
4) A little bit of ad hoc post-processing. The most important part: determining the slice number below which no organ is detected at the bottom of the abdomen. We checked the total number of annotated voxels in slices -1, -2 and -3; if the number of annotated voxels in a slice was less than a threshold (650), we removed all the annotations in that slice (to improve Hausdorff).</p>
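<p>A rough reconstruction of that last rule in code (our own simplified sketch, assuming the prediction is a (D, H, W) binary volume):</p>
<pre><code>import numpy as np

def trim_sparse_bottom_slices(pred, min_voxels=650, n_last=3):
    # Among the last n_last annotated slices, zero out any slice whose
    # predicted voxel count is below min_voxels (helps the Hausdorff term).
    out = pred.copy()
    annotated = np.where(out.reshape(out.shape[0], -1).sum(axis=1) > 0)[0]
    for s in annotated[-n_last:]:
        if out[s].sum() < min_voxels:
            out[s] = 0
    return out
</code></pre>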
<p>Did not work for us:<br>
UNETR did not work for us. We did not have extra GPUs other than Kaggle's, so we could not investigate this network much. We trained this model for 600 epochs; still, the Dice was about 0.61 and the Hausdorff was about 0.91. So we just gave up.<br>
It would be great if you could share your experiences on this topic.</p>
<p>Thank you.<br>
1) From scratch without any pre-training. We did not use TF, only Pytorch and Monai<br>
2) We followed the exactly what Awsaf and Yiheng proposed. You may follow their works and public notebooks.<br>
3) Cosine, changing from 1e-4 to 1e-6<br>
4) we did not use HD<br>
5) GPU<br>
6) All</p>
<p>Just to complete what raabid mentioned. The UNet, as far as I remember, was fast: around 1-2 min per epoch. For the UNETR I remember it exactly: 3.5 min per epoch.<br>
Maybe it was your loss function that was slow. We did not put HD in the loss.<br>
With Kaggle GPU resources we just did 1 fold; two weeks of work for two people. We had access to a company GPU for a very limited amount of time. For UNETR we could not succeed, as we did not know this network very well and we only had Kaggle GPU resources.</p>
<p>I think you and Awsaf had a key role in this competition. And congrats for all the medals 👍</p>
<p>Very good suggestion. Like a UNETR workshop or something like that. We should propose it to kaggle organizations. </p> |
UW-Madison GI Tract Image Segmentation | 3rd Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Sorry for the late reply; I use type 2 for the classification branch.</p>
<p>I am using the segmentation branch for the classification task; the classification branch is just to assist training. In my local validation, the segmentation branch gives better accuracy.</p>
<p>Thanks to the organizers and congrats to all the winners,I have learned a lot from the discussions and previous competition solutions, thank you for your generous sharing!</p>
<h4>Summary</h4>
<p>My overall solution is similar to the 2nd place solution, but without 3D models. I use a detector to find the main area, then a classifier to judge whether a slice needs to be segmented, and finally segment the slices that need it, keeping only the segmentation results that the classifier judges as positive samples.</p>
<h5>Detail</h5>
<ul>
<li>5 Fold cross validation by group spliting case</li>
<li>Only 2.5D Models (slice=3 and slice=5)</li>
<li>Pre-processing(crop with a detection model), refer to <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107981" target="_blank">[bestfitting]</a>, <a href="https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145" target="_blank">[Guanshuo Xu]</a></li>
<li>My training pipeline (cls + seg method and swa for models) is based on our siim acr solution (<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/108009" target="_blank">SIIM-ACR Solution</a>)</li>
</ul>
<h3>Models</h3>
<h4>Pre-processing</h4>
<ul>
<li>EfficientDet-D0, size=256, epochs=5, all other parameters default. This is a very easy task for the model to learn. I generated some training samples using image preprocessing and relabeled the bad bounding boxes.</li>
</ul>
<h4>Cls part</h4>
<p>I only use Unet (<a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">Smp-Unet</a> and Timm-Unet) with different backbones and a classification branch. Timm-Unet refers to <a href="https://github.com/selimsef/xview3_solution" target="_blank">selimsef's xview3_solution</a>; thanks for his sharing. I then made some modifications to the network for this competition.</p>
<ul>
<li>Cls branch loss: bce loss </li>
<li>Seg branch loss: bce loss</li>
<li>Using ema (refer to <a href="https://github.com/dungnb1333/SIIM-COVID19-Detection" target="_blank">DungNB</a>) and swa</li>
<li>Epoch=35(7 cycle)</li>
<li>Init lr=3e-4 or 5e-4</li>
<li>Data: use all data</li>
<li>Metric for select model: TP / (TP + FP + FN) and Dice score</li>
</ul>
<h5>models</h5>
<ul>
<li>Smp-Unet efficientnet-b7, size=320, slice=5</li>
<li>Timm-Unet efficientnet-v2-l, size=320, slice=3</li>
<li>Timm-Unet efficientnet-v2-l, size=320, slice=5</li>
<li>Timm-Unet efficientnet-v2-m, size=352, slice=5</li>
<li>Timm-Unet efficientnet-b7ns, size=320, slice=5</li>
</ul>
<h4>Seg part</h4>
<p>For the classification model, I mainly focus on classification accuracy. For the segmentation model, I want the best possible segmentation quality, so I train it separately.</p>
<ul>
<li>Cls branch loss: bce loss </li>
<li>Seg branch loss: ComboLoss({'bce': 0.5, 'dice': 0.5, 'lovasz': 1}</li>
<li>Data: All positive samples</li>
<li>Using ema and swa</li>
<li>Metric for select model: Dice score</li>
<li>Timm-Unet efficientnet-v2-l-1 and Timm-Unet efficientnet-v2-l-2 use different scale pretraining adjustments</li>
</ul>
<h5>models</h5>
<ul>
<li>Timm-Unet efficientnet-v2-l, size=384, epoch=35, slice=5</li>
<li>Timm-Unet efficientnet-v2-l-1, size=416, epoch=35, slice=5</li>
<li>Timm-Unet efficientnet-v2-l-2, size=416, epoch=35, slice=5</li>
<li>Timm-Unet efficientnet-v2-m, size=416, epoch=35, slice=5</li>
</ul>
<h4>Post-process</h4>
<ul>
<li>Remove 25 pixel part</li>
<li>Take three consecutively predicted positive slices as the starting point and three consecutively predicted negative slices as the end point (very small improvement; see the sketch below)</li>
</ul>
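<p>A sketch of that second rule (not the exact submission code; <code>is_pos</code> is the per-slice sequence of positive/negative predictions):</p>
<pre><code>def positive_range(is_pos, k=3):
    """Return (start, end): start is where k consecutive positives begin and
    end is where k consecutive negatives begin after that; predictions outside
    [start, end) are dropped."""
    start, end = None, len(is_pos)
    for i in range(len(is_pos) - k + 1):
        window = is_pos[i:i + k]
        if start is None and all(window):
            start = i
        elif start is not None and not any(window):
            end = i
            break
    return start, end
</code></pre>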
<h4>Some Results</h4>
<table>
<thead>
<tr>
<th>fold</th>
<th>size/slice</th>
<th>cls</th>
<th>seg</th>
<th>private/pubilc</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>384/3</td>
<td>efb3-unet</td>
<td></td>
<td>0.869/0.875</td>
</tr>
<tr>
<td>0</td>
<td>384/3</td>
<td>efb5-unet</td>
<td></td>
<td>0.869/0.876</td>
</tr>
<tr>
<td>0</td>
<td>320/3</td>
<td>efb7-unet</td>
<td></td>
<td>0.866/0.878</td>
</tr>
<tr>
<td>0</td>
<td>384/3</td>
<td>efb7-unet</td>
<td></td>
<td>0.870/0.877</td>
</tr>
<tr>
<td>0</td>
<td>320/5</td>
<td>efv2-l-unet</td>
<td></td>
<td>0.871/0.878</td>
</tr>
<tr>
<td>0</td>
<td>320/5</td>
<td>efv2-l-unet(swa)</td>
<td></td>
<td>0.875/0.882</td>
</tr>
<tr>
<td>5 folds</td>
<td>320/3</td>
<td>efb7-unet</td>
<td></td>
<td>0.873/0.883</td>
</tr>
<tr>
<td>5 folds</td>
<td>320/3</td>
<td>efb7-unet</td>
<td>efb7-unet</td>
<td>0.877/0.886</td>
</tr>
<tr>
<td>5 folds</td>
<td>320/5</td>
<td>efv2-l-unet(swa)</td>
<td></td>
<td>0.876/0.886</td>
</tr>
<tr>
<td>5 folds</td>
<td>320/5</td>
<td>efv2-l-unet(swa)</td>
<td>efv2-l-unet(swa-384)</td>
<td>0.879/0.890(not submit)</td>
</tr>
</tbody>
</table>
<h4>Work for me</h4>
<ul>
<li>Adding a detection model (it greatly reduces the resources used for training)</li>
<li>Adding some segmentation models for positive samples (0.886->0.890)</li>
<li>Mixup and cutmix work for the segmentation models</li>
<li>SWA gives a relatively large boost for a single fold, but after the 5-fold ensemble the score difference is not large.</li>
</ul>
<h4>Not work for me</h4>
<ul>
<li>Making some brightness adjustments to the data (spent a lot of time)</li>
<li>Hausdorff distance loss function</li>
<li>External dataset (CT); it may need some processing with GAN models</li>
<li>Positive and negative sample balance</li>
</ul>
<h4>Final</h4>
<p>Because I don't have a local GPU at the moment, for the first two months my models were developed mainly on Kaggle GPUs. The 30-hour weekly quota gave me more time to think and read previous competition solutions. After two months of working on Kaggle GPUs, the best model (classification only) was an effb3-FPN-384 (single fold 0.875); that score was around 90th place at the time, which made me very frustrated. In the last month, I rented some GPUs and trained a lot of large models; luckily, the large models worked well. Due to time and training resource limits, I had no time to try 3D models.</p>
<h4>Infer code</h4>
<p>Link: <a href="https://www.kaggle.com/code/hesene/3rd-place-winning-solution" target="_blank">https://www.kaggle.com/code/hesene/3rd-place-winning-solution</a></p>
<p>Thanks, it was also a journey worth remembering for me</p>
<p>Thanks 😁😁😁</p>
<p>Hi, I did not relabel the data for the classification model. I removed these data (<a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/321979" target="_blank">More Incorrect Masks w/ GIFs and Questions</a>) during training and relabeled them with pseudo labels at the end, but they had little effect on the results.</p>
<p>I am cleaning up my code at the moment; you can refer to our previous solution at <a href="https://www.kaggle.com/competitions/siim-acr-pneumothorax-segmentation/discussion/108009" target="_blank">this</a> link, as the training pipeline is similar.</p>
<p>Thanks.</p>
<ol>
<li>Crop out ROIs offline.</li>
<li>Based on experience from some old competitions (refer to <a href="https://www.kaggle.com/competitions/siim-acr-pneumothorax-segmentation/overview" target="_blank">this</a>) and some offline experimental results.</li>
<li>In this competition, the "heavy" models perform better than "light" models on both the validation set and the public leaderboard, and the different resolutions were chosen based on the validation set results.</li>
</ol>
<p>In the early training, I removed these samples (case7_day0, case81_day30, etc.) from the training set; in the final training, I used the trained model's predictions on these samples as the ground truth for training.</p>
UW-Madison GI Tract Image Segmentation | 42nd place solution (Shakeup) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the University of Wisconsin and the Kaggle Team for this amazing competition. Congratulation to all winners.</p>
<p>Here is a brief summary of my solution:</p>
<p><strong>Preprocessing</strong></p>
<ul>
<li>Clipping intensity top and bottom 5%</li>
<li>Min-max scaling</li>
</ul>
<p><strong>Models</strong></p>
<ul>
<li>Unet3D 32x64x128x256x320</li>
<li>ResUnet3D 32x64x128x256x320 </li>
<li>Unet3D 64x128x256x512x1024</li>
<li>ResUnet3D 64x128x256x512x1024</li>
<li>All models are trained with <strong>deep supervision</strong>.</li>
</ul>
<p><strong>Data Augmentation</strong></p>
<ul>
<li>RandomCrop patch 160x160x80</li>
<li>Flip, ShiftScaleRotate, GridDistortion, RandomGamma</li>
</ul>
<p><strong>Training</strong></p>
<ul>
<li>Fold: 5 (split by cases to avoid leaks)</li>
<li>Validation Metric: Competition Metric</li>
<li>Optimizer: SGD with momentum 0.9</li>
<li>Batch Size: 4</li>
<li>Epoch: 1000 (Takes ~ 60s/epoch)</li>
<li>PolyLRScheduler</li>
</ul>
<p><strong>Inference</strong></p>
<ul>
<li>Sliding window with window size 160x160x80 and stride 80x80x40. Patches are gaussian center-weighted</li>
<li>No TTA / no post-processing :(</li>
</ul>
<p><strong>Model weaknesses</strong></p>
<ul>
<li>False positives due to truncation in the bottom area of the volume -> affect Dice score and Hausdorff</li>
<li>Possible solutions: post-processing, slice classification model</li>
</ul> |
UW-Madison GI Tract Image Segmentation | 5th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi, congrats on your solo 3rd place and on becoming GM!<br>
I didn't do any cleaning or anything else to the generated y, z datasets. Models trained on these datasets do not perform as well as those trained on the x dataset, but we found they help a lot after ensembling.</p>
<p>We just reassemble every prediction coming from the 2.5D models back into a 3D volume and then ensemble with the 3D models.</p>
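<p>In code the idea is roughly the following (a sketch; the ensemble weights here are placeholders):</p>
<pre><code>import numpy as np

def ensemble_25d_3d(slice_probs, probs_3d, w_25d=0.5, w_3d=0.5):
    # Stack the per-slice 2.5D probabilities into a (D, H, W, C) volume and
    # average them with the 3D model's probabilities of the same shape.
    vol_25d = np.stack(slice_probs, axis=0)
    return w_25d * vol_25d + w_3d * probs_3d
</code></pre>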
<p>Thanks to the organizers and congrats to all the winners and my wonderful teammates <a href="https://www.kaggle.com/boliu0" target="_blank">@boliu0</a> </p>
<p>Our single-model performance was not very good, and for a long time we even lost confidence in getting a gold. But surprisingly it worked very well after ensembling. </p>
<p>So what I mainly want to share here is how we lifted the ensemble performance by creating diversity.</p>
<h1>Summary</h1>
<ul>
<li>5 Fold cross validation by group spliting <strong>case</strong></li>
<li>2.5D Models (x, y, z 3 types in total)</li>
<li>3D Models</li>
<li>Post-processing</li>
</ul>
<h1>2.5D Models</h1>
<p>Used library:</p>
<ul>
<li><a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">https://github.com/qubvel/segmentation_models.pytorch</a></li>
<li><a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a></li>
<li><a href="https://github.com/sithu31296/semantic-segmentation" target="_blank">https://github.com/sithu31296/semantic-segmentation</a></li>
</ul>
<p>We first constructed the dataset in 3D format, and then, by cutting along the x, y, and z axes separately, we obtained 3 different types of 2.5D datasets. I'll call them 2.5Dx (original), 2.5Dy, and 2.5Dz.</p>
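<p>A toy sketch of how one 3D volume yields the three 2.5D views (the shape below is just a placeholder; the axis naming follows the 2.5Dx / 2.5Dy / 2.5Dz convention above):</p>
<pre><code>import numpy as np

volume = np.zeros((80, 266, 266), dtype=np.float32)  # placeholder (D, H, W) scan

slices_x = [volume[d] for d in range(volume.shape[0])]        # 2.5Dx: original view
slices_y = [volume[:, h] for h in range(volume.shape[1])]     # 2.5Dy: cut along the 2nd axis
slices_z = [volume[:, :, w] for w in range(volume.shape[2])]  # 2.5Dz: cut along the 3rd axis
</code></pre>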
<p>Then we trained 9 models on those 2.5D datasets:</p>
<ul>
<li><p>efficientnet v2s unet on 2.5Dx dataset</p></li>
<li><p>nfnet l0 unet on 2.5Dx dataset</p></li>
<li><p>convnext small unet on 2.5Dx dataset</p></li>
<li><p>pvtv2b2 segformer on 2.5Dx dataset</p></li>
<li><p>mitb2 segformer on 2.5Dx dataset</p></li>
<li><p>efficientnet v2s unet on 2.5Dy dataset</p></li>
<li><p>convnext small unet++ on 2.5Dy dataset</p></li>
<li><p>efficientnet v2s unet on 2.5Dz dataset</p></li>
<li><p>convnext small unet++ on 2.5Dz dataset</p></li>
</ul>
<h1>3D Models</h1>
<p>Used library:</p>
<ul>
<li><a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">https://github.com/qubvel/segmentation_models.pytorch</a></li>
<li><a href="https://github.com/rwightman/pytorch-image-models" target="_blank">https://github.com/rwightman/pytorch-image-models</a></li>
<li><a href="https://github.com/Project-MONAI/MONAI" target="_blank">https://github.com/Project-MONAI/MONAI</a></li>
</ul>
<p>In this part I believe that we do not do anything different from what most people do.<br>
we trained :</p>
<ul>
<li>resnet unet on 3D dataset</li>
<li>efficientnet v2s unet on 3D dataset</li>
<li>efficientnet v2m unet on 3D dataset</li>
</ul>
<p>The highest public LB score of any single model mentioned above is only 0.875.<br>
For the final ensemble we simply averaged the outputs of all models; with a threshold of 0.3 we get public LB 0.889.</p>
<h1>Post-processing</h1>
<p>We discarded predictions for images with a predicted area of less than 50 pixels. With this we get public LB 0.890.</p>
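<p>In other words, something like this simple check per image and class (a sketch):</p>
<pre><code>import numpy as np

def drop_small_mask(mask, min_pixels=50):
    # Discard a predicted mask whose area is below min_pixels.
    return mask if mask.sum() >= min_pixels else np.zeros_like(mask)
</code></pre>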
<h1>Acknowledge</h1>
<p>Thanks to Z by HP for sponsoring Z by HP Z8G4 Workstation with dual A6000 GPU which I trained many models on. The large memory of 48G for a single GPU allowed me to train 3D models with ease!</p>
<p>Thanks! Yes, it comes from EDA of the training data.</p>
<p>Hi, congrats on your win!<br>
Yes, EfficientNet v2 is from timm; I wrote some code to replace every 2D-related module with its 3D counterpart and it works quite well ;)</p>
UW-Madison GI Tract Image Segmentation | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all, I would like to thank the organizers for hosting such a great competition. </p>
<p><strong>TLDR</strong> <br>
The final submission is an ensemble of 6 models: three 2.5D models and three 3D models. This ensemble scored 0.880 on the private LB. Interestingly, the private LB score without the last added model would have been 0.883.</p>
<p><strong>Folding</strong><br>
Standard 5 folds grouped by case. To increase models diversity I used different 5 fold splitting for each model in ensemble.</p>
<p><strong>Segmentation Loss</strong><br>
Focal + Dice</p>
<p><strong>Validation metric</strong><br>
Only Dice was used as a metric on validation. I separately measure Dice on images with mask, Dice on images without masks and weighted mean of them. It has good enough correlation with LB, but not perfect.</p>
<p><strong>Optimization</strong><br>
AdamW with 0.001 lr at start and around 0.0004 lr at the end of the training.<br>
LR decrease with cosine scheduler and full training was about 25 epochs.</p>
<p><strong>Augmentations</strong></p>
<pre><code>albumentations.HorizontalFlip(p=0.5),
albumentations.ShiftScaleRotate(shift_limit=0.03, scale_limit=(0, 0.1), rotate_limit=20,
border_mode=1, p=0.85),
albumentations.OneOf([
albumentations.ElasticTransform(alpha=1, sigma=50, alpha_affine=10,
border_mode=1, p=0.5),
albumentations.GridDistortion(num_steps=5, distort_limit=0.1,
border_mode=1, p=0.5)
], p=0.2),
albumentations.OneOf([
albumentations.GaussNoise(var_limit=(0.0001, 0.004), p=0.7),
albumentations.Blur(blur_limit=3, p=0.3)
], p=0.5),
albumentations.OneOf([
albumentations.RandomBrightness(limit=0.05, p=1),
albumentations.RandomContrast(limit=0.1, p=1)
], p=0.5)
</code></pre>
<p>But for some models more severe augmentations was used</p>
<p><strong>Data processing</strong><br>
All training data, except bad case_days (case7_day0, case81_day30) are used for training. I also found bad annotation for large bowel in case43_day26 and case138_day0 but decided to leave it as is.<br>
I used min-max data normalization (for 2D and 3D training).</p>
<p><strong>Training strategy</strong><br>
Because of specificity of annotation (no clear criteria for start and end of annotated slices) I worked with two problems: segmentation (predict masks) and classification (predict whether a slice have annotation). I handled this in multi-task manner. All my models are Unet variations (both for 2d and for 3d), so I simply add classification head at the end of encoder of Unet to classify presence of annotation. Additional BCE Loss is used for this head. At the end of competition i also tried to not simply predict one value for every slice but three values for every slice, for each class. And it works even better.</p>
<p>The other problem is the ambiguous annotation (absence of annotation) near the first and last annotated slices. In my opinion it actually is not ambiguous - it is simply wrong (because we can easily distinguish the organs on these slices). So, for all my models (2D and 3D), I ignore such wrong slices when computing the segmentation loss, but the classification loss is still used.</p>
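<p>A simplified sketch of how this slice ignoring can be wired into the multi-task loss (my own reconstruction; plain BCE stands in for the Focal + Dice segmentation loss, and <code>keep_seg</code> marks slices whose masks are trusted):</p>
<pre><code>import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_targets, cls_logits, cls_targets, keep_seg):
    # keep_seg: (B,) bool tensor; slices flagged False (the wrongly
    # un-annotated ones) contribute only to the classification loss.
    cls_loss = F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
    if keep_seg.any():
        seg_loss = F.binary_cross_entropy_with_logits(
            seg_logits[keep_seg], seg_targets[keep_seg])
    else:
        seg_loss = seg_logits.sum() * 0.0  # keep the graph, contribute nothing
    return seg_loss + cls_loss
</code></pre>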
<p><strong>Models</strong><br>
<strong>2.5D models:</strong><br>
All 2.5D models are Unet++ from <a href="https://github.com/qubvel/segmentation_models.pytorch" target="_blank">smp</a> , i also implement deep supervision training (from Unet++ paper) for it. </p>
<p>There are 5 folds for each model, no TTA (maybe full-data training + TTA would be better and more time-efficient).</p>
<p>1) Unet++ with efficientnet-b8 with classification head for 1 output, (384, 384) resized image<br>
<strong>Public: 0.880 | Private: 0.868</strong></p>
<p>2) Unet++ with efficientnet-b4 with classification head for 3 outputs, (384, 384) resized image<br>
<strong>Public: 0.879 | Private: 0.872</strong></p>
<p>3) Unet++ with hrnet-w44 with classification head for 1 output and an additional deep supervision loss, also more severe augmentation, (384, 384) resized image<br>
<strong>Public: 0.879 | Private: 0.874</strong></p>
<p><strong>3D models:</strong><br>
All 3D models are ResidualUNet3D from <a href="https://github.com/wolny/pytorch-3dunet" target="_blank">pytorch-3dunet</a></p>
<p>Also 5 folds for each model, no TTA (maybe full-data training + TTA would be better and more time-efficient).</p>
<p>4) ResidualUNet3D with <code>f_maps = [32, 64, 128, 256, 512]</code> and (80, 256, 256) input size, no classification head<br>
<strong>Public: 0.879 | Private: 0.876</strong> (classification predictions from another 2.5D model are used for this score)</p>
<p>5) ResidualUNet3D with <code>f_maps = [32, 64, 128, 256]</code> and (80, 224, 224) input size, with a classification head for 3 outputs<br>
<strong>Public: 0.877 | Private: 0.871</strong> (this score is without classification predictions; I estimate that using the classification output could add about 0.003)</p>
<p>6) ResidualUNet3D with <code>f_maps = [32, 64, 128, 256, 512]</code> and (80, 256, 256) input size, with a classification head for 3 outputs<br>
<strong>Scores not measured :(</strong> </p>
<p><strong>Ensembling</strong><br>
For submission i choose ensemble of 6 models above, with almost equal weights. Three outputs classification for predicting annotation presence (one output classifiers was converted to 3 output by duplicating outputs)<br>
<strong>Public: 0.886 | Private: 0.880</strong></p>
<p>Without last model, 1-5 models ensembling<br>
<strong>Public: 0.885 | Private: 0.883</strong></p> |
UW-Madison GI Tract Image Segmentation | A Bronze Medal Solution with fast.ai | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Congratulations to all the winners, and big thanks to the hosts for this great competition. It was exciting to participate and easy to approach due to the moderate scope of the data and reliable train/test split.</p>
<p>My solution is rather simple and very similar to many others already shared here: an Efficient Net backbone plus U-Net decoder, nothing fancy. But I wanted to share it here for someone who also uses the <code>fastai</code> library, as this competition helped me to learn more about it by digging deeper into implementation details and applying it in practice.</p>
<p>I created <a href="https://www.kaggle.com/code/purplejester/fast-ai-05-building-a-bronze-medal-solution" target="_blank">a notebook</a> showing basic steps required to build a segmentation model for N-channel images with <code>fastai</code>, <code>segmentation_models_pytorch</code>, and <code>albumentations</code>. It is not difficult to do, but requires some time to figure things out. So this notebook could be a good starting point if you never worked with the library before.</p>
<p>See all of you in one of the next competitions!</p> |
UW-Madison GI Tract Image Segmentation | Transformer-based Solution & What was missing to top spots? | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for the kind words. Let's see. I think winning a gold medal is a different story after all 😅</p>
<p>Thank you :)</p>
<p>Thanks:) I have to see, maybe I'll join another competition soon.</p>
<p>First, I'd like to thank the organizers of this competition. I also want to thank all the fellow Kagglers for sharing their ideas/solutions and for making Kaggle such a nice learning and competition environment.</p>
<p>This competition was my first Kaggle competition, and I am quite happy with the bronze medal I won (<strong>103rd place</strong>). But since my goal on this platform is to improve and learn new things, I analyzed the solutions of the top competitors and compared them to my approach. In the following, I will explain my strategy in general and discuss how it might be improved to reach a better rank. I hope this is interesting for some of you as well. My code and the following analysis can also be found <a href="https://github.com/fafraob/umwgit" target="_blank">here</a>.</p>
<h3>Architecture</h3>
<p>As my model, I used the SSFormer from the paper <a href="https://arxiv.org/pdf/2203.03635v1.pdf" target="_blank">Stepwise Feature Fusion: Local Guides Global</a>. The encoder of this model comes from the <a href="https://arxiv.org/pdf/2105.15203.pdf" target="_blank">Segformer paper</a> and is a <strong>transformer-based architecture</strong> that also incorporates CNN properties.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F10314471%2F338104c3b7b35193a51ed72505edf9d5%2Fssformer.png?generation=1658071236338913&alt=media" alt=""><br>
I used the SS-Former-S, which is the version with fewer parameters. Moreover, pretrained weights were used to initialize the model in the beginning.</p>
<h3>Settings</h3>
<ul>
<li>5 "stratified-group-fold" ensemble<ul>
<li>group splitting by case</li>
<li>ensembling by averaging over the model outputs </li></ul></li>
<li>2.5D input -> add one slice before and after the target slice with stride two (for slice 3, we get [slice 1, slice 3, slice 5]); a sketch of this stacking follows the list</li>
<li>batch size of 4 (due to memory limit and I wanted to train two models at a time)</li>
<li>30 epochs</li>
<li>polynomial LR scheduler with start LR 1e-4 and end LR 1e-7</li>
<li>resize images to 640x640</li>
<li>random rotation, crop and dropout augmentations during training</li>
<li>horizontal flip test time augmentation</li>
<li>0.5 DiceLoss + 0.5 BCELoss</li>
</ul>
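<p>As a rough illustration of the 2.5D stacking mentioned in the settings, a minimal NumPy sketch could look like the following (the <code>slices</code> list, ordered by slice number within one case/day, and the clamping at the volume borders are my assumptions, not the author's code):</p>
<pre><code>import numpy as np
from typing import List

def make_2_5d_input(slices: List[np.ndarray], idx: int, stride: int = 2) -> np.ndarray:
    """Stack the target slice with one neighbour before and after it (stride 2)."""
    lo = max(idx - stride, 0)
    hi = min(idx + stride, len(slices) - 1)
    return np.stack([slices[lo], slices[idx], slices[hi]], axis=-1)
</code></pre>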
<p>The scores of this 5-fold ensemble are <strong>0.869</strong> on the private and <strong>0.883</strong> on the public leaderboard.</p>
<h3>What was missing to the top places?</h3>
<ul>
<li>do the same thing I did for one model with several models and ensemble them<ul>
<li>this is the <strong>single most important point</strong> (probably giving the largest boost)</li>
<li>my SSFormer is already on par with single 2.5D models of the top solutions (when they were trained on all the data like mine) 🎉</li>
<li>I also had a Unet (EfficientNet encoder) with similar (but slightly worse) performance than my SSformer, which I did not use (because I got tricked by the public leaderboard 😅)</li>
<li>another example: by only adding an older version of my SSFormer ensemble to the whole ensemble, I could have ended up at rank 63 (I evaluated this ensemble but did not choose it for the final submission, because I got tricked by the public leaderboard again 😵😂)</li></ul></li>
<li>use some kind of classifier to detect if the given slice has a non-zero mask or not (i.e. if stomach or intestines are shown on the image or not)<ul>
<li>train this classifier (better also as ensemble) on the whole data</li>
<li>then, train segmentation models only on the data containing masks so that it can fully focus on this task (since there was not too much data in this competition)</li>
<li>during inference, set mask to zero if the classifier says that there is no mask, otherwise use the segmentation models</li></ul></li>
<li>many top teams also used 3D models, which performed very well, in their ensembles</li>
<li>in preprocessing, use a localizer to focus on the important parts of the image and to reduce the background<ul>
<li>it's not worth wasting computing time on irrelevant parts of the image; also, on whole images the model has to cope with scale and shift variation</li>
<li>might require manual annotation to train a simple CNN localizer</li></ul></li>
<li>use stronger augmentation; seemed to have helped a lot of competitors but didn't in my tests; maybe check more carefully next time</li>
<li>team up<ul>
<li>everything I just mentioned here takes quite some time to implement and tune</li>
<li>also, GPU time is often limited</li>
<li>hence, team work makes the dream work </li></ul></li>
</ul> |
UW-Madison GI Tract Image Segmentation | [43rd place] Our CV strategy (+77 positions in private) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: UW-Madison GI Tract Image Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>We would like to share the strategy that we have followed to split our training dataset, in case it is of interest to anybody. Our CV score highly correlated with the LB along all the competition (+0.002), and it is partly responsible for our boost on the private leaderboard.</p>
<ol>
<li><p><strong>Grouping</strong> : as many Kagglers mentioned, it was very important to group the different cases by fold. To do this, we can use sklearn <code>GroupKFold</code>. We believe that this point was done by every team.</p></li>
<li><p><strong>Stratifying</strong> : many alternatives can be observed throughout the different public kernels to perform stratification. In our case, we believed it was important to keep the distribution of the different possible combinations of multiple classes labeled in the same slice. By doing so, we wanted to prevent models from always predicting specific combinations. That is, if, for example, the proportion of (small bowel, large bowel) was greater than others, the model would tend to always predict both of them together.</p></li>
<li><p><strong>Metric</strong> : the last thing that we believe is key to getting good CV-LB correlation is implementing the right metric. Since no official version was published, it was a great challenge to reverse-engineer it from the brief description given. The final version of our metric is described in <a href="https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337388" target="_blank">this post</a>.</p></li>
</ol>
<p>I have uploaded <a href="https://www.kaggle.com/code/atmguille/uw-madison-43rd-cv-strategy-77-in-private" target="_blank">this notebook</a> that implements the CV split as described.</p>
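<p>For illustration, a minimal sketch of such a split using sklearn's <code>StratifiedGroupKFold</code> is shown below. It is an approximation of the approach described above, not the exact code from the notebook, and the column names (<code>case</code> plus three binary class-presence columns) are assumptions:</p>
<pre><code>import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

# one row per slice; 'large_bowel', 'small_bowel', 'stomach' are 0/1 presence flags
combo = (df['large_bowel'].astype(str)
         + df['small_bowel'].astype(str)
         + df['stomach'].astype(str))      # class-combination label to stratify on

sgkf = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
folds = np.zeros(len(df), dtype=int)
for fold, (_, val_idx) in enumerate(sgkf.split(df, y=combo, groups=df['case'])):
    folds[val_idx] = fold
df['fold'] = folds
</code></pre>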
<p>I would love to read other proposals on how to do the CV, as it appeared to be a key part, so please do share them in the comments!</p> |
Ubiquant Market Prediction | 17th Place Solution, 1000+ on public lb | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It is a little unorthodox. I was attempting to maximize correlation, which is much different from other loss functions because MSE, for example, can be calculated with a single data point. Correlation depends upon the relative predictions of a group of points. So, when I used variance to maximize correlation, I increased the batch size, which would increase the number of points in the group that variance was being calculated for. I figured if the group size was too small then correlation might vary too much, which would worsen the model. That is why I split each epoch into two. </p>
<p>If I trained two MLP models instead of splitting, it would cause very poor predictions, because the model that is using variance will eventually start to learn to predict extremely large numbers. Negative variance can be minimized by simply increasing the magnitude of the predictions. This is why I added L1 loss in conjunction with variance and had a second run through the data set with a more orthodox L2 loss. This kept the model from predicting excessively large numbers while also maximizing correlation.</p>
<p><a href="https://www.kaggle.com/code/petersk20/17th-place-solution/notebook" target="_blank">Link to Model</a></p>
<p><strong>Model Design:</strong></p>
<ul>
<li><p>Used all the features except for ['f_170','f_272','f_182','f_124','f_200','f_175','f_102','f_153','f_108','f_8','f_145', 'f_225', 'f_241', 'f_63', 'f_229', 'f_246', 'f_41', 'f_66', 'f_142', 'f_150', 'f_99', 'f_74', 'f_62', 'f_271'] which were removed using permutation importance.</p></li>
<li><p>Chinese stocks are infamous for having suspensions in trading. So, I engineered a feature called missing which checked if the stock was present one time step before (a sketch of this feature appears after this list). My model using this feature scored 0.117721 vs 0.115486 without it (my second submission was very similar but without this variable). This is a small difference but would have dropped me to 84th place. </p></li>
<li><p>Only used stocks after the 850th time interval and weighted newer time steps more.</p></li>
<li><p>Scaled the target using the mean and std within each time step independently of other time steps. </p></li>
<li><p>Applied batch norm to all the features except for missing and put the features into a network with two hidden layers (1000, 512).</p></li>
<li><p>I trained for 21 epochs. I went through the training set twice per epoch. The first run used L2 as the loss and a batch size of 128, whereas the second run used L1 loss and variance with a large batch size of 1000. The two runs also had different learning rates: the first one had a decaying lr and the second one had a small lr that didn't decay (.00002 vs .0000006). I got the predictions on the 11th, 16th, and 21st epochs and averaged the predictions to get the final.</p></li>
</ul>
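<p>For illustration, here is a hedged sketch of the "missing" flag and the per-time-step target scaling described above; it is my own reconstruction in pandas (using the competition's column names), not the code from the linked notebook:</p>
<pre><code>import pandas as pd

def add_missing_flag(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows whose investment_id was absent at the previous time step."""
    df = df.copy()
    # map each time_id to its ordinal position so "one step before" is well defined
    order = {t: i for i, t in enumerate(sorted(df['time_id'].unique()))}
    df['t'] = df['time_id'].map(order)
    prev = df[['t', 'investment_id']].copy()
    prev['t'] += 1
    prev['present_before'] = 1
    df = df.merge(prev.drop_duplicates(), on=['t', 'investment_id'], how='left')
    df['missing'] = df['present_before'].isna().astype('int8')
    return df.drop(columns=['t', 'present_before'])

def scale_target_per_time(df: pd.DataFrame) -> pd.DataFrame:
    """Standardise the target within each time step, independently of other time steps."""
    df = df.copy()
    grp = df.groupby('time_id')['target']
    df['target_scaled'] = (df['target'] - grp.transform('mean')) / grp.transform('std')
    return df
</code></pre>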
<p><strong>Takeaways:</strong></p>
<ul>
<li><p>I wish I had used a lightgbm model to blend with my NN. I had seen a good improvement in the public lb with this strategy but didn't have the time to tinker with a lightgbm model to get it to not cause memory issues.</p></li>
<li><p>I also learned from the G-Research competition not to run too many experiments to find optimal values for the hyperparameters and other design decisions of the model due to the low signal to noise ratio present in financial markets. I think this helped me not to overthink the design of the model and do a bunch of work for little gain. </p></li>
<li><p>I'm also glad I trusted my model because my score with this notebook on the public lb was 0.1487 while most people were at least getting over 0.150.</p></li>
</ul>
<p>I think it is important to focus on both but if you make very small changes to your model design or hyper parameters and then test your change it typically will not help you much. On one seed, one might perform better than the other. On another seed, it could be the opposite. You end up overfitting to your validation set or at the very least get a model that performs just as good as a model 100 experiments back on new testing data. Financial data is very easy to overfit to a back test. I had .06 correlation in several validation sets in the G-Research competition. I got no where near that in the hold out test set. If I stopped a month before the end of that competition, I would have probably ended with very similar results. </p> |
Ubiquant Market Prediction | 3rd Place Solution - 5 seeds ensemble transformer | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Yes, you are right.</p>
<p>investment-wise ~</p>
<p>I check mean & std.</p>
<p>Yes~~~~</p>
<p>This code is too long.😂 </p>
<ol>
<li>It should be max_seq_length. Each sample consists of 3500 investments.</li>
<li>No. It is sample-level.</li>
</ol>
<p>First, many thanks to the Kaggle team and the Ubiquant market team for hosting this competition, especially for all the efforts fixing the bug at the first update. And congrats to all winners! I am very lucky to win 3rd place in my first financial comp.</p>
<p><strong>Models</strong><br>
A 6-layer transformer, max_seq_length=3500 investments</p>
<p><strong>Loss</strong><br>
Optimize PCCLoss directly</p>
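<p>"Optimize PCCLoss directly" presumably means using the negative Pearson correlation coefficient as the training loss. A straightforward PyTorch sketch of such a loss is shown below (my own formulation; the author's exact implementation may differ):</p>
<pre><code>import torch
import torch.nn as nn

class PCCLoss(nn.Module):
    """Negative Pearson correlation between predictions and targets of one sample."""
    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pred = pred - pred.mean()
        target = target - target.mean()
        corr = (pred * target).sum() / (pred.norm(2) * target.norm(2) + 1e-8)
        return -corr
</code></pre>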
<p><strong>Training Method</strong><br>
10 epochs on training data and 3 epochs on supplemental data</p>
<p><strong>Feature Engineering</strong><br>
original 300 features</p>
<p><strong>Augmentation</strong><br>
random zero (feature level) + random mask(sequence level)</p>
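<p>One plausible reading of these two augmentations, for a sample of shape (investments, features), is sketched below; the probabilities and the exact masking semantics are my assumptions:</p>
<pre><code>import torch

def augment(features: torch.Tensor, p_zero: float = 0.1, p_mask: float = 0.1) -> torch.Tensor:
    """features: (seq_len, n_features) -- all investments of one time_id.
    Random zero drops individual feature values; random mask drops whole investments."""
    feat_drop = torch.rand_like(features) < p_zero                                # feature level
    seq_drop = torch.rand(features.size(0), 1, device=features.device) < p_mask   # sequence level
    return features.masked_fill(feat_drop | seq_drop, 0.0)
</code></pre>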
<p><strong>Validation Strategy</strong><br>
last k (k=100,200,300) validation</p>
<p><strong>Ensemble Strategy</strong><br>
5 seeds ensemble</p>
<p><strong>Rank Journey</strong><br>
900+(public lb)->failed->7->7->4->3</p>
<p><strong>What Didn't Work</strong><br>
feature clipping<br>
avg features grouped by time_id (maybe I'm wrong 😂)<br>
feature selection by corr<br>
sample selection or sample weighting<br>
target normalization or target clipping<br>
tried LGBM, MLP, 1D-CNN, but the transformer outperformed these models. I was too lazy to ensemble.</p>
<p>Running same model by different random seeds and average the results.</p> |
Ubiquant Market Prediction | 5th place solution, single NN model | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>20 models, each model trained with just 20% of all data and validated with 5% </p>
<p>multilayer perceptron (300,256,128,32).</p>
<p>Logarithmic transformation:<br>
<code>train['target'] = np.log1p(train['target'] * 0.1)</code></p>
<p>Removed rows outside upper and lower limits of <code>8 * train.target.std()</code> --> just 127 rows.</p>
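<p>Putting these pre-processing steps together, a minimal sketch could look like the following (the ordering of the steps and the QuantileTransformer settings are my assumptions, not necessarily the notebook's exact code):</p>
<pre><code>import numpy as np
from sklearn.preprocessing import QuantileTransformer

# target: shrink, log-transform, then drop extreme rows
train['target'] = np.log1p(train['target'] * 0.1)
limit = 8 * train['target'].std()
train = train[train['target'].abs() <= limit]

# features: no engineering, just map each feature to a normal distribution
feature_cols = [c for c in train.columns if c.startswith('f_')]
qt = QuantileTransformer(n_quantiles=1000, output_distribution='normal', random_state=0)
train[feature_cols] = qt.fit_transform(train[feature_cols])
</code></pre>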
<p>Thanks to Ubiquant and everyone involved in the competition, especially to all the Kagglers who shared their knowledge in some amazing notebooks.<br>
The idea for my notebook was to make it as simple and stable as possible.</p>
<table>
<thead>
<tr>
<th>Rank</th>
<th>Score</th>
<th>Update</th>
</tr>
</thead>
<tbody>
<tr>
<td>1344</td>
<td>0.1481</td>
<td>publicleaderboard</td>
</tr>
<tr>
<td>16</td>
<td>0.0865</td>
<td>publicleaderboard_update1</td>
</tr>
<tr>
<td>24</td>
<td>0.1141</td>
<td>publicleaderboard_update2</td>
</tr>
<tr>
<td>11</td>
<td>0.1304</td>
<td>publicleaderboard_update3</td>
</tr>
<tr>
<td>13</td>
<td>0.1239</td>
<td>publicleaderboard_update4</td>
</tr>
<tr>
<td>5</td>
<td>0.1198</td>
<td>publicleaderboard_update_final</td>
</tr>
</tbody>
</table>
<ol>
<li>Training Data: train.csv (with time_id >599) + supplemental_train.csv </li>
<li>Target log transformation and removal of 127 target outlier rows.</li>
<li>No feature engineering, just transformed the features with sklearn QuantileTransformer.</li>
<li>Simple NN model with four dense layers (optimizer=Adam, loss='mse', metrics=[rmse, wcorr])</li>
<li>Custom Cross Validation for Training with 20 folds and 10 purge time_id.</li>
</ol> |
Ubiquant Market Prediction | 7th place solution, single model, no supplemental data used | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, sorry, these are some heuristics that I have been working on for the past few years.</p>
<p>Hi,<br>
A bit of self-introduction first. I currently work for a small hedge fund in Beijing. I came to this competition to test some of my ideas from work, and see if they apply to a different set of features. My solution is really a baseline model, with no reverse engineering, no ensemble, and even the supplemental training data provided were not used. </p>
<p>I won't disclose my full solution but here are some facts about the model.</p>
<p>Model and hyperparameter:<br>
The model is a single LGB model with hand-tuned parameters; no tuning packages were used. The 'extra_trees' parameter is set to 'True'. This gives a steady improvement when the number of trees grows large.</p>
<p>Feature engineering and selection:<br>
This is kind of my secret. The model takes in 900+ features, which are selected from an even larger feature pool.<br>
As a control, in the second submission I had a similar model that only used the 300 original features. That one scored 0.112, which was not even in the medal range.</p>
<p>Cross-validation:<br>
Standard TimeSeriesSplit applies.</p>
<p>I guess I got lucky, but not in the sense of a magical seed; rather, most people are unfamiliar with financial markets and thus overfit on the public leaderboard. Thanks to everyone who participated in the competition.</p> |
Ubiquant Market Prediction | 8th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First of all thanks to all the people organizing and running this competition, it's been a fun experience for me throughout regardless of the result.</p>
<p>My model is a weighted ensemble of 10 of the same LGBM, 30 of the same NN and a handbuilt model. The reason for taking so many of the same model was to try to average out some of the randomness due to initial conditions.</p>
<p>Both the NNs and the LGBM used the same features, but because of the 16GB memory limitations of Kaggle notebooks I actually had to split the dataset into two. Gladly, during testing on smaller subsets of the data, this didn't seem to have any noticeable impact on the metric score that could be distinguished from random chance.</p>
<p>The features were:</p>
<ul>
<li>the features as given by ubiquant</li>
<li>the average of each feature per time id</li>
<li>the rolling average of a few hand-selected features over time for the particular investment id, minus the current value of the feature (see the sketch after this list)</li>
</ul>
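<p>A hedged pandas sketch of the rolling-average feature is given below; "rolling average ... minus the current value" is read here as an expanding mean per investment_id, since no window size is specified in the post:</p>
<pre><code>import pandas as pd

def rolling_avg_minus_current(df: pd.DataFrame, col: str) -> pd.Series:
    """Running (expanding) mean of `col` per investment_id, minus the current value."""
    df = df.sort_values(['investment_id', 'time_id'])
    running = df.groupby('investment_id')[col].transform(lambda s: s.expanding().mean())
    return running - df[col]
</code></pre>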
<p>The reason for taking the average of the features by time id was to have some pool of performance to compare yourself to, probably better results could be achieved if you sorted this by sector.</p>
<p>the hand selected features were the following:<br>
33, 120, 225, 242, 266, 293<br>
and<br>
15, 31, 83, 157, 164, 189, 197, 215, 226, 231, 237, 239, 243, 250</p>
<p>The process of finding these features was to go through each of the features and check whether, when the feature value is higher or lower by some amount than the rolling average of the feature up to that point, that correlates with a higher probability of the target then being either positive or negative.</p>
<p>More specifically I looked for properties I considered to be "good" properties, which basically means that when the value goes up by some amount I want the probability of the target being either negative or positive to go up. Any features that were jumpy, in the sense that they didn't consistently follow the pattern of higher or lower value->higher probability, were discarded. I did this with every feature over many different baskets of investment ids, and when, for any of these batches, the "good" property was lost, I discarded that feature.</p>
<p>Using only these handmade rolling average subtracted features and summing them all up the "good" property was of course preserved and I could predict the sign of the target up to a point of about 70% accuracy… although at that point the amount of times this opportunity would come would be less than 1% of the cases. Which, considering that we're trying to predict future market results, seems like an entirely reasonable number for a well working model.</p>
<p>For me, personally, this would have been my final model because I thought that with purely anonymous features you really couldn't do much better. But this only translated to a metric score of about 0.08 which was far below what could be achieved by just naively putting the data into your favorite machine learning model.</p>
<p>For these ML models however, some of the "good" property got lost! One way to think about this is that when the output of the ML model goes up it should translate to a higher probability of the target being positive and vice versa when the output goes down. The problem is that it just hovers between 50-57% not even breaching 60% let alone getting anywhere near 70% my hand-built model could achieve. The good thing however is that the pearson correlation metric was a lot higher for these ML models. To try to fix that problem a bit I just added my handbuilt model on top of the ML models when my handbuilt model reaches some threshold like 60% probability.</p>
<p>So my final model turned out to be:</p>
<p><strong>((10xLGBM)/2 + (30xNN)/2)x0.92 + (handbuilt model)x0.08</strong></p>
<p>Low weight for the handbuilt model because the predictions tended to be higher in absolute terms. This at least made it such that it manages to breach the 60% probability mark.</p>
<p>For validation I used the last 25% of the data which gave me a CV of around 0.17 </p>
<p>For the final submission I used the entire dataset which translated to a LB of around 0.15.</p>
<p>Finally to make use of the two submission that were given I used one submission for a bear market by multiplying all negative predictions by 1.4 and dividing all positive predictions by 1.4 and vice versa for a bull market.<br>
The reason for choosing the value of 1.4 was to just go through a bunch of different values from 1 to 2 and seeing how that affected the CV. It's been a while but I think on average it was something like:</p>
<p>1.1: +0.001<br>
1.2:+0.002<br>
1.3:+0.003<br>
1.4:+0.0035<br>
1.5:+0.003<br>
1.6:+0.001</p>
<p>In the case of my final submission it seems to have made a difference of something between +0.001 and +0.002.<br>
The ranking over the 3 months was something like <br>
~900 LB ->~100->~20->~30->~20->8</p>
<p>Without knowing what any of the features mean it's hard to tell if anything I did made any sense and I believe that much better things could be done when knowing the features' meanings.</p> |
Ubiquant Market Prediction | [1st Place Solution] - Our Betting Strategy | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>This is just intuition. <br>
However, in this case, I think that it will be possible to create a new feature by examining the correlation between each target and feature between different investment_ids to determine the presence or absence of features for the overall market flow.</p>
<p>Thanks to Ubiquant and everyone involved for hosting the contest.<br>
And all the competitors who were with us on the six-month journey worked hard.</p>
<p>In particular, the Alphas provided by Ubiquant showed relatively stable and high correlation with the target in the ever-changing financial market.<br>
It was a valuable and enjoyable experience to utilize such high-quality data. Thanks again.</p>
<p>Although we were the winners, this is the result of many Kaggler's discussions and public notebooks.<br>
Also, I think we were lucky enough that the market conditions made our model smile.<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6960421%2F96b53fa87660ab5e1062798347c21529%2Fszse_in_comp_period.png?generation=1658164964908086&alt=media" alt=""><br>
Nevertheless, we want to share the process we went through to increase our chances of getting this luck.</p>
<p>I. Summary</p>
<blockquote>
<p>A. Used Models: LGBM, TABNET<br>
B. Feature Engineering: 300 + 100<br>
C. Data Sampling: (train.csv + supplemental_train.csv)[2400000:]<br>
D. Cross Validation for FE and Parameter Tuning: PurgedGroupTimeSeries, TimeSeriesSplit<br>
E. Cross Validation for Training : KFold</p>
</blockquote>
<p>II. Detailed description</p>
<p>A. Used Models : LGBM, TABNET</p>
<ol>
<li>Sorry if you were expecting a special model, this time it's LGBM and TABNET. :D</li>
<li>LGBM is a powerful model whose performance has been proven in many competitions. It was also the most stable (especially the consistency of CV and LB) and excellent in the experiment on the competition data. (at least for me)</li>
<li>In addition, candidate models to be ensembled in LGBM were found and tested. Among them, TABNET was selected with the best ensemble effect while being relatively stable.</li>
<li>Some Custom MLP models were also candidates, and there were models with a significant ensemble effect even on the LB basis, but as a result, they were excluded because they were not stable in CV.</li>
<li>For loss_fn, rmse and mse are used, respectively, and Pearson Corr. is commonly used for eval_metric.</li>
<li>The ensemble method : Average of (LGBM x 5 Folds) + (TABNET x 5 Folds)</li>
</ol>
<p>B. Feature Engineering: 300 + 100<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-forum-message-attachments/o/inbox%2F6960421%2F0286cd6a82565ccd3c0e62b391b0b7f7%2F300%20_2.jpg?generation=1658254344800422&alt=media" alt=""></p>
<ol>
<li>First, to reduce or remove the influence of unnecessary features, we investigated the importance and corr. of each feature, but did not find any meaningful numerical evidence, and then conversely, we contemplated the features to be added.</li>
<li>The above 300 means built-in features, and the added 100 features showed consistent and significant improvement in CV and LB Score (CV: 0.141 -> 0.154, LB: 0.141 -> 0.149 based on LGBM single model) .</li>
<li>Although we only used 100 additional features, this was the largest compromise within the limits allowed by Kernel Resources (especially RAM), and I guess that there would be an additional score increase if more useful features were included.</li>
<li>The added 100 is calculated in the following way.<br>
[ The average value at each time_id for the top 100 features by obtaining and sorting the corr. of 300 features and each target ]<br>
[ Code ]</li>
</ol>
<pre><code>features = [f'f_{i}' for i in range(300)]

corr = train_df[features[:] + ['target']].corr()['target'].reset_index()
corr['target'] = abs(corr['target'])
corr.sort_values('target', ascending=False, inplace=True)
best_corr = corr.iloc[3:103, 0].to_list()

time_id_mean_features = []
for col in tqdm(best_corr):
    mapper = train_df.groupby(['time_id'])[col].mean().to_dict()
    train_df[f'time_id_{col}'] = train_df['time_id'].map(mapper)
    train_df[f'time_id_{col}'] = train_df[f'time_id_{col}'].astype(np.float16)
    time_id_mean_features.append(f'time_id_{col}')

features += time_id_mean_features
</code></pre>
<p>C. Data Sampling: (train.csv + supplemental_train.csv)[2400000:]</p>
<ol>
<li>The above means the concatenation of train.csv and supplemental_train.csv, meaning that we used the last 2400k rows of it.</li>
<li>The reason for this sampling was related to FE of 2, of course, the added features caused a huge memory increase, and we had to trade off [more features] VS [more data].</li>
<li>For this, various probabilistic measures were performed on Score gains on PurgedGroup and TimeSeriesSplit CV, and we finally decided that additional features have a probabilistic advantage in score improvement.</li>
<li>After that, as a result of testing in memory (RAM 13GB), the Data Row was stable up to approximately 2500k, but in order to pursue more stability, an additional 100k was dropped. :(</li>
</ol>
<p>D. Cross Validation for FE and Parameter Tuning : PurgedGroupTimeSeries, TimeSeriesSplit</p>
<ol>
<li>As already described above, various CVs were used to measure the performance of FE, and the most effective FE was selected from all of these CVs.</li>
<li>This course also includes Hyper Parameter Tuning.</li>
</ol>
<p>E. Cross Validation for Training : KFold, GroupKFold</p>
<ol>
<li>There was another trivial strategy for this choice, which is a self-test data set, which looks like this:<br>
training set: (time_id >= 0) and (time_id <= 1000)<br>
test set: (time_id >= 1001) and (time_id <= 1202)</li>
<li>There were various training methods, but as a result of testing in the above environment, [limited training KFold] was selected as the method that showed the most stable and excellent results.</li>
<li>To elaborate a bit, this has the advantage of being able to include more data in a variety of ways, while at the same time risking overfitting for future references.</li>
<li>So, to reduce the risk of overfitting, we used an early stop for validation and a method of limiting the number of training rounds (num_boost_round or epochs) to a certain value or less.</li>
</ol>
<p>P.S.</p>
<ul>
<li>Second Submission?<br>
I didn't mention it above, but yes, I also want to keep the second submission short.<br>
Our second submission takes a very different direction from the above.<br>
First, supplemental_train.csv was not included; the above models plus MLP and CatBoost were included, and there were a few more features (about 150), and it was considerably overfitted with unlimited training (of course, there is an early stop for validation). However, this submission resulted in a silver medal score (0.115796). In this part, I think there was an effect from the exclusion of supplemental data or excessive CV and LB overfitting.<br>
Then, in time series data training, I think that the method such as early stop at the time when the CV score decreases to a few percent or less is also a factor to consider, but it is only a guess.</li>
</ul>
<p>Many Thanks.</p>
<p>=============================================<br>
Answering for this guys > <a href="https://www.kaggle.com/agenlu" target="_blank">@agenlu</a> <a href="https://www.kaggle.com/hdynamics" target="_blank">@hdynamics</a> <a href="https://www.kaggle.com/ygygyv" target="_blank">@ygygyv</a> <br>
How to make a feature from test? > Take the average of each feature in the test set by referring to the list of features stored in best_corr in the code above.<br>
like this,<br>
[Code]</p>
<pre><code>for col in best_corr:
    test_df['time_id'] = test_df['row_id'].str[0:4].astype(np.int64)
    mapper = test_df.groupby(['time_id'])[col].mean().to_dict()
    test_df[f'time_id_{col}'] = test_df['time_id'].map(mapper)
</code></pre>
<p>Thanks. I have added the content of what you said to the post.<br>
The answer is Average of (LGBM x 5 Folds) + (TABNET x 5 Folds).</p>
<p>Take the average of each feature in the test set by referring to the list of features stored in best_corr in the code above.</p> |
Ubiquant Market Prediction | [2nd Place Solution] - Robust CV and LGBM | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ubiquant Market Prediction
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>STARTING_NUMERIC_FEAT_NAME is the list of features from F_0, …, F_299:</p>
<pre><code>data[STARTING_NUMERIC_FEAT_NAME].mean(axis=1).astype(np.float16)
data[STARTING_NUMERIC_FEAT_NAME].std(axis=1).astype(np.float16)
data[STARTING_NUMERIC_FEAT_NAME].quantile(q=0.1, axis=1).astype(np.float16)
data[STARTING_NUMERIC_FEAT_NAME].quantile(q=0.5, axis=1).astype(np.float16)
data[STARTING_NUMERIC_FEAT_NAME].quantile(q=0.9, axis=1).astype(np.float16)
</code></pre>
<p>They represent a summary of all the information about a certain asset at a given time.</p>
<p>As the metric score I used simple correlation, not the mean of correlation by time ID, because I observed they were quite similar (well correlated).</p>
<p>By saying "based on CV correlation" I mean that I selected the best number of trees depending on the CV score. I didn't use early stopping on each validation set.</p>
<p>I don't understand the last question ("… how did you get 5 LightGBM?")</p>
<p>Thanks to Ubiquant for hosting this competition!</p>
<p>This competition was really challenging and pushed me through lots of computing optimization.</p>
<p><strong>My journey was:</strong></p>
<table>
<thead>
<tr>
<th>Rank</th>
<th>Score</th>
<th>Update</th>
</tr>
</thead>
<tbody>
<tr>
<td>34</td>
<td>0.082800</td>
<td>first update</td>
</tr>
<tr>
<td>12</td>
<td>0.115900</td>
<td>second update</td>
</tr>
<tr>
<td>4</td>
<td>0.133100</td>
<td>third update</td>
</tr>
<tr>
<td>2</td>
<td>0.128200</td>
<td>forth update</td>
</tr>
<tr>
<td>2</td>
<td>0.123175</td>
<td>fifth update</td>
</tr>
</tbody>
</table>
<p>My model is quite simple because I worked mainly on making the code robust to bugs and over-fitting.<br>
I used all available data: train.csv + supplemental_train.csv (it was difficult to optimize the pipeline without going out of memory)</p>
<p><strong>FE</strong><br>
300 basic columns.<br>
100 new columns: the average by time_id of the most correlated features with the target over the latest 1000 time_ids with more than 31 observations (statistical magic number :D)<br>
5 macro aggregations: for each row (time_id, investment_id) I calculated the mean, std, quantile 0.1, quantile 0.5 and quantile 0.9 over every numerical feature (f_0, …, f_299)</p>
<p><strong>Loss/Metrics:</strong><br>
rmse and correlation (correlates well with competition metrics)</p>
<p>I used a purged K-fold cross validation with embargo so I don't have leakage between folds; this helps to reduce overfitting.</p>
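<p>A generic sketch of one common way to implement a purged K-fold with an embargo over time_id groups is shown below; this is not the exact code behind the submission, and the embargo size is illustrative:</p>
<pre><code>import numpy as np

def purged_kfold_with_embargo(time_ids: np.ndarray, n_splits: int = 5, embargo: int = 10):
    """Yield (train_idx, val_idx); time_ids within `embargo` steps of a validation
    block are purged from training to limit temporal leakage between folds."""
    unique_t = np.sort(np.unique(time_ids))
    for val_t in np.array_split(unique_t, n_splits):
        val_mask = np.isin(time_ids, val_t)
        lo = np.searchsorted(unique_t, val_t.min())
        hi = np.searchsorted(unique_t, val_t.max())
        banned = unique_t[max(lo - embargo, 0): hi + embargo + 1]
        train_mask = ~np.isin(time_ids, banned)
        yield np.where(train_mask)[0], np.where(val_mask)[0]
</code></pre>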
<p>I trained 5 LightGBM models with early stopping based on the CV correlation (not on a single validation score).</p>
<p><strong>What didn't work:</strong><br>
AE MLP (<a href="https://www.kaggle.com/competitions/jane-street-market-prediction/discussion/224348" target="_blank">https://www.kaggle.com/competitions/jane-street-market-prediction/discussion/224348</a>)<br>
Feature neutralization<br>
PCA</p> |
Ultrasound Nerve Segmentation | Code + ideas for 57th solution (0.691) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ultrasound Nerve Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hello everyone,</p>
<p>I found some time for a little write-up about my solution on my github project. While it didn't get to the top, the project contains some -hopefully- interesting ideas.</p>
<p>Here it is:
<a href="https://github.com/Simoncarbo/Ultras-Sound-Nerve-Segmentation---Kaggle">https://github.com/Simoncarbo/Ultras-Sound-Nerve-Segmentation---Kaggle</a></p>
<p>I would be very happy to discuss about it and get some feedback from you... Thanks in advance!</p> |
Ultrasound Nerve Segmentation | Here are my thoughts (25th private - 0.70679) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Ultrasound Nerve Segmentation
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hey guys ! Congrats to the winners !</p>
<p>It looks like I'm ranked 25th on the private LB. I wrote a blog post trying to explain my approach :
<a href="http://fhtagn.net/prog/2016/08/19/kaggle-uns.html">http://fhtagn.net/prog/2016/08/19/kaggle-uns.html</a></p>
<p>I have also shared my code on github. It might look messy if you are not me :-)
<a href="https://github.com/julienr/kaggle_uns">https://github.com/julienr/kaggle_uns</a></p>
<p>Some things I did :</p>
<ul>
<li><p>Use Maxout activations</p></li>
<li><p>The models output both a mask and a probability that the image contains a mask</p></li>
<li><p>Use some additional "auxiliary" downsampled mask to train the models</p></li>
<li><p>PCA post-processing to get "realistic" masks (a hedged sketch follows this list)</p></li>
<li><p>Binary cross-entropy seemed to perform better than dice-like loss</p></li>
</ul>
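<p>The PCA post-processing bullet gives no implementation details; one plausible reading (project each predicted mask onto a PCA basis fitted on the training masks, then re-binarise) is sketched below with arbitrary parameters:</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA

# train_masks: (n_samples, H*W) array of flattened binary training masks
pca = PCA(n_components=20).fit(train_masks)

def pca_postprocess(pred_mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Reconstruct the mask from its PCA projection to push it towards 'realistic' shapes."""
    flat = pred_mask.reshape(1, -1).astype(np.float32)
    recon = pca.inverse_transform(pca.transform(flat)).reshape(pred_mask.shape)
    return (recon > threshold).astype(np.uint8)
</code></pre>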
<p>Ah yeah I saw your comment in the other thread.</p>
<p>To be honest, I was a bit surprised that I needed so "few" filters. I did some experiments in the beginning with more filters in the deeper layers for the reason you state, but it didn't seem to help much and made dealing with overfitting more difficult. But maybe I should try again with the final architecture I have.</p>
<p>I also got some 0.68+ public LB scores with models with 40-50k parameters. Maybe because it's a one-class problem, forcing the model to reduce dimensionality (by not compensating for spatial dimension shrinking) works well and helps avoiding overfitting.</p>
<p>But hey, you have more parameters and you got a better score :-)</p>
<p>[quote=Stas Sl;131831]</p>
<p>Only 95000 parameters in model is impressive!</p>
<p>[/quote]
How many do you have ?</p>
<p>[quote=Oleg Trott;131830]</p>
<p>You can see both public and private LB scores under "My Submissions"? I can still only see the public scores. I don't even know which of my two choices contributed to the final score.</p>
<p>[/quote]
You can resubmit and you'll get the private LB score. </p> |
Understanding Clouds from Satellite Images | 10th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks for your congratulations!</p>
<p>My model training is in 2 stages, training the segmentation and classification parts separately.</p>
<h3>stage-1:</h3>
<ul>
<li>segmentation training</li>
</ul>
<pre><code>for param in model.model.encoder.parameters():
    param.requires_grad = True

for param in model.model.decoder.parameters():
    param.requires_grad = True

for param in model.classification.parameters():
    param.requires_grad = False
</code></pre>
<h3>stage-2:</h3>
<ul>
<li>classification training</li>
</ul>
<pre><code>for param in model.model.encoder.parameters():
    param.requires_grad = False

for param in model.model.decoder.parameters():
    param.requires_grad = False

for param in model.classification.parameters():
    param.requires_grad = True
</code></pre>
<p>Thank you for the information.</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p>I added classification after the encoder and AdaptiveAvgPool2d.
Here is my model.</p>
<pre><code>from torch import nn
import segmentation_models_pytorch as smp

class model_segmentation_classification(nn.Module):

    def __init__(self, encoder_name, decoder_name, classes):
        super(model_segmentation_classification, self).__init__()
        if decoder_name == 'Unet':
            self.model = smp.Unet(
                encoder_name=encoder_name,
                encoder_weights='imagenet',
                classes=classes,
                activation=None
            )
        else:
            self.model = smp.FPN(
                encoder_name=encoder_name,
                encoder_weights='imagenet',
                classes=classes,
                activation=None
            )
        # number of channels of the encoder feature map used by the classification head
        out_shapes = {
            'se_resnext101_32x4d': 2048,
            'se_resnext50_32x4d': 2048,
            'densenet169': 1664,
            'efficientnet-b3': 384,
            'efficientnet-b4': 448,
            'efficientnet-b5': 512,
        }
        self.linear_size = out_shapes[encoder_name]
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        # classification head on top of the pooled encoder features
        self.classification = nn.Sequential(
            nn.Linear(self.linear_size, 128),
            nn.BatchNorm1d(128),
            nn.ReLU(inplace=True),
            nn.Linear(128, classes),
        )

    def forward(self, x):
        global_features = self.model.encoder(x)
        cls_feature = global_features[0]
        cls_feature = self.avgpool(cls_feature)
        cls_feature = cls_feature.view(cls_feature.size(0), -1)
        cls_feature = self.classification(cls_feature)
        seg_feature = self.model.decoder(global_features)
        return cls_feature, seg_feature


model = model_segmentation_classification(
    encoder_name='densenet169',
    decoder_name='Unet',
    classes=4
)

# model = model_segmentation_classification(
#     encoder_name='efficientnet-b4',
#     decoder_name='FPN',
#     classes=4
# )
</code></pre>
<p>Thanks for your congratulations!
Thank you for telling me <a href="/pavel92">@pavel92</a> 's latest pytorch repo.</p>
<p>I am not using random crop augmentation.</p>
<p>Here is a simple average classification output and average pixel probability code.</p>
<pre><code>import torch

model_list = [
    # densenet169
    ['model1_1.pth', 'densenet169', 'Unet'],
    ['model1_2.pth', 'densenet169', 'Unet'],
    ['model1_3.pth', 'densenet169', 'Unet'],
    ['model1_4.pth', 'densenet169', 'Unet'],

    # efficientnet-b4
    ['model2_1.pth', 'efficientnet-b4', 'FPN'],
    ['model2_2.pth', 'efficientnet-b4', 'FPN'],
    ['model2_3.pth', 'efficientnet-b4', 'FPN'],
    ['model2_4.pth', 'efficientnet-b4', 'FPN'],
]

models = []

for info in model_list:
    m = model_segmentation_classification(
        encoder_name=info[1],
        decoder_name=info[2],
        classes=4
    )
    m.load_state_dict(torch.load(info[0]))
    m.to('cuda')
    m.eval()
    models.append(m)


class Model:
    def __init__(self, models):
        self.models = models

    def __call__(self, x):
        # average the classification outputs and the pixel probabilities of all models
        preds_cls = []
        preds = []
        x = x.cuda()
        with torch.no_grad():
            for m in self.models:
                pred_cls, pred = m(x)
                preds_cls.append(pred_cls)
                preds.append(pred)
        preds_cls = torch.stack(preds_cls)
        preds_cls = torch.mean(preds_cls, dim=0)
        preds = torch.stack(preds)
        preds = torch.mean(preds, dim=0)
        return preds_cls, preds


model = Model(models)
</code></pre>
<p>Congratulations to all winners!
Here is my solution (Public 0.67376, Private 0.66765).</p>
<h2>Pre processing:</h2>
<ul>
<li>resize image size to (320, 512)</li>
<li>exclude bad images (removed 21 images)</li>
</ul>
<h2>Augmentations:</h2>
<p>I used <a href="https://github.com/albu/albumentations">albumentations</a>.
- HorizontalFlip, VerticalFlip
- ShiftScaleRotate, GridDistortion
- Blur, MedianBlur, GaussianBlur
- CLAHE, RandomBrightnessContrast, HueSaturationValue, IAASharpen</p>
<h2>Model:</h2>
<p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a>, <a href="https://github.com/cadene/pretrained-models.pytorch">pretrained-models.pytorch</a>, <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet-PyTorch</a>.</p>
Model-1:
<ul>
<li>densenet169 Unet with classification</li>
<li>image size : 320x480</li>
</ul>
Model-2:
<ul>
<li>efficientnet-b4 FPN with classification</li>
<li>image size : 320x480</li>
</ul>
<h2>Optimizer:</h2>
<ul>
<li><a href="https://github.com/LiyuanLucasLiu/RAdam">RAdam</a></li>
</ul>
<h2>Loss:</h2>
Pre training:
<ul>
<li>segmentation: BCE + Dice</li>
<li>classification: FocalLoss</li>
</ul>
Main training:
<ul>
<li>(classification)( FocalLoss * 0.5 + BCEWithLogits * 0.5 ) * 0.05 + (segmentation)( BCE + Dice ) * 0.95</li>
</ul>
<h2>Ensemble:</h2>
<ul>
<li>simple average of the 2 models(x 4 = total 8 models)</li>
</ul>
<h2>Post processing:</h2>
<ul>
<li>TTA : None, h-flip, v-flip, h-flip and v-flip</li>
<li>Threshold: I used <a href="https://github.com/optuna/optuna">optuna</a> to find the optimal values from the CV score (a sketch appears after this list).</li>
</ul>
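<p>As referenced above, the threshold search can be written as a small optuna study; the sketch below assumes per-class validation probabilities <code>probs</code> and ground-truth masks <code>gts</code> are already computed, and the search ranges and trial count are illustrative, not the actual settings:</p>
<pre><code>import numpy as np
import optuna

def dice(pred, gt, eps=1e-6):
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else (2 * (pred * gt).sum() + eps) / (denom + eps)

def objective(trial):
    thr = trial.suggest_float('pixel_thr', 0.3, 0.7)
    min_size = trial.suggest_int('min_size', 0, 30000, step=1000)
    scores = []
    for p, g in zip(probs, gts):          # arrays of shape (H, W) for one class
        m = (p > thr).astype(np.uint8)
        if m.sum() < min_size:
            m[:] = 0
        scores.append(dice(m, g))
    return float(np.mean(scores))

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=100)
</code></pre>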
<h2>GPU:</h2>
<ul>
<li>RTX2080Ti x 1</li>
</ul>
<p>I used <a href="https://github.com/bckenstler/CLR">CyclicScheduler2</a>.</p>
<h3>segmentation training</h3>
<ul>
<li>encoder: CyclicScheduler2( min_lr=0.00005, max_lr=0.0001, period=17, warm_start=3, max_decay=0.9 )</li>
<li>decoder: CyclicScheduler2( min_lr=0.0005, max_lr=0.001, period=17, warm_start=3, max_decay=0.9 )</li>
<li>epoch: 20</li>
</ul>
<h3>classification training</h3>
<ul>
<li>classification: CyclicScheduler2( min_lr=0.00008, max_lr=0.001, period=14, warm_start=1, max_decay=0.9 )</li>
<li>epoch: 15</li>
</ul> |
Understanding Clouds from Satellite Images | 129th Place solution (Classifer Cascading + Segmentation) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks Kaggle and Max Planck Institute for this interesting competition and congrats to all the winners! Here is a brief summary of my solution (Public 0.67698, Private 0.66713).</p>
<ul>
<li><p><strong>No Preprocessing</strong></p></li>
<li><p><strong>Augmentations (by Albumentations)</strong>
ShiftScaleRotate (scale_limit=0.1, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0)
horizontal flip (p=0.5)
vertical flip (p=0.5)</p></li>
<li><p><strong>Validation</strong>
StratifiedKFold for the number of masks</p></li>
<li><p><strong>Segmentation Model (ensemble of 7 models x 5folds)</strong>
UNet-SeResNext50
UNet-ResNet50
UNet-ResNet18
UNet-ResNext101
LinkNet-EfficientNet-B5
FPN-EfficientNet-B4 (4 folds)
FPN-EfficientNet-B5
FPN-EfficientNet-B6
UNet-EfficientNet-B6</p></li>
<li><p><strong>Classification Model (ensemble of 7 models x 5folds)</strong>
EfficientNet-B2
EfficientNet-B4
EfficientNet-B5</p></li>
<li><p><strong>Loss</strong>
BCE + Dice</p></li>
<li><p><strong>Ensemble</strong></p></li>
<li>simple average of the 14 segmentation models</li>
<li><p>We also tried voting ensemble but it didn't work for us</p></li>
<li><p><strong>Postprocessing</strong>
TTA : None
pixel threshold = 0.5
small mask threshold = 20000
Cascading of classifier result (Apply each classifier one after another)
Convex Hull PostProcessing</p></li>
<li><p><strong>Final submission</strong>
Our final submission was selected based on best public scores. We believe main reason we scored less on private LB was because of weak segmentation models. Each of our models had a very average CV but our classifiers and postprocessing were really good.</p></li>
</ul> |
Understanding Clouds from Satellite Images | 13th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I compared them as you have mentioned. I also applied a conv1x1 after upsampling so that the number of channels matched that of the label (=3)</p>
<p>Thanks Kaggle and Max Planck Institute for this interesting competition and congrats to all the winners! Here is a brief summary of my solution (Public 0.67698, Private 0.66713).</p>
<h2>Preprocessing</h2>
<ul>
<li>exclude bad images (removed 13 images)</li>
<li>resize image size to (320, 512)</li>
</ul>
<h2>Augmentations (by Albumentations)</h2>
<ul>
<li>gamma (limit=(50,100), p=0.5)</li>
<li>brightness (limit=0.2, p=0.5)</li>
<li>shift (limit=0.2, border_mode=0, p=0.5)</li>
<li>rotation (limit=30deg, border_mode=0, p=0.5)</li>
<li>horizontal flip (p=0.5)</li>
<li>vertical flip (p=0.5)</li>
</ul>
<h2>Validation</h2>
<ul>
<li>StratifiedKFold for the number of empty masks</li>
</ul>
<h2>Model (ensemble of 7 models x 5folds)</h2>
<ol>
<li>UNet-ResNet34 + CBAM + Hypercolumns</li>
<li>same as 1. but with a different seed</li>
<li>UNet-ResNet18 + CBAM + Hypercolumns</li>
<li>UNet-InceptionResNetV2 + CBAM+ Hypercolumns</li>
<li>UNet-SeResNext50 + CBAM + Hypercolumns</li>
<li>UNet-ResNet34 + CBAM + FPA</li>
<li>UNet-ResNet18 + CBAM + FPA</li>
</ol>
<p>I used the weights of the best validation score epochs.</p>
<h2>Loss</h2>
<ul>
<li>BCE + LovaszHinge</li>
<li>on top of that I used deep supervision with BCE+LovaszHinge loss (for only non-empty masks) multiplied by 0.1</li>
</ul>
<h2>Optimizer & Scheduler</h2>
<ul>
<li>Adam & CosineAnnealingWarmRestarts (20-epoch cycle); a minimal setup sketch follows this list</li>
<li>learning rate : 1e-4 to 1e-6</li>
</ul>
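<p>A minimal sketch of this optimizer/scheduler combination using PyTorch's built-in <code>CosineAnnealingWarmRestarts</code>; <code>model</code>, <code>train_one_epoch</code> and the total epoch count are placeholders, not the author's code:</p>
<pre><code>import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=20, T_mult=1, eta_min=1e-6   # 20-epoch cycle, lr from 1e-4 down to 1e-6
)

num_epochs = 60          # e.g. three 20-epoch cycles
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)
    scheduler.step()
</code></pre>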
<h2>Ensemble</h2>
<ul>
<li>simple average of the 7 models (x 5folds = total 35 models)</li>
</ul>
<h2>Postprocessing</h2>
<ul>
<li>TTA : None + h-flip + v-flip + h- and v- flip</li>
<li>pixel threshold = 0.45</li>
<li>small mask threshold = 18000</li>
</ul>
<p>Both determined by the 5-fold CV for model 1.</p>
<h2>Final submission</h2>
<ul>
<li>I checked only Public LB score for ensembles. So I needed some criteria to choose the final submission. I decided to choose two submissions which were good in Public LB and stable against the small mask threshold, although these were not my best Public LB submission. Luckily I survived the shake up and got a gold medal.</li>
</ul>
<p>Thanks Chris!
In deep supervision, not only the final output mask but also the intermediate feature maps are compared with the label. My loss function is like below:
<code>
criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits,label)
loss += lovasz_hinge(logits.view(-1,h,w), label.view(-1,h,w))
for x in [x1,x2,x3,x4]: #x1,x2,x3,x4 are upsampled outputs of decoder layers (+ additional conv)
loss += 0.1 * criterion_lovasz_hinge_non_empty(criterion, x, label)
</code></p> |
Understanding Clouds from Satellite Images | 143rd place (bronze) solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><h2>Code and more detail: <a href="https://github.com/btrotta/kaggle-clouds">https://github.com/btrotta/kaggle-clouds</a></h2>
<h2>Pre-processing the images</h2>
<p>I worked with grayscale images shrunken to 25% of original size.</p>
<p>I got a large boost in model accuracy from filtering out the over-exposed areas in the images. Below is a sample image before and after correction (I also changed the missing area to grey).</p>
<p><img src="https://raw.githubusercontent.com/btrotta/kaggle-clouds/master/img/before_after.png" alt="Before and after correction"></p>
<h2>Model</h2>
<p>I used a blend of EfficientNet-B4 and EfficientNet-B5, both pre-trained from this library: <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a>. I trained for 10 epochs with the encoder layers frozen, then fine-tuned the whole model for 10 epochs with a lower learning rate. I did horizontal and vertical flip augmentation; I tried others but found they didn't help.</p>
<h2>Post-processing the model predictions</h2>
<p>The key to post-processing is to observe that the dice metric is not continuous: if a class doesn't exist in an image, there is a huge difference in predicting 1 pixel (dice score 0) and predicting 0 pixels (dice score 1). So, to decide whether to make a non-zero prediction, we need to estimate 2 things: the probability that the class exists in the image, and the expected dice score given that the class does exist. Then we can calculate the expected dice score for a zero and a non-zero prediction, and choose between them accordingly. I built very simple models for these, all just using a single variable: the 95th percentile of the predicted class probabilities for each image. </p>
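<p>The decision rule described above can be written down in a few lines; this sketch assumes the two quantities are already estimated by the simple per-class models mentioned:</p>
<pre><code>def choose_prediction(p_exists: float, exp_dice_pos: float) -> bool:
    """Return True if submitting the non-empty mask has a higher expected dice score.

    p_exists     - estimated probability that the class appears in the image
    exp_dice_pos - expected dice of the non-empty prediction, given the class exists
    """
    expected_if_predict = p_exists * exp_dice_pos   # predicting pixels for an absent class scores 0
    expected_if_empty = 1.0 - p_exists              # an empty mask scores 1 only if the class is absent
    return expected_if_predict > expected_if_empty
</code></pre>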
<p>I didn't attempt to reshape the predicted areas into rectangles or polygons, as in some published kernels. I also didn't enforce a minimum predicted area. My hypothesis is that this information is already built in to the neural network predictions, and that this is why augmentations that change the size or shape of the masks (e.g. skew, rotation, zoom) give poor results.</p>
<p>Yes, <code>get_background</code> finds the over-exposed areas. The lower bound of 2 is to exclude the black "stripes" (i.e. the missing parts in the images). The upper bound of 200 is to exclude areas which are genuinely white (e.g. the centers of "flower" areas). I tested on several different images, and it seems to work pretty well on all of them.</p>
<p>I think the main reason that enforcing minimum predicted area gives a boost is because of the non-continuous dice metric. For images where the model identifies only a small area, there's a high chance the class doesn't exist in the image at all and it's better to predict an empty mask. This is the idea behind the minimum area bound. But in my approach the minimum bound is not needed, because I'm explicitly calculating the probability that the class exists in the image and can decide based on that whether to make an empty prediction.</p> |
Understanding Clouds from Satellite Images | 19th place solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>No, you don't need to, because it's almost the same 😹</p>
<p>Congrats to all the prize winners and everyone who finished in the gold zone!</p>
<p>I joined this competition relatively lately, after Severstal competition finished.(I believe same as many people, don't you?)
My purpose was to make sure the segmentation pipeline I made in Severstal works for other competition. And it turned out it actually works, I just modified directory and some small parameters. That means my solution is not so special, honestly.
<br></p>
<h3>Overview</h3>
<ul>
<li>Extremely noisy annotation</li>
<li>Not so imbalanced classes (compared with Severstal)</li>
<li>Relatively small data(number of samples)</li>
<li><p>Good train/test split(cv works)
<br></p>
<h3>What works</h3></li>
<li><p>Unet & FPN</p></li>
<li>not so large encoder</li>
<li>BCE + Dice loss</li>
<li>heavy augmentation(including mixup)</li>
<li>cosine annealing</li>
<li>ensemble many models</li>
<li><p>Triplet thresholding (label threshold / mask threshold / min component)
<br></p>
<h3>What didn't work</h3></li>
<li><p>PSPNet</p></li>
<li>large image size(over 448*672)</li>
<li>plain BCE</li>
<li>pseudo labeling</li>
</ul>
<h3>Solution</h3>
<ol>
<li>Unet/efficientnet-b3/image size 320x480/5fold</li>
<li>Unet/efficientnet-b0/image size 320x480/cosine annealing/5fold</li>
<li>Unet/efficientnet-b3/image size 384x576/cosine annealing/5fold</li>
<li>FPN/resnet34/image size 384x576/mixup/5fold</li>
<li>Ensemble above 20 models</li>
<li>Triplet thresholding (label threshold / mask threshold / min component); a sketch follows this list</li>
</ol>
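<p>A minimal sketch of the triplet thresholding referenced above; the threshold values below are placeholders, not the tuned values from the repository:</p>
<pre><code>import numpy as np
import cv2

def triplet_threshold(prob_mask, cls_prob, label_thr=0.6, pixel_thr=0.5, min_size=10000):
    """Zero the mask if the classifier score is low; otherwise binarise the pixel
    probabilities and drop connected components smaller than min_size."""
    if cls_prob < label_thr:                          # label threshold
        return np.zeros(prob_mask.shape, dtype=np.uint8)
    mask = (prob_mask > pixel_thr).astype(np.uint8)   # mask (pixel) threshold
    num, comps = cv2.connectedComponents(mask)
    out = np.zeros(prob_mask.shape, dtype=np.uint8)
    for c in range(1, num):                           # min component size
        comp = comps == c
        if comp.sum() >= min_size:
            out[comp] = 1
    return out
</code></pre>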
<hr>
<p>Here is my code. <br>
If you have question, please feel free to ask:)
Thanks!</p>
<p><a href="https://github.com/bamps53/kaggle-cloud-2019">https://github.com/bamps53/kaggle-cloud-2019</a></p> |
Understanding Clouds from Satellite Images | 1st placed solution with code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It's just my attempt to separate common code from competition-specific code.
Currently, I have no plan. I'll just use it in the next competitions with continuous improvement.</p>
<p><a href="/bibek777">@bibek777</a> is right!! :)</p>
<p>Thanks~!</p>
<p>Thanks~</p>
<p>Thanks!! </p>
<p>Thanks for your congratulations. I released my codes for this competition. I hope it will be helpful for you. :)</p>
<p>😨 😭 😏 😁 👍 </p>
<p>Thanks for your congratulations! 😃 </p>
<p>Thanks for your congratulations! 😃 </p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations! 😃 </p>
<p>Thanks for your congratulations! 😃 </p>
<p>Thanks for your congratulations! 😃 </p>
<p>I simply multiplied mask logits by class labels.
<code>
cls_labels = labels.view(B,C,-1)
cls_labels = torch.sum(cls_labels, dim=2, keepdims=True)
cls_labels = (cls_labels > 0).float()
loss = loss_fn(input=logits.view(B,C,-1)*cls_labels , target=labels.view(B,C,-1))
</code></p>
<p>The above code snippet is for a single image.
The batch version would be ...
<code>
cls_probabilities = np.sort(mask_probabilities.reshape(B, 4, -1), axis=-1)
cls_probabilities = np.mean(cls_probabilities[:,:,-17500:], axis=-1)
</code></p>
<p>Thanks for your congrats!</p>
<p>Thanks for your congrats!</p>
<p>Thanks for your congrats!</p>
<p>Thanks for your congrats!</p>
<p>Thanks~ :)</p>
<p>In my case, deeper models overfit easily if I trained for more than 15 epochs.
So, I scheduled the learning rate to train models within around 15 epochs.</p>
<h2>UPDATE: code available on github</h2>
<p><a href="https://github.com/pudae/kaggle-understanding-clouds">https://github.com/pudae/kaggle-understanding-clouds</a></p>
<hr>
<p>Congrats to all the winners and survivors of the shake-up.
Thanks to Kaggle and the hosting team for the interesting competition.</p>
<p>Except for some tricks, almost all improvements were made by ensembling. So, in this post, I will briefly describe the track of scores in the last week. The details will be shared as code.</p>
<h3>Common Settings</h3>
<p><strong>Types of networks</strong>
- Model A: UNet with classification head
- Model B: FPN or UNet, no classification head</p>
<p><strong>Backbones</strong>
- resnet34, efficientnet-b1, resnext101_32x8d_wsl, resnext101_32x16d_wsl</p>
<p><strong>DataSet</strong>
- split: train vs val = 9 vs 2
- Model A: All labels
- Model B: non-empty labels</p>
<p><strong>Loss</strong>
- classification part: BCE
- segmentation part: BCE * 0.75 + DICE * 0.25</p>
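<p>As an illustration of the segmentation loss above (a weighted sum of BCE and soft Dice), a minimal sketch could look like the following; the actual implementation used here may differ in details.</p>
<pre><code>
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, bce_weight=0.75, dice_weight=0.25, eps=1e-6):
    """Segmentation loss: bce_weight * BCE + dice_weight * (1 - soft Dice)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    dims = (2, 3)                                   # reduce over the spatial dimensions
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = (2 * intersection + eps) / (union + eps)
    return bce_weight * bce + dice_weight * (1 - dice.mean())
</code></pre>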
<p><strong>Optimizer</strong>
- AdamW, weight decay 0.01
- encoder learning rate 0.000025
- decoder learning rate 0.00025
- OneCycle scheduler, shallow models 30 epochs, deep models 15 epochs</p>
<p><strong>Augmentation</strong>
- Common: hflip, vflip, shift/scale/rotate, grid distortion, channel shuffle, invert, to gray
- Model A: random crop, size 384
- Model B: full-size, size 384, 544, 576, 768</p>
<h3>The track of scores</h3>
<p><strong>train single model</strong>
At first, I tried to train a good single network. I struggled to improve and stabilize the LB scores for 2 weeks, but failed.
- TTA3: CV 0.6517 / Public LB 0.66951 / Private LB 0.65828</p>
<p><strong>add segmentation models</strong>
I thought the reason for the unstable LB score was poor segmentation performance. If we could have a more powerful segmentation model, the effect of poor classification performance could be reduced.</p>
<p>So, I began trying to train good segmentation only model. Because I could filter out negative predictions using the classification model, only positive labels were needed to train.</p>
<p>From this time, CV and LB were correlated well.
I trained several segmentation models with different backbone, image size, etc.
- TTA4, 1 seg with cls + 1 seg: CV 0.6560, Public LB 0.67395, Private LB 0.66495
- TTA4, 1 seg with cls + 3 seg: CV 0.6582, Public LB 0.67482, Private LB 0.66501
- TTA4, 1 seg with cls + 4 seg: CV 0.6587, Public LB 0.67551, Private LB 0.66604
- TTA4, 1 seg with cls + 7 seg: CV 0.6594, Public LB 0.67596, Private LB 0.66663</p>
<p><strong>add more models with classification head</strong>
Now the segmentation part had become good enough, so I added two more models with a classification head.
- TTA4, 3 seg with cls + 7 seg: CV 0.6625, Public LB 0.67678, Private LB 0.66746</p>
<p><strong>use segmentation models as a classifier</strong>
To take advantage of the performance of the segmentation models, I used a mean of top K pixel probabilities as a classification probability.
<code>
cls_probabilities = np.sort(mask_probabilities.reshape(4, -1), axis=1)
cls_probabilities = np.mean(cls_probabilities[:,-17500:], axis=1)
</code></p>
<ul>
<li>TTA4, 3 seg with cls + 7 seg: 0.6629, 0.67822, 0.67046</li>
<li>TTA4, 3 seg with cls + 8 seg: 0.6635, 0.67906, 0.67117</li>
</ul>
<p><strong>use max probability as a positive prediction</strong>
All images in the train set have at least one type of cloud, so I treated the label of max probability in each image as a positive prediction.
<code>
cls_probabilities[np.argmax(cls_probabilities)] = 1
</code>
- TTA4, 3 seg with cls + 8 seg: CV 0.6640, Public LB 0.68031, Private LB 0.67170</p>
<p><strong>use exponential moving average</strong>
Finally, I changed the weight-averaging method to an exponential moving average. Before that, the average of the last 5 checkpoints' weights was used.
- TTA4, 3 seg with cls + 8 seg: CV 0.6636, Public LB 0.68130, Private LB 0.67126
- TTA4, 3 seg with cls + 9 seg: CV 0.6637, Public LB 0.68185, Private LB 0.67175 (<strong>Final Submission</strong>)</p> |
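<p>For reference, averaging checkpoint weights with an exponential moving average can be sketched roughly as below; the decay value and the assumption that each checkpoint is a plain state dict are for illustration only, not the exact code used here.</p>
<pre><code>
import copy
import torch

def ema_average(checkpoint_paths, decay=0.9):
    """Exponential moving average over saved state dicts, ordered oldest to newest."""
    ema_state = None
    for path in checkpoint_paths:
        state = torch.load(path, map_location='cpu')   # assumed to be a state dict
        if ema_state is None:
            ema_state = copy.deepcopy(state)
            continue
        for k, v in state.items():
            if torch.is_floating_point(v):
                ema_state[k].mul_(decay).add_(v, alpha=1 - decay)
            else:
                ema_state[k] = v   # e.g. num_batches_tracked in BatchNorm layers
    return ema_state
</code></pre>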
Understanding Clouds from Satellite Images | 22th Place - Lessons learned from a beginner | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>First, congrats to all the winners.
I would like to thank Kaggle for hosting this competition, which was the first one I could really dedicate myself to and in which I won my first medal.</p>
<p><strong>What didn't work for me</strong>
- Lovasz loss
- Deeper encoders (efficientb7,senet)
- Pseudo labeling</p>
<p><strong>Our solution</strong>
Our solution is basically emsemble of segmentation models with post processing to remove masks</p>
<p>Models (6 folds each):
- ResNet34 - Unet*
- EfficientNetB2 - Unet*
- EfficientNetB2 - FPN
- EfficientNetB2 - LinkNet
- EfficientNetB5 - Unet</p>
<p>Loss: BCE + Dice
* Those models were trained with different image sizes (320x480, 384x576, 512x512, 704x1056)</p>
<p>Post Processing:
triplet threshold search: binarization, removal of small masks, and binarization again for the masks remaining after the first two steps. All of this was tuned on the validation data from all 6 folds.</p>
<p>CV: 0.6651
Public: 0.67556
Private: 0.66498</p>
<p><strong>The Good Lesson</strong>
I didn't know much about image segmentation, so this competition was a great learning.</p>
<ul>
<li><p>Read all comments and try to get the tips.</p></li>
<li><p>Build a good validation set:
tuning the post-processing parameters without overfitting was only possible because of that</p></li>
</ul>
<p><strong>The Bad Lesson</strong>
- Trust in your CV</p>
<p>I had a better model that scored:</p>
<p>CV: 0.6681
Public: 0.66759
Private: 0.66824</p>
<p>Why didn't I choose it? because of the second lesson ...</p>
<ul>
<li>Trust in yourself.
My best model was something different: I trained one model for each mask type, predicted them one by one and put them back in the original format (4 masks stacked) before applying post-processing. </li>
</ul>
<p>This allowed me to compare with the same out of fold data I had so far.
A simple blend of ResNet34-Unet + EfficientNetB2-Unet got 0.668 on CV.</p>
<p>But I read that some Kagglers didn't get good results with this method, I was afraid of having a leak in my validation, and the public LB was worse. So I gave up on this idea...</p>
<p><strong>Acknowledgment</strong>
I would like to thank my team and all those who shared in some way.</p>
<p>Sharing is a very good thing, but I think it should be done at the right time. As I said I am a beginner, but also someone who worked hard on this competition reading past competition solutions. So I think everyone can do the same.</p>
<p>Better luck for us next time ;D</p>
<p>Thanks Giba!</p> |
Understanding Clouds from Satellite Images | 240 place with simple model, no kfold, no combining networks | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Hi guys, </p>
<p>For this competition I created and tested single models without any k-folding or combining of multiple network architectures.
The parameters which I tested were:</p>
<p><strong>Convolutional Network Architectures:</strong>
- Unet
- FPN</p>
<p><strong>Pretrained Networks:</strong>
- resnet18
- resnet34
- resnet50
- resnet101
- resnet152
- se_resnext50_32x4d
- se_resnext101_32x4d
- efficientnet-b0
- efficientnet-b1
- efficientnet-b2
- efficientnet-b7</p>
<p><strong>Batch sizes</strong>
- Starting from 1 to 9
- Also used accumulation_steps(steps=2 and 3)</p>
<p><strong>Preprocessing</strong>
- Resize to (640, 320) for segmentation input data. Then resized the masks to (525, 350)
- HorizontalFlip(p=0.25),
- VerticalFlip(p=0.25),
- ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0),
- GridDistortion(p=0.25)</p>
<p><strong>Optimizers</strong>
- Adam
- RAdam
- SGD</p>
<p><strong>Losses</strong>
- BCEDiceLoss
- IoULoss
- FocalLossBinary
- Custom loss(BCEDiceLoss*0.4 + IoULoss*0.2 + FocalLossBinary*0.4)</p>
<p><strong>Post processing</strong>
- Finding optimum threshold between in the interval 0.3-1 with a 0.005 step for each category
- Finding minimum pixels value for considering the mask a non-Zero one (9000-25000) with a 1000 step for each category</p>
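<p>This two-parameter search can be implemented as a simple grid over out-of-fold predictions; the sketch below only illustrates the idea (the dice helper and array shapes are assumptions):</p>
<pre><code>
import numpy as np

def dice(pred, gt, eps=1e-6):
    inter = (pred * gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def search_class_params(probs, gts):
    """probs, gts: arrays of shape (n_images, H, W) for a single category."""
    best = (None, None, -1.0)
    for thr in np.arange(0.3, 1.0, 0.005):
        for min_pixels in range(9000, 26000, 1000):
            scores = []
            for p, g in zip(probs, gts):
                mask = (p > thr).astype(np.uint8)
                if mask.sum() < min_pixels:   # too small -> predict empty mask
                    mask[:] = 0
                scores.append(dice(mask, g))
            mean_score = float(np.mean(scores))
            if mean_score > best[2]:
                best = (thr, min_pixels, mean_score)
    return best
</code></pre>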
<p><strong>BEST MODEL FOUND</strong>
The best model obtained 0.65803 on the public leaderboard and <strong>0.65038 on the private leaderboard</strong></p>
<p>The configuration for this single model without any k-folds or combination with another architecture was:</p>
<p><strong>FPN</strong>+
<strong>se _ resnext101_32x4d</strong>+
<strong>batch size 6(accumulate gradient=2)</strong>+
<strong>RAdam</strong>+
<strong>BCEDiceLoss</strong>+</p>
<p><strong>Preprocessing</strong>:
Resize to (640, 320) for segmentation input data. Then resized the masks to (525, 350)
HorizontalFlip(p=0.25)+
VerticalFlip(p=0.25)+
ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.5, border_mode=0)+
GridDistortion(p=0.25)</p>
<p><strong>Post processing thresholds</strong>(cat1: thres=0.335, min_pixels=21000, cat2: thres=0.605, min_pixels=15000, cat3:thres=0.640, min_pixels=20000 , cat4: thres=0.565, min_pixels=16000)</p>
<p><strong>Other things that I wish I had tried but not had time:</strong> </p>
<ul>
<li>Instead of initially resizing to (640, 320) as input for the segmentation network and then resizing the resulting masks to (525, 350), I would have initially resized to (525, 350) and then used padding to create a (640, 320) image. And for submitting the mask, I would have removed the pixel offsets from the padding, in that way avoiding the loss due to resizing the results from (640, 320) to (525, 350) </li>
<li>Experiment more with the weights of my custom loss (BCEDiceLoss*0.4 + IoULoss*0.2 + FocalLossBinary*0.4)</li>
<li>use AdamW (<a href="https://towardsdatascience.com/why-adamw-matters-736223f31b5d">https://towardsdatascience.com/why-adamw-matters-736223f31b5d</a>)</li>
<li>TTA</li>
<li>Mixed precision training (to see how much increased batch size will help compared with accumulate gradient methodology) and also evaluate fp16</li>
<li>Use and evaluate lovasz loss</li>
</ul>
<p>Thank you Chris !</p> |
Understanding Clouds from Satellite Images | 2nd place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I am sorry. Backbone is the common name for the encoder in segmentation models.</p>
<p>Yes, right.</p>
<p>I used two backbones: efficientnet-b1 and se-resnext50 as encoders, and Unet as the decoder. All parts are ready to use from this library: <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a> </p>
<p>Thank you!</p>
<ol>
<li>I didn't freeze the weight of the models</li>
<li>Output layer was 352x544x4 with sigmoid activation</li>
<li>I used fixed two-cycle learning rate profile </li>
</ol>
<p>It's a magic 😃 </p>
<p>Hello to everyone participating in the competition, congratulations to all who won and thanks to kaggle for the excellent competition.</p>
<p>Here I will give a general solution to the problem, I will talk about techniques that helped and those ideas that did not work.</p>
<p>Most recently, I participated in kaggle segmentation contests
<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation">SIIM-ACR Pneumothorax Segmentation</a> and <a href="https://www.kaggle.com/c/severstal-steel-defect-detection">Severstal: Steel Defect Detection</a>
Therefore, I have gained decent experience in solving such problems. I already had an idea of what could work and what couldn’t.</p>
<hr>
<h3>Idea #1</h3>
<p>Looking at the data, I saw that the images have a size of 1400x2100, and it was not a good idea to feed such data to the network directly. Of course, it was possible to downscale the image by a factor of 2 or 4, but then we would obviously lose something from the data.
I came up with a compromise: use a small network, a compressor, that extracts significant features from the data and reduces the image size.
It looks something like this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2F953479a06dbb81adbf16320caadf44e7%2Fcompressor.png?generation=1574249239505220&alt=media" alt=""></p>
<p>To build models, I used Keras 2, TensorFlow 1.4 and the library <a href="https://github.com/qubvel/segmentation_models">https://github.com/qubvel/segmentation_models</a> (thank you very much Pavel Yakubovskiy)</p>
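<p>The solution itself was built in Keras; purely as an illustration of the compressor idea (not the author's code, and the downsampling factor of 2 is an assumption), a small trainable stem in PyTorch could look like this:</p>
<pre><code>
import torch.nn as nn

class Compressor(nn.Module):
    """Small trainable stem: extracts features from the full-resolution image
    and downsamples it before the segmentation model sees it."""
    def __init__(self, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# compressed = Compressor()(full_resolution_batch)   # e.g. 1400x2100 -> 700x1050
# masks = unet(compressed)                           # any Unet-style segmentation model
</code></pre>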
<hr>
<h3>Idea #2</h3>
<p>In order to build an effective ensemble, we must use models with the least possible correlation between predictions. I decided to use such combinations of model parameters:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2Fe0d9fd528dc14968070094b3bd2dd595%2Fmodels_grid.png?generation=1574249329683540&alt=media" alt=""></p>
<p>All models had a Unet decoder.</p>
<hr>
<p><strong>training parameters:</strong>
Optimizer: Adam
Loss Function: FocalLoss
Batch Size: 4</p>
<p>Hard albumentation:
Hflip, VFlip, Equalize, CLAHE, RandomBrightnessContrast, RandomGamma, Cutout
ShiftScaleRotate, GridDistortion, GaussNoise</p>
<p>30 epochs on a two-cycle learning profile. It looks something like this:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2F2f8acf3439d405a4d13cde0fa28d6378%2FLearning%20profile.png?generation=1574249719812217&alt=media" alt=""></p>
<p>For training models, I used 2xP3.2 Amazon instance</p>
<hr>
<h3>Idea #3</h3>
<p><strong>Postprocessing</strong>.
Mean average of all models -> raw probability.
All object segmentation tasks with a DICE metric are very sensitive to false positive errors. In some cases, training a separate classifier model to detect the presence of a mask in the image helps a lot. In my case, the classifiers did not help much, so I used the Triple rule method, which I first saw in the first place solution of the <a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation">SIIM-ACR Pneumothorax Segmentation</a> competition.
Thanks so much for the idea of <strong>Aimoldin Anuar</strong> <a href="https://www.kaggle.com/sneddy">https://www.kaggle.com/sneddy</a>
The description of this approach can be understood from here <a href="https://youtu.be/Wuf0wE3Mrxg">Kaggle SIIM-ACR Pneumothorax Challenge - 1st place solution - Anuar Aimoldin</a></p>
<p>The triple rule parameters (threshold1, minsize, threshold2) were searched by global optimization methods.</p>
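<p>As one example of such a global search, scipy's differential evolution could be used roughly as below; the objective, the bounds and the helper functions are illustrative assumptions, not the author's code:</p>
<pre><code>
import numpy as np
from scipy.optimize import differential_evolution

def make_objective(probs, gts, apply_triple_rule, dice_fn):
    """probs, gts: out-of-fold probability maps and ground-truth masks for one class;
    apply_triple_rule and dice_fn are assumed helper functions."""
    def objective(params):
        threshold1, min_size, threshold2 = params
        scores = [dice_fn(apply_triple_rule(p, threshold1, int(min_size), threshold2), g)
                  for p, g in zip(probs, gts)]
        return -float(np.mean(scores))   # minimize negative mean dice
    return objective

# bounds = [(0.3, 0.9), (1000.0, 30000.0), (0.2, 0.7)]
# result = differential_evolution(make_objective(probs, gts, apply_triple_rule, dice), bounds)
# threshold1, min_size, threshold2 = result.x
</code></pre>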
<p>Basically, this is all that helped in solving the task.</p>
<p>What didn't work:
- Mask classifiers
- mmdetection / FasterRCNN
- BCE-DICE, lovasz, triple_loss
- Adversarial validation
- Pseudo labeling</p>
<p>Thanks for watching</p>
<p>I trained the network for 30 epochs and changed the learning rate according to this schedule <img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F505196%2Fd5d16579ad8c3aa96489bf94341e1394%2FLearning%20profile.png?generation=1587701754018812&alt=media" alt=""></p>
Understanding Clouds from Satellite Images | 34th Place Solution + Code | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Exactly :)</p>
<p>Hi all,</p>
<p>Congrats to the winners! </p>
<p>Code for my solution <a href="https://github.com/khornlund/understanding-cloud-organization">here</a>.</p>
<p>Explanation copied below.</p>
<h1>Summary</h1>
<h2>Results</h2>
<p>| Rank | Score | Percentile |
| --- | --- | --- |
| 34 | 0.66385 | Top 2.2% |</p>
<h2>Strategy</h2>
<p>Originally I had an idea early on very similar to <a href="https://arxiv.org/pdf/1911.04252.pdf">this</a> recent paper. I was going to train a model on the ground truthed data, and then iteratively create pseudo labels for unlabelled data and train on that. I figured this was a good opportunity for such a strategy because there was very little training data (~5000 images), so there was a lot to be gained by generating more training samples. And, because this was not a synchronous kernel competition, I'd be able to create as large an ensemble as I like.</p>
<p>Then I realised how noisy the image labels were, and wasn't so sure that pseudo labels would work very well. In particular, I noticed that the validation scores of my models were super noisy: using the same configuration with a different random seed resulted in serious metric differences. I figured I would give up on trying to fine-tune individual models and instead focus on engineering a system that would allow me to train and ensemble <em>lots</em> of models.</p>
<p>I developed functionality to allow me to automate the configuration, training, and inference of models.</p>
<p>I trained an ensemble of ~120 models, using a variety of encoder/decoder combinations. I first averaged them together by their encoder/decoder combinations (eg. all the efficientnet-b2 FPN get averaged together). Then I averaged these mini-ensembles together using a weighted average.</p>
<p>With about a week of the competition to go, I saw the Noisy Student paper. I was getting decent results on the LB and figured I'd give pseudo labelling a go. I downloaded ~4200 images using the same resolution and locations as the official data, generated pseudo labels for them, and trained a new ensemble of ~50 models.</p>
<p>I only finished training the pseudo labelled models in time to make a few submissions on the final day, and managed to get up to 0.67739 (9th place) on the public LB - but that actually only scored 0.66331 (~45th) on the private LB. My other selected submission was a weighted average of my past 25 submissions, which scored 0.67574 on the public LB and 0.66385 (34th) on the private LB.</p>
<p>I had a few unselected submissions that scored 0.666+ (~18th), the best of which funnily enough came from a mini-ensemble of only efficientnet-b2-Unet models.</p>
<h2>Reflection</h2>
<p>Looking back I realise I made a pretty big mistake not capturing the appropriate metrics for thorough local CV. I was only recording dice coefficient using a threshold of 0.5, and so I wasn't well informed to pick a threshold for my submissions.</p>
<p>Also, while the models were each trained on a random 80% of the data, and evaluated on the remaining 20%, this was only done at a per-model level. I didn't keep a hold-out set to validate the ensembles against. Because we only had ~5000 training samples, I got a bit greedy with training data here.</p>
<p>I was hoping that by keeping logs of all my experiments, after a while I'd be able to identify which randomly generated configurations (eg. learning rate) worked better than others. This didn't turn out to be the case! I should have spent more time fine tuning each model, as the law of diminishing returns was coming into effect as the size of my ensemble grew.</p>
<h1>Details</h1>
<h2>Ensemble Pipeline</h2>
<p>See <code>uco.ensemble.py</code> for implementation.</p>
<p>Each training experiment is configured using a YAML file which gets loaded into a dictionary. I set up a class to randomise these parameters, so I could leave it to run while at work/sleep and it would cycle through different architectures, loss functions, and other parameters.</p>
<p>After each training epoch the model would be evaluated on a 20% validation set. The mean dice score was tracked throughout training, and when the training completed (either after a set number of epochs or early stopping) only the best scoring checkpoint would be saved. I set a cutoff mean dice score, and threw away models that scored under that.</p>
<p>The saved checkpoint would be loaded, and run inference on the test data. I saved out the <em>raw</em> (sigmoid) predictions of each model to HDF5. I scaled by 250 and rounded to integers so I could save as <code>uint8</code> to save disk space.</p>
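<p>Storing the scaled uint8 sigmoid outputs in HDF5 can be sketched like this; the dataset names and shapes are assumptions rather than the exact layout used in this pipeline.</p>
<pre><code>
import h5py
import numpy as np

def save_predictions(path, model_name, probs):
    """probs: float array in [0, 1] of shape (n_images, n_classes, H, W)."""
    scaled = np.rint(probs * 250).astype(np.uint8)   # small quantization error, much less disk space
    with h5py.File(path, 'a') as f:
        f.create_dataset(model_name, data=scaled, compression='gzip')

def load_predictions(path, model_name):
    with h5py.File(path, 'r') as f:
        return f[model_name][:].astype(np.float32) / 250.0
</code></pre>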
<p>These raw predictions would be grouped by (encoder, decoder) pair, and averaged together weighted by mean dice scores. Then the groups would be averaged together, with parameterised weights.</p>
<p>By saving out the results at each stage to HDF5 (raw predictions, group averages, and total averages), I could re-run any part of the pipeline with ease.</p>
<p>I did the above for both segmentation and classification models. The details below are just for the segmentation models.</p>
<h2>Models</h2>
<p>I used <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a>
(SMP) for segmentation, and used <a href="https://github.com/rwightman/pytorch-image-models">pytorch-image-models</a> (TIIM) for classification.</p>
<p><strong>Encoders</strong></p>
<ul>
<li>efficientnet B0, B2, B5, B6</li>
<li>resnext 101_32x8d</li>
<li>se_resnext 101_32x8d</li>
<li>inceptionresnet v2, v4</li>
<li>dpn 131</li>
<li>densenet 161</li>
</ul>
<p><strong>Decoders</strong></p>
<ul>
<li>FPN</li>
<li>Unet</li>
</ul>
<p>I had terrible results with LinkNet and PSPNet.</p>
<h2>Training</h2>
<p><strong>GPU</strong>
RTX 2080Ti.</p>
<p><strong>Loss</strong>
I used BCE + Dice with BCE weight ~U(0.65, 0.75) and dice weight 1 - BCE.</p>
<p>I used BCE + Lovasz with BCE weight ~U(0.83, 0.92) and lovasz 1 - BCE.</p>
<p><strong>Learning Rate</strong>
Encoder ~U(5e-5, 9e-5)
Decoder ~U(3e-3, 5e-3)</p>
<p><strong>Optimizer</strong>
RAdam / <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/contrib/optimizers/qhadamw.py">QHAdamW</a></p>
<p><strong>Augmentation</strong>
Compositions are in <code>data_loader.augmentation.py</code>.</p>
<p>I made one custom augmentation - I modified Cutout to apply to masks. I wasn't sure if this would actually be better than only applying Cutout to the image - because the ground truth bounding boxes were large and covered areas that actually weren't very cloudy. It wasn't obvious from my experiments which worked better - but they both helped, so I just added them both to the available random configuration options for training.</p>
<p><strong>Image Sizes</strong>
I wanted to use images sizes divisible by 32 so they would work without rounding effects, so I used the following which maintained the original 1400:2100 aspect ratio:</p>
<ul>
<li>256x384</li>
<li>320x480</li>
<li>384x576</li>
<li>448x672</li>
</ul>
<p>Most models were trained using 320x480. I didn't notice any improvement using larger image sizes, but I figured it might help the ensemble to use diverse sizes.</p>
<p><strong>Pseudo Labels</strong>
I used my ensemble trained on the official training data to predict masks for the ~4000 images I downloaded. I then removed any images without masks, and trained on the rest.</p>
<p>In contrast to some of the other people that used pseudo labels, I did not make my thresholds harsher for selecting pseudo labels. My rationale was that since most images included 2+ classes, increasing the thresholds to be 'safe' would likely mean missing the 2nd class in many images - leading to lots of false negative labels in my pseudo labels.</p>
<p>I used a <a href="https://github.com/khornlund/pytorch-balanced-sampler">balanced sampler</a> to include 4 pseudo labelled samples per batch (typically batch sizes were 10-16).</p>
<h2>Post-Processing</h2>
<p><strong>TTA</strong>
I used flips from <a href="https://github.com/qubvel/ttach">TTAch</a></p>
<p><strong>Segmentation Thresholds</strong>
I experimented with a bunch of different ways to threshold positive predictions, as
the dice metric penalises false positives so heavily.</p>
<p>I started out by using the following threshold rule:</p>
<ol>
<li>Outputs must have N pixels above some <em>top threshold</em>. I started out using N ~ 8000 for each class, and a top threshold of ~0.57.</li>
<li>For predictions that pass (1), produce a binary mask using <em>bot threshold</em> of ~0.4.</li>
</ol>
<p>I used the continuous output of the classifier to modulate these thresholds. Ie. if the classifier was high, I would reduce the min size requirement, or the top threshold.</p>
<p>In the end I simply used maximum pixel prediction and no min size.</p>
<p>The distribution of predictions for the different classes is actually pretty interesting:</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2116899%2Ff665763ea6d3668f7514f747997ad71d%2Faverage-prediction-distribution.png?generation=1574246988235284&alt=media" alt=""></p>
<p>Class 1 has very nice bimodal distribution. This suggests it was the easiest to learn.</p> |
Understanding Clouds from Satellite Images | 3rd silver place key points | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>My models are not special, since I am completely new to computer vision and segmentation in particular. What worked is that I trained a bunch of different encoders, pseudo-label thresholds, then combine around 10 models, search for best postprocessing parameters through CV. That's simply it.</p>
<p>First I would like to thank Max Planck Institute and Kaggle for hosting this interesting competition.</p>
<p>I would like to share some of the key points of my 3rd place (silver) solution :-) It sounds cool right? (well I love to make that top silver medal the most out of it, forgive me :P)</p>
<h3>1) Cutmix augmentation</h3>
<p>Thinking about it naturally, cutmix is the best way to deal with this competition: we can cut a part of one image and paste it into another image. This idea came to my mind before I knew of its academic existence; I later found the official paper about it.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fb8053a93d2452d3417e3bfe0100ea953%2Fcutmix.png?generation=1574165517927124&alt=media" alt=""></p>
<p>How to do that in code?
I searched for augmentation packages but found it hard to flexibly code it my way, so I decided to do it manually.
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fbc80b1cb83404da2281afe27ec2f7f72%2Fcutmixcode.png?generation=1574165662592266&alt=media" alt="">
The difference between doing cutmix or not is just an extra section in <code>__getitem__</code> of the data generator. Here, <code>indexes_augment</code> are the random indexes picked from the training data, and <code>w_cutmix</code> and <code>h_cutmix</code> are the width and height of the crop. So I just get a random starting position in width and height within the originally drawn image (<code>X</code>) and insert a part of the other image (<code>Xc</code>) into it. </p>
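<p>A minimal stand-alone version of this cut-and-paste step for an image/mask pair (not the exact generator code from the screenshot; shapes and names are assumptions) could be:</p>
<pre><code>
import numpy as np

def cutmix_pair(image, mask, other_image, other_mask, w_cutmix=128, h_cutmix=128):
    """Paste a random (h_cutmix, w_cutmix) patch from another sample into this one,
    applied identically to the image and its segmentation mask."""
    img, msk = image.copy(), mask.copy()
    h, w = img.shape[:2]
    y0 = np.random.randint(0, h - h_cutmix)
    x0 = np.random.randint(0, w - w_cutmix)
    img[y0:y0 + h_cutmix, x0:x0 + w_cutmix] = other_image[y0:y0 + h_cutmix, x0:x0 + w_cutmix]
    msk[y0:y0 + h_cutmix, x0:x0 + w_cutmix] = other_mask[y0:y0 + h_cutmix, x0:x0 + w_cutmix]
    return img, msk
</code></pre>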
<p><strong>Cutmix boosted both LB and CV by 0.004.</strong></p>
<h3>2) Pseudo-label</h3>
<p>Pseudo-label only works if we correctly select good samples, as well as the correct number of samples. I did this by assessing the <code>quality</code> of each predicted validation image by calculating:
<code>quality = (number of pixels with probability &gt; top) + w*(number of pixels with probability &lt; bot)</code>. Here, <code>top</code> can take values from [.7, .75, .8, .85, .9], <code>bot</code> can take values from [.1, .15, .2, .25, .3], and <code>w</code> is the weight of low-value pixels as compared to high-value pixels, which can be taken from, say, [.1, .5, 1, 2, 10]. </p>
<p>I get the <code>quality</code> of all validation data, rank it, select the <code>nb_samples</code> most confident samples from it, and look at the score. I searched over the full validation set and got a result something like this
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1829450%2Fbdb7de2de3781797e891e7eb8ec93508%2Fpseudo.png?generation=1574166277986726&alt=media" alt=""></p>
<p>So, I can manually decide <code>bot</code>, <code>top</code>, <code>w</code>, and <code>nb_samples</code> as long as <code>nb_samples</code> are reasonable with the corresponding score. For example, <code>bot=.1</code>, <code>top=.7</code>, <code>w=1</code>, and <code>nb_samples</code>=1000 (with corresponding <code>dice=0.77xx</code>), which means the most 1000 confident predictions out of 5546 train images can have that good dice. Then I can pick up the same ratio of images from test predictions, which is (1000/5546*3698).</p>
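<p>In code, the quality score and the selection of the most confident samples could be sketched as follows; the default values and array shapes are assumptions for illustration:</p>
<pre><code>
import numpy as np

def quality_score(prob_mask, top=0.7, bot=0.1, w=1.0):
    """prob_mask: predicted probability map(s) for one image."""
    confident_high = (prob_mask > top).sum()
    confident_low = (prob_mask < bot).sum()
    return confident_high + w * confident_low

def select_confident(prob_masks, nb_samples, top=0.7, bot=0.1, w=1.0):
    scores = np.array([quality_score(p, top, bot, w) for p in prob_masks])
    order = np.argsort(-scores)      # most confident first
    return order[:nb_samples]        # indices of the samples to pseudo-label
</code></pre>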
<p><strong>Pseudo labelling boosted around 0.003 on both CV and LB.</strong></p>
<h3>3) Estimating private LB distribution and decide to trust CV</h3>
<p>First, I did a test on private LB, based on <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/109793#latest-631950">this topic</a>. </p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1528571%2F43771b2c3fd35e39047f55b552604852%2FLB_probing.png?generation=1569149948996763&alt=media" alt=""></p>
<p>From that probing result, we need to make 1 assumption:
"Train set and full test set should have the same distribution of classes."</p>
<p>Then, by submitting each class as empty (others as 1-pixel masks), we can know the percentage of each class in the public LB. Then the assumption I make will allow us to know the percentage of each class in the private LB. The result is:</p>
<p>Train data: Fish 49.85%, Flower 57.35%, Gravel 47.00%, Sugar 32.36%.
Private test data: Fish 49.94%, Flower 56.76%, Gravel 47.20%, Sugar 31.30%.</p>
<p>As you can see, the distributions of the private test set and the train set are very similar, allowing me to completely trust CV. Therefore, during the whole competition, I never probed the LB with submissions, but only stuck with full k-fold to search for post-processing parameters. <strong>This is important, as it guides what we do every day in the competition</strong>. And you can see that I jumped on the private LB, and I also selected my possibly best submission.</p>
<p>Finally, I still would like to emphasize again that late sharing should not be encouraged. I have a bad thought that whenever I see the excessive sharers around in future competitions, I would be very disappointed, and discouraged from competing. In other words I am somehow "scared" of their existence. </p>
<p>Thanks for reading!</p>
<p>Thanks Chris. The jump was just from ensembling, which I did only in the last 2 days. Before jumping, my single models scored around 0.669 public LB, after ensemble public 0.674 LB.</p>
<p>It needs around 20 epochs I think</p> |
Understanding Clouds from Satellite Images | 4th Place Solution: Stabilizing Convergence in Understanding Clouds | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><img src="https://storage.googleapis.com/kaggle-media/competitions/MaxPlanck/Teaser_AnimationwLabels.gif"></p>
<p>First of all, I would like to express my gratitude and appreciation to the following parties for organizing such a great competition:
- <a href="https://www.kaggle.com">Kaggle</a>
- [Max Planck Institute for Meteorology] (<a href="https://www.kaggle.com/MaxPlanckInstitute">https://www.kaggle.com/MaxPlanckInstitute</a>)</p>
<p>Besides, I would like to use this opportunity to thank my fellow kagglers for all the insightful posts in the discussion forum of various competitions. I have also learned a lot of stuffs and gained knowledge by reading from past solutions. There is a good thriving culture of idea sharing and contributions which I have found in every corner of Kaggle and I loved to be part of it.</p>
<h2>The Main Challenge</h2>
<p>The challenge that I faced initially in this competition is that many models of different architectures tend to overfit easily in the early training stage, especially the larger and deeper models such as SE-ResNext-101 and EfficientNet B5-B7. I suspected the culprit might be that the given labels are too noisy, which increases the tendency of the model to overfit to the noise in the training data, as the labels were determined by the union of the areas marked by all annotators. Also, the shape of the labels provided is rectangular instead of the exact shape fitted to the boundary of the cloud patterns. I understand the reasons behind <a href="https://arxiv.org/pdf/1906.01906.pdf">these decisions made by the competition host</a>, and here goes my whole journey in this competition, which revolved around stabilizing the convergence when training models.</p>
<h2>Solution Overview</h2>
<p>My solution for this competition is mainly comprised of the followings:</p>
<ul>
<li><p><strong>Pure segmentation models without false positive classifier</strong>
After reaching public LB 0.6752 with a segmentation model, I trained a few classifiers using Resnet34, SE-ResNext-50 and EfficientNet-B4, but their performance was pretty unstable (+/- 0.003 ~ 0.010) in local cross-validation over 10 folds. Thus, I discarded the classifiers and decided to stick with segmentation models.</p></li>
<li><p><strong>Network Architectures</strong>
I've used the awesome implementations of various models from <a href="https://github.com/qubvel/segmentation_models.pytorch">segmentation_models.pytorch</a>, <a href="https://github.com/Cadene/pretrained-models.pytorch">pretrained-models.pytorch
</a>, <a href="https://github.com/lukemelas/EfficientNet-PyTorch">EfficientNet-PyTorch</a> and <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/115787#671393">Resnet34-ASPP</a> from <a href="/hengck23">@hengck23</a>. My final ensemble used 7 folds of EfficientNet-B4-FPN and 3 folds of Resnet34-ASPP as they have better performance and more stable in error convergence in my case after running rounds of experiments using various network architectures.</p></li>
<li><strong>RAdam Optimizer</strong>
RAdam helped to stabilize training error convergence as it is less sensitive to learning rate change in my case, thus minimizing the variance.</li>
<li><strong>Flat threshold of 0.4 for all classes</strong>
Threshold of 0.4 yielded the highest cross validation DICE score when compared in the range of [0.4, 0.5, 0.6], no further fine-tuning of threshold is done.</li>
<li><strong>Minimum segmentation mask size of 5000 pixels for all classes</strong>
The mask size threshold is set to be just high enough to filter out noises, no any other post-processing methods is used.</li>
<li><strong>Input Size</strong>
Downsized from the raw size of 1400 x 2100 to 700 x 1050. After applying augmentations, it is downsized again from 700 x 1050 to 384 x 576.</li>
<li><strong>Augmentations used in training</strong>
<ul><li>horizontal flip</li>
<li>vertical flip</li>
<li>random shift, scale and rotate</li></ul></li>
<li><p><strong>Test-time Augmentations (TTA)</strong>:</p>
<ul><li>horizontal flip</li>
<li>vertical flip</li>
<li>180 degree flip (horizontal + vertical flip)</li></ul></li>
<li><p><strong>Pseudo-labeling</strong>
I've used two approaches for pseudo-labeling: one in which only the confident pseudo-labels are selected and used in training, and another in which pseudo-labels are generated from all the test data. In my case, training with pseudo-labels from all test data was more robust and stable in terms of error convergence and achieved a higher DICE score.</p></li>
<li><strong>Ensemble with equal weight averaging</strong> </li>
<li><strong>Trained initially with BCE Loss, fine-tuned with Symmetric Lovasz Loss originated from this <a href="https://arxiv.org/abs/1705.08790">paper</a> and modified by <a href="/tugstugi">@tugstugi</a></strong>
Below is the PyTorch implementation code of Symmetric Lovasz Loss:
<pre><code>
def symmetric_lovasz_loss(outputs, targets):
    batch_size, num_class, H, W = outputs.shape
    outputs = outputs.contiguous().view(-1, H, W)
    targets = targets.contiguous().view(-1, H, W)
    return (lovasz_hinge(outputs, targets)
            + lovasz_hinge(-outputs, 1 - targets)) / 2
</code></pre></li>
<li><strong>GPU used</strong>
<ul><li>2 x RTX2080Ti</li></ul></li>
</ul>
<h2>Conclusion</h2>
<p>I think local cross-validation is very important and we should always believe in it, even though the score shown on the public LB might be lower or higher, as it is only computed on a minor subset of the test dataset. Besides, the <strong>combination of the RAdam optimizer, Symmetric Lovasz Loss, pseudo-labeling and ensembling</strong> has helped significantly in stabilizing the convergence and improving the score.
<br><br>
Thanks for reading! See you again in upcoming competitions.</p>
<p>Thanks! <a href="/veeralakrishna">@veeralakrishna</a> </p>
<p>Thanks! <a href="/hanjoonchoe">@hanjoonchoe</a> </p>
<p>Thanks! Congrats for getting a silver medal too <a href="/phunghieu">@phunghieu</a> </p>
<p>Thanks! <a href="/corochann">@corochann</a> </p>
<p>Thanks Chris <a href="/cdeotte">@cdeotte</a>! I have learned a lot from your posts too </p>
<p>Thanks! <a href="/titericz">@titericz</a> </p> |
Understanding Clouds from Satellite Images | 5th place solution(single segmentation model private lb 0.66806) | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>I use max mask pixel probability as labels, no classifiers.</p>
<p>Thank you, and also congrats on your silver.
For a single model's CV and LB score, TTA can improve it by about 0.002.</p>
<ol>
<li>I have tried many different image sizes to find which one works better, then I use that one.</li>
<li>I trained each fold from scratch, so I get a robust CV score and no train data leakage.</li>
<li>I wrote multi-scale TTA code to do this: input different scales to the model, then resize the model's mask outputs to the same size and multiply by the weights. </li>
</ol>
<ol>
<li>I used resnet50 for the experiments, kept the other settings fixed and trained with different augmentation sets, then checked which augmentation sets gave a better CV score.</li>
<li>I used a threshold search to find thresholds like this <code>'0.21'</code>.</li>
</ol>
<p>no</p>
<p>I have no plan to release the code, nothing special in it.
The main tricks I used have been introduced above. </p>
<p>Congratulations to all winners in this competition! This is my first gold medal. I feel so happy.</p>
<p>My solution is ensemble of 3 segmentation models</p>
<h2>Augmentation</h2>
<p>In this competition I found image augmentation to be very important. I have tried many different augmentation sets and finally found one set that works well for me. I use albumentations to do image augmentation.
<code>
aug = Compose([
ShiftScaleRotate(scale_limit=0.5, rotate_limit=0, shift_limit=0.1, p=0.6, border_mode=0),
OneOf([
ElasticTransform(p=0.5, alpha=50, sigma=120 * 0.02, alpha_affine=120 * 0.02),
GridDistortion(p=0.5),
OpticalDistortion(p=0.5, distort_limit=0.4, shift_limit=0.5)
], p=0.8),
RandomRotate90(p=0.5),
Resize(352, 544),
VerticalFlip(p=0.5),
HorizontalFlip(p=0.5),
OneOf([
IAASharpen(alpha=(0.1, 0.3), p=0.5),
CLAHE(p=0.8),
GaussNoise(var_limit=(10.0, 50.0), p=0.5),
#GaussianBlur(blur_limit=3, p=0.5),
ISONoise(color_shift=(0.01, 0.05), intensity=(0.1, 0.5), p=0.3),
], p=0.8),
RandomBrightnessContrast(p=0.8),
RandomGamma(p=0.8)])
</code></p>
<h2>Models</h2>
<p><strong>Model1:</strong>
<code>
Encoder: efficientnet-b1
Decoder: unet
Image Input Size: 416x608
TTA: hflip, vflip, multi-scale: [(352, 544), (384, 576), (448, 640), (480, 672)]
Threshold: threshold label = [0.85, 0.92, 0.85, 0.85], threshold pixel = [0.21, 0.44, 0.4, 0.3]
Score: 9-fold cv = 0.66002, public lb = 0.67070, private lb = 0.66806
</code>
<strong>Model2:</strong>
<code>
Encoder: efficientnet-b3
Decoder: fpn
Image Input Size: 352x544
TTA: hflip, vflip, multi-scale: [(320, 512), (384, 576)]
Threshold: threshold label = [0.85, 0.9, 0.9, 0.85], threshold pixel = [0.35, 0.4, 0.42, 0.42]
Score: 9-fold cv = 0.65646, public lb = 0.66426, private lb = 0.66687
</code>
<strong>Model3:</strong>
<code>
Encoder: resnet50
Decoder: unet
Image Input Size: 352x544
TTA: hflip, vflip, multi-scale: [(320, 512), (384, 576)]
Threshold: threshold label = [0.9, 0.92, 0.87, 0.82], threshold pixel = [0.35, 0.51, 0.31, 0.3]
Score: 9-fold cv = 0.65715, public lb = 0.66541, private lb = 0.65973
</code></p>
<p>All models use bcedice loss and Adam optimizer. Run threshold search to get the threshold label and threshold pixel</p>
<h2>Ensemble</h2>
<p>I use cv and public lb score to roughly set model weights, and run threshold search to get the threshold.
<code>
Model Weight: model1, model2, model3 = [4, 1, 2]
Threshold: threshold label = [0.84, 0.9, 0.85, 0.8], threshold pixel = [0.25, 0.43, 0.35, 0.35]
Score: 9-fold cv = 0.66449, public lb = 0.67601, private lb = 0.67080
</code></p>
<p>Finally thanks to <a href="/hengck23">@hengck23</a> rKeng, I learn a lot from his code and ideas.</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p>Thanks!</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations! and also thanks for your sharing in this competition!</p>
<pre><code>
sizes = [(352, 544), (416, 608)]  # 384x576
weights = [0.25, 0.25]
for i in range(len(sizes)):
    input_scale = F.interpolate(input, size=sizes[i], mode='bilinear')
    logit = data_parallel(net, input_scale)
    probability = torch.sigmoid(logit)

    probability_mask += F.interpolate(probability, size=(350, 525), mode='bilinear') * weights[i]
    probability_label += F.adaptive_max_pool2d(probability, (1, 1)).view(batch_size, -1) * weights[i]
    num_augment += weights[i]
</code></pre>
Understanding Clouds from Satellite Images | 6th simple solution, pre-training, single model private 0.66927 | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks!</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p><code>(1) what are your optimizers and learning rates on the 1st and 2nd stages?
In order to be the most effective, do we have to carefully adjust small learning rates in the 2nd-stage? (or do we have to freeze some layers first?)</code></p>
<p><strong>1st-stage</strong>
optimizer: AdamW
scheduler: WarmupLinearSchedule
learning_rate: 5e-04</p>
<p><strong>2nd-stage</strong>
optimizer: AdamW
scheduler: MultiStepLR (milestones=[5, 10], gamma=0.1)
learning_rate: 5e-04</p>
<p>In my experiments, the small initial learning rate(5e-05) was not good.
(Like the BERT encoder, I think the fine-tuning stage(2nd stage) is better to learn the entire weight of the model.)</p>
<p><code>(2) in the 1st-stage, what is the label of the classification head?</code></p>
<p>Both the 1st and 2nd stages used the same model architecture; therefore, the classification head is not learned at 1st-stage. (no need label)</p>
<p><code>(3) I am re-thinking about the 1st-stage, isn't it should be easy for neural network to learn the cloud mask rule? (pixel &gt;115) . (Even shallow networks should be able to learn this? )</code></p>
<p>Yeah, I think it's easy. So I used only 5 epochs(with WarmupLinearSchedule) to prevent overfitting.</p>
<p>PS. My wife loves cats more than dogs. 😂 </p>
<p>First of all, I would like to thank the hosting organization that hosted this competition and Kaggle. Like any competition, this competition was also hot until the end. So, I want to congratulate Kagglers who struggled until the end of this competition.</p>
<p>I will summarize and write down the part of my solution that you will be interested in. It's <code>pre-training</code></p>
<h1>pre-training</h1>
<p>The challenge of this competition is to segment according to the shape of the cloud. Therefore, I tried to pre-train the model to learn the shape of the cloud.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F7e5d1fb4cbda36a5bcad97879a424176%2F1st-training.png?generation=1574148986382141&alt=media" alt=""></p>
<p>Because the clouds are white, I generated <code>cloud_mask</code> with the threshold "pixel > 115". Then, I used it as the label. (Since the total number of image files is 9244, this also generates 9244 cloud masks.)</p>
<p>After pre-training, I ran a 2nd-stage training.
The pre-trained (1st-stage) model weights are used as the initial values of the 2nd-stage model weights.</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F6ed4418967a6461295eed180c047feda%2F2nd-training.png?generation=1574149060524535&alt=media" alt=""></p>
<p>This training process boosted my CV 0.005~0.01. So, my single model score is as follows.</p>
<p>| model | private | public |
| --- | --- | --- |
| efficientnet-b4, unet | 0.66927 | 0.67437 |
| efficientnet-b4, fpn | 0.66827 | 0.67508 |</p>
<p><img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F95009ca5bb429732cb523d971978e0fd%2F.png?generation=1574149018151413&alt=media" alt=""></p>
<p>The rest is not special, so I'll skip the description. 😁
Thanks for your reading!</p>
<p>I congratulate you too :)</p>
<p>I'm sorry I didn't explain it clearly. 😂 </p>
<blockquote>
<p>In stage1, the label is the mask? or the image itself?</p>
</blockquote>
<p>The label is the mask. <br>
One example of mask
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F1482ca9ded9080c3200e0607852587ea%2F009e2f3_true%20(1).jpg?generation=1574152348586165&alt=media" alt="mask"></p>
<blockquote>
<p>Could you elaborate on this part? this increased the number of training samples for 2nd stage?</p>
</blockquote>
<p>Pre-trained(1st stage-training) model are used as the initial value of 2nd-stage model weights.</p>
<p>Yeah, that's right.</p>
<ul>
<li>1st-stage training: all images, generatedmask </li>
<li>2nd-stage training: train images, realmask</li>
</ul>
<p>Thanks for your congrats! </p>
<p>Thanks for your congrats!
I just chose roughly. 😁 (A rule of thumb, by eye.)</p>
<p>Thanks for your congratulations! 😊 </p>
<p>Thanks for your congratulations!</p>
<p><a href="/phoenix9032">@phoenix9032</a> No, > 115 doesn't seem to capture all the clouds.</p>
<p>No, I just used this code to generate a cloud_mask.
<code>cloud_mask = (image > 115).astype(int)</code></p>
<ul>
<li><p>image<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2Fbfb36270c4b0ef8ff920d3d633519fea%2F009e2f3.jpg?generation=1574176746316088&alt=media" alt="image"></p></li>

<li><p>cloud_mask = (image > 115).astype(int)<br>
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1658108%2F1482ca9ded9080c3200e0607852587ea%2F009e2f3_true%20(1).jpg?generation=1574152348586165&alt=media" alt="cloud_mask"></p></li>
</ul>
<p>wow~ thanks!</p>
<p>Thanks. I congratulate you too :)</p>
<p><a href="/cdeotte">@cdeotte</a> Thank you for your clarification! 👍 </p>
<p>Yep, you're right.</p>
<p>Thanks for your congratulations!</p>
<p>Thanks for your congratulations!</p>
<p><a href="/hengck23">@hengck23</a> Thanks for your congratulations, and I didn’t think like that. Interesting!</p> |
Understanding Clouds from Satellite Images | 86-th(bronze) writeup - solution description and lessons learned. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p><strong>solution summary</strong>
My final submission is a voting average of my 5 best submissions by public LB. Each of these 5 submissions is a 5-fold voting average of an efficientnet-b4 Unet or FPN,
trained with BCE-Dice or plain BCE loss, with or without TTA, on image sizes 512x352 and 640x320.
Voting average settings: a minimum of 4 out of 5 non-empty masks to consider a mask non-empty, and a minimum of 2 positive votes to consider a pixel positive in a non-empty mask.</p>
<p><strong>Pipeline</strong>
The code is based on this great <a href="https://www.kaggle.com/artgor/segmentation-in-pytorch-using-convenient-tools">kernel</a> with a few important changes that allowed me to get a single-model
public score of 0.66+:
- efficientnet-b4 backbone
- border mode cv2.BORDER_REFLECT_101(default) in albumentations ShiftScaleRotate
- Radam optimizer
- Custom learning rate scheduler (sketched below): the decoder starts at 1e-3 and decays to 1e-4 over 10 epochs, the encoder starts at 0 and increases to 1e-4 over 10 epochs, then both
decay in steps to 1e-5 over the next 30 epochs, stepping every 3 epochs. Usually the best checkpoint is at epoch ~22.</p>
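<p>A sketch of such a two-group schedule with LambdaLR is below; the exact decay shape, the step factor and the use of AdamW instead of RAdam (which comes from an external package) are assumptions that only approximate the schedule described above. The segmentation head's parameters are omitted for brevity.</p>
<pre><code>
import torch
import segmentation_models_pytorch as smp
from torch.optim.lr_scheduler import LambdaLR

model = smp.Unet('efficientnet-b4', classes=4)

# two parameter groups with different base learning rates
optimizer = torch.optim.AdamW([
    {'params': model.decoder.parameters(), 'lr': 1e-3},
    {'params': model.encoder.parameters(), 'lr': 1e-4},
])

def decoder_lambda(epoch):
    if epoch < 10:
        return 1.0 - 0.9 * epoch / 10             # 1e-3 -> 1e-4 over the first 10 epochs
    return 0.1 * (0.5 ** ((epoch - 10) // 3))      # then step down towards 1e-5

def encoder_lambda(epoch):
    if epoch < 10:
        return epoch / 10                          # warm up 0 -> 1e-4
    return 0.5 ** ((epoch - 10) // 3)              # then step down

scheduler = LambdaLR(optimizer, lr_lambda=[decoder_lambda, encoder_lambda])
</code></pre>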
<p>Side note: I switched from Keras to PyTorch in this competition and I am quite happy with that. The most important reasons are:
1. Mixed precision training that can be enabled with 1 line of code in catalyst
2. Parameter groups in optimizer that allow fine control over model learning.</p>
<p><strong>Here are some highlights of the lessons learned.</strong>
1. Even in a competition like this where the CV/LB discrepancy is small, a single-model score improvement after a hyperparameter change doesn't prove anything. N-fold should be used for validation every time.
2. Too much parameter fitting on the validation set (threshold, min_size) easily leads to overfitting to validation. Using constant values eventually turns out better.
3. Models with TTA inference don't improve the public LB most of the time, but are consistently better on private.</p>
<p><strong>Things that didn't work for me:</strong>
1. Pseudo labeling
2. Loss functions beyond BCEDice/BCE.
3. Classification(stopped adding value after LB reached 0.66)</p>
<p><strong>Things I wish I had tried:</strong>
1. Implement final thresholded dice as a metric and use it for checkpointing
2. triplet thresholding(<a href="https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824">https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107824</a>), or double flat thresholds like Heng did.
3. implement good k-fold validation scheme early and make a grid search for image size/loss/augmentations/tta
4. pretraining - <a href="https://www.kaggle.com/c/understanding_cloud_organization/discussion/118017#latest-676633">https://www.kaggle.com/c/understanding_cloud_organization/discussion/118017#latest-676633</a></p>
<p><strong>Conclusion</strong>
I got my first Kaggle medal, was quite close to the silver zone and didn't suffer a major shakeup (I actually enjoyed it, gaining +16 positions), so the result is quite positive for me. But I still have a lot to learn.</p>
<p>Thanks, Vlad.
Yes, almost correct. I train 5 models as you describe, validate and create 5 submissions.
Then I have to blend 5 binary masks for each image/label. This is done the following way:
1. If fewer than 4 masks are non-empty (2 or more empty), I submit an empty mask.
2. If I have 4+ non-empty masks, then I set each pixel to 1 if it is 1 in two or more masks. </p>
<p>This approach performed consistently better than prediction averaging for me.</p>
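<p>The voting scheme described above can be sketched as follows; mask shapes and the stacking of the 5 fold predictions are assumptions:</p>
<pre><code>
import numpy as np

def vote_blend(masks, min_nonempty=4, min_votes=2):
    """masks: list of 5 binary masks (H, W) for one image/label, one per fold."""
    masks = np.stack(masks)                          # (5, H, W)
    nonempty = (masks.sum(axis=(1, 2)) > 0).sum()
    if nonempty < min_nonempty:                      # 2 or more folds say "empty" -> submit empty
        return np.zeros(masks.shape[1:], dtype=np.uint8)
    votes = masks.sum(axis=0)                        # per-pixel vote count
    return (votes >= min_votes).astype(np.uint8)
</code></pre>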
<p>From scratch.
Using checkpoint from previous fold would introduce a leak and break validation.</p> |
Understanding Clouds from Satellite Images | 95th Place Solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>It has been great to work with <a href="/muhakabartay">@muhakabartay</a>, <a href="/carlolepelaars">@carlolepelaars</a>, <a href="/roydatascience">@roydatascience</a>, <a href="/lightnezzofbeing">@lightnezzofbeing</a> . We have open sourced the code in github @ <a href="https://github.com/LightnessOfBeing/Cl0uds">https://github.com/LightnessOfBeing/Cl0uds</a>. </p> |
Understanding Clouds from Satellite Images | Easy silver in last days [55th] | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>step 2 - separate binary segmentation model for each class</p>
<p>p.s. merged to master</p>
<p>You are right, but the decoder output is also multiplied by the 'label' output to generate the mask (see the picture).</p>
<h2>Easy silver in last days</h2>
<p>I adapted my pipeline from Severstal Steel Defect Detection and was able to get a silver medal in the last two days with just 6 submissions. Here is a short description of the 55th place solution.</p>
<p>2-step pipeline:
1) Multi-task network (classification + segmentation) used as a classifier to remove empty masks
2) Binary segmentation model for each class</p>
<h3>1st step.</h3>
<p>I trained a 5-fold <code>FPN(resnet34) + aux classification output</code> on 480x640 images using <code>Flip</code> and <code>RandomBrightness</code> as augmentations. The model trains for just 6-7 epochs and then starts to overfit; I do nothing about that, just save the top 5 checkpoints according to the metric.</p>
<p>Loss (segmentation head): bce+dice
Loss (classification head): bce
Optimizer: AdamW
Postprocessing: remove masks less than 10000 pixels
Thresholds: [0.6, 0.6, 0.6, 0.6]
<img src="https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F940556%2Faec538c6a536cc9dd070697b4e416870%2F2019-11-19%2010-58-58.png?generation=1574150366439784&alt=media" alt=""></p>
<h3>2nd step.</h3>
<p>For each class I trained <code>2 x Unet(se_resnext50_32x4d)</code> only on images with masks of that class,
with the same optimizer, image size and augmentations.</p>
<p>Loss: bce+dice
Thresholds: [0.4, 0.4, 0.4, 0.4]</p>
<h3>Ensemble</h3>
<p>For all models I applied checkpoint weights averaging (+0.005-0.01 on validation).
Models within each stage were simply averaged, with Flip TTA.</p>
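<p>Flip TTA can be done with the ttach library linked below via a one-line wrapper; a minimal sketch (the FPN construction here is only for illustration):</p>
<pre><code>import ttach as tta
import segmentation_models_pytorch as smp

model = smp.FPN('resnet34', classes=4, activation='sigmoid')  # any trained segmentation model

# average predictions over horizontal/vertical flips and the original image
tta_model = tta.SegmentationTTAWrapper(model, tta.aliases.flip_transform(), merge_mode='mean')
</code></pre>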
<h3>Useful links</h3>
<ul>
<li>Segmentation Models: <a href="https://github.com/qubvel/segmentation_models.pytorch">https://github.com/qubvel/segmentation_models.pytorch</a></li>
<li>Test Time Augmentation for PyTorch: <a href="https://github.com/qubvel/ttach">https://github.com/qubvel/ttach</a></li>
</ul>
<p><strong>And congratulations to winners!</strong></p>
<p>I have prepared a new feature for the SMP library that adds an aux classification output to models (see <a href="https://github.com/qubvel/segmentation_models.pytorch/tree/models-refactoring">https://github.com/qubvel/segmentation_models.pytorch/tree/models-refactoring</a>)</p>
<p>According to this implementation, my multi-task network is defined as follows:</p>
<pre><code>import segmentation_models_pytorch as smp

class GatedFPN(smp.FPN):
    def forward(self, x):
        # with aux_params set, the base model returns (mask, label)
        mask, label = super().forward(x)
        return dict(
            # gate the decoder output with the classification output
            mask=mask * label.reshape(*label.size(), 1, 1),
            label=label,
        )

aux_params = dict(classes=4, activation='sigmoid', dropout=0.5, pooling='avg')
model = GatedFPN('resnet34', encoder_weights='imagenet', classes=4, activation='sigmoid', aux_params=aux_params)
</code></pre>
<pre><code>import torch
from collections import OrderedDict
from typing import List

checkpoints_weights_paths: List[str] = ...  # sorted in descending order by score
model: torch.nn.Module = ...

def average_weights(state_dicts: List[dict]):
    average_dict = OrderedDict()
    for k in state_dicts[0].keys():
        average_dict[k] = sum([state_dict[k] for state_dict in state_dicts]) / len(state_dicts)
    return average_dict

all_weights = [torch.load(path) for path in checkpoints_weights_paths]

best_score = 0
best_weights = []

# greedily add checkpoints while the averaged weights improve the validation score
for w in all_weights:
    current_weights = best_weights + [w]
    average_dict = average_weights(current_weights)
    model.load_state_dict(average_dict)
    score = evaluate_model(model, ...)
    if score &gt; best_score:
        best_score = score
        best_weights.append(w)
</code></pre>
<p><a href="https://gist.github.com/qubvel/70c3d5e4cddcde731408f478e12ef87b">https://gist.github.com/qubvel/70c3d5e4cddcde731408f478e12ef87b</a></p>
<p>Thanks!
To select images for the second stage I use the max pixel value from the <code>mask</code> output of the first stage. Actually, at first I thought the first stage would be enough and created a submission with only these models, but the result was not satisfying. After that I decided to train stage 2 models and replace the masks in that submission with new ones (so maybe using the label output could give a better score, but I did not have time and submissions to test it).
In step two I used just a simple Unet without a classification head.</p>
<p>I use pytorch optimizers</p> |
Understanding Clouds from Satellite Images | Finally GM & 1st time won prize money! And 3rd place solution. | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>Thanks. </p>
<p>Thanks. </p>
<p>Thanks. </p>
<p>Same to you. </p>
<p>Thanks. I haven't decided yet about releasing the code; it is messy. </p>
<p>I have mentioned the batch size, 16 for all models. </p>
<p>The very first version of seg2 is 4 binary segmentation models trained on non-empty images only (channel wise, so different channels will have different numbers of train images). </p>
<p>The non-empty predictions in sub1 are replaced by those from seg2, since seg2 is better at giving accurate masks. </p>
<blockquote>
<p>What do you think made such an improvement possible?</p>
</blockquote>
<p>My 2-stage segmentation pipeline. It is much easier to improve the empty and non-empty predictions separately. </p>
<p>The soft-dice loss is basically soft-f1 loss at pixel level. </p>
<p>Yeah, you are so close! This technique makes improving models much easier, since you only need to focus on one thing at a time. </p>
<p>UPDATE: code is now available <a href="https://github.com/naivelamb/kaggle-cloud-organization">here</a>.</p>
<p>Thanks for Max Planck Institute for Meteorology and Kaggle for hosting such an interesting competition. Congrats to all the winners.</p>
<p>The key in my solution is training two segmentation models: <strong>seg1</strong> trained on all data with BCE loss, and <strong>seg2</strong> trained on non-empty images only with soft DICE loss. I think it works because this competition basically has two tasks: 1) detect the empty images; 2) predict accurate masks for the non-empty images. The two segmentation models address these two tasks respectively. </p>
<h2>How I come up with this.</h2>
<p>I started the competition with resnet34-FPN using BCE loss (<strong>seg1</strong>). This model achieves ~0.608 on LB and the major contribution comes from capturing the empty masks: it captures ~80% of them. I tried a lot to improve the non-empty part, like using a combo loss of BCE and DICE, but it is hard to improve the neg-dice (dice score for the empty masks) and pos-dice (dice score for the non-empty masks) simultaneously.</p>
<p>To predict the non-empty masks accurately, I decided to train 4 individual segmentation models for the non-empty images and then ensemble them together. Since all of its training images are non-empty, we can use soft DICE loss directly and the model will focus on predicting accurate masks. I used exactly the same network structure, resnet34-FPN (<strong>seg2</strong>). Then I simply replace all the non-empty predictions from <strong>seg1</strong> with the predictions from <strong>seg2</strong>. Only 1 fold of this 2-stage segmentation pipeline, no TTA, no min-size remover, no classifier, no threshold adjustment (all 0.5), could achieve LB 0.652. After including a resnet34 classifier (0.5 threshold), I got LB 0.655. </p>
<p>Later on, I managed to train all 4 classes in one model by implementing pos-only soft DICE loss. The code looks like:</p>
<pre><code>def dice_only_pos(logits, labels, labels_fc):
    # logits    -&gt; pixel level predictions
    # labels    -&gt; pixel level labels
    # labels_fc -&gt; image/channel level labels
    pos_idx = (labels_fc &gt; 0.5)   # non-empty channels
    neg_idx = (labels_fc &lt; 0.5)   # empty channels (not used by the loss)
    loss = SoftDiceLoss()(logits[pos_idx], labels[pos_idx])
    return loss
</code></pre>
<p>This loss only counts the non-empty channels and ignores all the empty channels.</p>
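<p><code>SoftDiceLoss</code> itself is not shown in the write-up; a minimal sketch of a common formulation on sigmoid probabilities (this is my assumption, not necessarily the author's exact implementation):</p>
<pre><code>import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth  # avoids division by zero for empty masks

    def forward(self, logits, targets):
        probs = torch.sigmoid(logits)
        probs = probs.reshape(probs.size(0), -1)
        targets = targets.reshape(targets.size(0), -1)
        intersection = (probs * targets).sum(dim=1)
        dice = (2.0 * intersection + self.smooth) / (probs.sum(dim=1) + targets.sum(dim=1) + self.smooth)
        return 1.0 - dice.mean()
</code></pre>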
<p>In summary the pipeline looks like:
>1. <strong>seg1</strong>: a multi-label segmentation model trained with BCE loss
>2. <strong>seg2</strong>: a multi-label segmentation model trained with pos-only soft DICE loss
>3. <strong>cls</strong>: a multi-label classifier trained with BCE loss. </p>
<p>The final submission is achieved by the following steps:
>1. Get predictions using <strong>seg1</strong>
>2. Replacing the non-empty masks from <strong>seg1</strong> by predictions from <strong>seg2</strong>
>3. Removing more empty masks using <strong>cls</strong></p>
<p>Both pixel-level (segmentation) and image-level (classifier) thresholds are 0.5. </p>
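<p>A minimal sketch of those three steps, assuming probability arrays for the whole test set (array names and layout are my own, not the author's code):</p>
<pre><code>import numpy as np

def combine_predictions(seg1_probs, seg2_probs, cls_probs, pixel_thr=0.5, image_thr=0.5):
    # seg1_probs, seg2_probs: (N, 4, H, W) pixel probabilities
    # cls_probs: (N, 4) image/channel level probabilities
    masks = (seg1_probs &gt; pixel_thr).astype(np.uint8)   # step 1: predictions from seg1
    seg2_masks = (seg2_probs &gt; pixel_thr).astype(np.uint8)
    non_empty = masks.sum(axis=(2, 3)) &gt; 0              # channels seg1 predicts as non-empty
    masks[non_empty] = seg2_masks[non_empty]            # step 2: replace them with seg2 masks
    masks[cls_probs &lt; image_thr] = 0                    # step 3: classifier removes more empties
    return masks
</code></pre>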
<h2>Baseline results for the 2-stage segmentation</h2>
<p>Model summary:
>Network: Resnet34-FPN
>Image size: 384x576
>Batch size: 16
>Optimizer: Adam
>Scheduler: reduceLR for seg1, warmRestart for seg2.
>Augmentations: H/V flip, ShiftScaleRotate and GridDistortion
>TTA: raw, Horizontal Flip, Vertical Flip</p>
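<p>A sketch of that augmentation pipeline with albumentations (the parameter values are my guess, not the author's):</p>
<pre><code>import albumentations as A

train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.1, rotate_limit=15, p=0.5),
    A.GridDistortion(p=0.5),
    A.Resize(384, 576),  # image size from the model summary above
])
</code></pre>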
<p>Results:
>1-fold: 0.664
>5-fold + TTA3: 0.669
>5-fold + TTA3 + classifier: 0.670. </p>
<p><em>TTA1 means only raw images; TTA3 means raw + H/V flip.</em></p>
<p>The rest of my work is just trying different backbones to find the best one. My final models are:</p>
<p>>seg1: densenet121-FPN, TTA1
>seg2: b7-FPN, TTA3
>cls: b1, TTA1</p>
<p>Results:
>1-fold LB: 0.673
>5-fold LB: 0.6788</p>
<h2>Ensemble</h2>
<p>I ensembled multiple seg2 models using majority vote. By including 4 models (b5-Unet, InceptionResnetV2-FPN, b7-FPN and b7-Unet), I achieved 0.6792 on LB. </p>
<h2>Pseudo Labeling</h2>
<p>I selected the pseudo labels based on a LB 0.6790 submission with the following rules (see the sketch below the model list):
>1. Empty channels with classifier prediction < 0.3
>2. Non-empty channels with classifier prediction > 0.7</p>
<p>An image is selected when all 4 of its channels satisfy one of the conditions; 835 images were selected. I retrained the b7-FPN and the b1 classifier including the pseudo-labeled samples, and the final models are:
>seg1: densenet121-FPN, TTA1
>seg2: b5-Unet + InceptionResnetV2-FPN + b7-Unet + b7-FPN + b7-FPN-PL, TTA3
>cls: b1-PL, TTA3</p>
<p><em>PL means the model is retrained with pseudo labels</em></p>
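<p>The selection rule above could look roughly like this sketch (naming is mine): a channel is confidently empty below 0.3 and confidently non-empty above 0.7, and an image qualifies only if all 4 channels are confident one way or the other.</p>
<pre><code>import numpy as np

def select_pseudo_label_images(cls_probs, low=0.3, high=0.7):
    # cls_probs: (N, 4) classifier probabilities from the LB 0.6790 submission
    confident = (cls_probs &lt; low) | (cls_probs &gt; high)
    selected = confident.all(axis=1)   # all 4 channels must be confident
    return np.where(selected)[0]       # indices of images used as pseudo labels
</code></pre>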
<p>This model achieves 0.6794 LB. </p>
<p>On the last day, I decided to optimize the classifier threshold channel-wise to achieve the best local CV, which gave me 0.6805 LB. </p>
<h2>Other things worth mentioning</h2>
<ol>
<li>My CV aligns pretty well with the LB: 1-fold CV = LB +- 0.005, 5-fold CV = LB - (0.010 ~ 0.012). This helped a lot during model development.</li>
<li>Resizing the images before training significantly reduces training time. My resnet34-FPN could finish 1 epoch of training and validation in around 1 minute on a 2080Ti. </li>
<li>For <strong>seg1</strong> and <strong>cls</strong>, complicated networks do not work, probably due to the noisy labels. For <strong>seg2</strong>, I could not make seresnext50 and seresnext101 work and I have no idea why. </li>
</ol>
<blockquote>
<p>That is key for Kaggle competitions with their discontinuous Dice metric.</p>
</blockquote>
<p>Yes, I realized it in Steel, but was unable to overcome it there. There were too many empty masks in Steel, and this approach would have made each batch very unstable. I was thinking about training 4 segmentation models but gave up in the end considering the inference time limit. </p>
<p>Good luck on your image competition journey; I wish you your first CV gold medal soon! </p>
<p>apex is your good friend. </p>
<p>Thanks Giba! </p>
<p>I used my own GPU. </p> |
Understanding Clouds from Satellite Images | Finally GM & 7th place solution | Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge | <|system|>You can successfully explain and teach others topics and concepts from Kaggle competition solution write-ups. You are a highly talented data scientist who has shown exceptional performance in this data-science Kaggle competition: Understanding Clouds from Satellite Images
<|user|>Explain and teach me concepts from Kaggle competition solution write-ups that solved this challenge
<|assistant|><p>yep, I used it.</p>
<p>The most important part is classification.</p>
<p>thank you.</p>
<p><a href="/naivelamb">@naivelamb</a> Teacher Cao is the most awesome 🐄🍺</p>
<p>yes, I use qubvel/segmentation_models.pytorch.</p>
<p>yep, it is loss</p>
<p>thank you, Giba.</p>
<p>thank you. The gold medal will come soon💪 </p>
<p>Haha, thank you so much. You remember my old avatar; I like both Corgis and cats. </p>
<p>congratulations for all kagglers.</p>
<h1>Small talk</h1>
<p>After the failure in Steel, I had no time to feel sad, so I immediately moved on to this competition and worked towards my GM. One of my former teammates, <a href="https://www.kaggle.com/naivelamb">Xuan Cao</a>, chose to go solo. Of course, he made the right choice, because he won a solo gold and became a GM. Congratulations to him.
Then I teamed up with my friends who had suffered in Steel like me: <a href="https://www.kaggle.com/mdlszhengli">Zheng Li</a>, <a href="https://www.kaggle.com/lanjunyelan">yelan</a>, <a href="https://www.kaggle.com/hesene">Jhui He</a> and <a href="https://www.kaggle.com/strideradu">Strideradu</a>.</p>
<h1>Solution</h1>
<p>> Our solution is very simple, just ensemble.</p>
<h2>Segmentation v1:</h2>
<p>Model: efficientnet e5/e7-FPN, se101-FPN, se101-Linknet
Loss: dice loss</p>
<h2>Segmentation v2:</h2>
<p>Model: efficientnet e5-FPN
Loss: SymmetricLovaszLoss+dice loss</p>
<h2>Classification:</h2>
<p>> We tried some pure classifiers, but the dice improvement on the OOF predictions of our segmentation was limited, so we turned to multi-task learning: a segmentation model with an FC head.</p>
<p>Model: efficientnet e5-fpn, se50-unet, se50-fpn
Loss: 0.1 * bce (classification) + (bce + lovasz + dice) (segmentation)</p>
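<p>A minimal sketch of that combined loss under my own naming; the Lovasz term is assumed to come from an external implementation (e.g. the commonly used Lovasz-hinge code) and is passed in as a callable:</p>
<pre><code>import torch
import torch.nn as nn

def soft_dice_loss(logits, targets, eps=1.0):
    # soft dice on sigmoid probabilities, averaged over batch and channels
    probs = torch.sigmoid(logits)
    dims = (2, 3)
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

class MultiTaskLoss(nn.Module):
    # 0.1 * BCE(classification) + (BCE + Lovasz + Dice)(segmentation)
    def __init__(self, lovasz_fn, cls_weight=0.1):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.lovasz_fn = lovasz_fn    # external Lovasz loss implementation (assumed)
        self.cls_weight = cls_weight

    def forward(self, seg_logits, cls_logits, seg_targets, cls_targets):
        cls_loss = self.bce(cls_logits, cls_targets)
        seg_loss = (self.bce(seg_logits, seg_targets)
                    + self.lovasz_fn(seg_logits, seg_targets)
                    + soft_dice_loss(seg_logits, seg_targets))
        return self.cls_weight * cls_loss + seg_loss
</code></pre>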
<h2>Ensemble:</h2>
<h3>v1:</h3>
<p>We averaged the probabilities from the classification models to remove false positives and used Segmentation v1 for the true positives, which gives around 0.670 OOF CV. The classification threshold is [0.65, 0.65, 0.65, 0.65]; we then remove small masks (size thresholds [21000, 21000, 21000, 10000]), which finally gives 0.6783 LB.</p>
<h3>v2:.</h3>
<p>We averaged the probability from classification and the max pixel probability from Segmentation v2 to remove false positives, but the LB was bad and the threshold was low (0.55; a low threshold was not good in Steel), so we abandoned this.</p>
<h2>Post processing:</h2>
<ol>
<li>In the training set, every image has at least one label. So from my 0.6783 sub above, I took the samples where all 4 predicted channels were empty, looked at the classifier's maximum prediction probability for each such sample, and restored the pixel mask for a category if its predicted probability was > 0.55 (see the sketch after this list).</li>
<li>I took the mask union for samples that both Segmentation v2 and the 0.6783 sub predicted as positive.
Combined with the post-processing above, we got 0.6800 LB.</li>
</ol>
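<p>A rough sketch of post-processing step 1 (names and data layout are my assumptions, not the team's code):</p>
<pre><code>import numpy as np

def restore_all_empty(masks, cls_probs, seg_probs, pixel_thr=0.5, restore_thr=0.55):
    # masks: (N, 4, H, W) binary submission masks
    # cls_probs: (N, 4) classifier probabilities
    # seg_probs: (N, 4, H, W) segmentation probabilities
    all_empty = masks.sum(axis=(1, 2, 3)) == 0     # samples where every channel is empty
    for i in np.where(all_empty)[0]:
        c = cls_probs[i].argmax()                  # most confident category for this sample
        if cls_probs[i, c] &gt; restore_thr:
            masks[i, c] = (seg_probs[i, c] &gt; pixel_thr).astype(np.uint8)
    return masks
</code></pre>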
<h1>Conclusion</h1>
<ol>
<li>Unfortunately, we did not select our best submission (0.67254), which came from ensemble v2; its public LB was 0.67360, so we did not choose it, and of course we had a lot of submissions in the top 3. Fortunately, the submission we did choose still got us into the gold zone.</li>
<li>Thanks to my teammates for their efforts and I congratulate myself on becoming GM.</li>
</ol>
<p>Thanks Chris. For Segmentation v2, I just used positive masks, but our best score did not use that.</p>
<p>Oh, thanks, <a href="/limerobot">@limerobot</a>. When ASHRAE is finished, you will become a GM 👍 </p>
<p>Try updating the version and see if that helps.</p>