import streamlit as st
import pandas as pd
from backend.util import import_fig, fastai_model, plot


def main():
    st.title("Classifying pet breeds with fastai")
    st.markdown(
        """ 

        Hi! This is the demo for the [espejelomar/fastai-pet-breeds-classification](https://huggingface.co./espejelomar/fastai-pet-breeds-classification) fastai model, created following the **amazing work of Jeremy Howard and Sylvain Gugger 🤗**.

        ## Pet breeds classification model
        A model fine-tuned on The Oxford-IIIT Pet Dataset. The dataset was introduced in
        [this paper](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/) and first released on
        [this webpage](https://www.robots.ox.ac.uk/~vgg/data/pets/).
        The pretrained model was trained on the ImageNet dataset, which has over 1.2 million images across 1,000 different classes. ImageNet was introduced in [this paper](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) and is available on [this webpage](https://image-net.org/download.php).
        
        
        Disclaimer: the model was fine-tuned following [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook) by Jeremy Howard and Sylvain Gugger.

        """
    )

    image = import_fig()
    outputs_df = fastai_model(image)

    if outputs_df is not None:
        # st.dataframe(outputs_df)
        st.markdown(
            """### Congratulations!!! Most likely you have uploaded a photo of a..."""
        )
        plot(outputs_df)

    st.markdown(
        """
        ## Model description
        The model was fine-tuned using the `cnn_learner` method of the fastai library with a Resnet 34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the underlying operations. `cnn_learner` automatically gets a pretrained model for a given architecture and attaches a custom head suitable for the target data.
        Resnet34 is a 34-layer convolutional neural network. It uses residual (skip) connections that add each block's input to its output, so each block only has to learn a correction to the identity, which eases the training of deep networks.
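        The residual idea can be sketched in plain Python (a toy illustration of the skip connection, not the actual Resnet34 layers):

```python
# Toy illustration of a residual (skip) connection: the block's output is
# its transformation of the input *plus* the input itself.
def residual_block(x, transform):
    return [t + xi for t, xi in zip(transform(x), x)]

# A block that would otherwise have to learn the identity mapping can now
# simply learn to output zeros.
zero_transform = lambda x: [0.0 for _ in x]
print(residual_block([1.0, 2.0, 3.0], zero_transform))  # → [1.0, 2.0, 3.0]
```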
        Specifically, the model was obtained with:
        ```python
        learn = cnn_learner(dls, resnet34, metrics=error_rate)
        learn.fine_tune(2)
        ```
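        Under the hood, `fine_tune` first trains only the newly added head while the pretrained body stays frozen, then unfreezes everything and trains the whole network. A minimal mock of that schedule (the names here are illustrative, not the fastai API):

```python
# Simplified mock of fastai's fine_tune schedule: one frozen warm-up
# epoch for the new head, then the requested epochs on all layers.
class MockLearner:
    def __init__(self):
        self.log = []
        self.frozen = True

    def freeze(self):
        self.frozen = True

    def unfreeze(self):
        self.frozen = False

    def fit(self, epochs):
        part = "head only" if self.frozen else "all layers"
        self.log.append((part, epochs))

    def fine_tune(self, epochs, freeze_epochs=1):
        self.freeze()
        self.fit(freeze_epochs)   # warm up the randomly initialised head
        self.unfreeze()
        self.fit(epochs)          # then train the full network

learn = MockLearner()
learn.fine_tune(2)
print(learn.log)  # → [('head only', 1), ('all layers', 2)]
```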

        ## Training data
        The Resnet34 model was pretrained on [ImageNet](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf), a dataset with over 1.2 million images across 1,000 different classes, and fine-tuned on [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).
        ## Preprocessing
        For more detailed information on the preprocessing procedure, refer to [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook).
        Two main strategies are followed to presize the images:
        - Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
        - Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
        "The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
        In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." ([Howard and Gugger, 2020](https://github.com/fastai/fastbook))
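        The random square crop described above can be sketched as follows (an illustration of the idea, not fastai's implementation): the crop side equals the image's smaller dimension, and the crop slides randomly along the longer axis.

```python
import random

# Sketch of the first presizing step: a square crop whose side covers the
# full shorter edge, placed at a random offset along the longer axis.
def random_square_crop(width, height, rng=random):
    side = min(width, height)              # cover the entire shorter edge
    left = rng.randint(0, width - side)    # random offset (0 if width == side)
    top = rng.randint(0, height - side)    # random offset (0 if height == side)
    return left, top, left + side, top + side  # (x0, y0, x1, y1) crop box

# For a 640x480 image the crop is always a 480x480 square spanning the
# full height, shifted randomly along the width.
box = random_square_crop(640, 480)
```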

        ### BibTeX entry and citation info
        ```bibtex
        @book{howard2020deep,
          author    = {Howard, J. and Gugger, S.},
          title     = {Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
          isbn      = {9781492045526},
          year      = {2020},
          url       = {https://books.google.no/books?id=xd6LxgEACAAJ},
          publisher = {O'Reilly Media, Incorporated},
        }
        ```

        """
    )


if __name__ == "__main__":
    main()