---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: iraqi_foods102
  results: []
datasets:
- Falah/food102-iraqi-rice-meal
language:
- en
author: Falah G. Salieh
location: Iraq, Baghdad
---

iraqi_foods102

This model is a fine-tuned version of microsoft/swin-base-patch4-window7-224 on the Falah/food102-iraqi-rice-meal dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the results):

  • Loss: 0.5399
  • Accuracy: 0.8548
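
The sketch below uses the standard transformers image-classification pipeline; the repository id Falah/iraqi_foods102 is inferred from the model name and author, and the image path is a placeholder, so treat both as assumptions.

```python
from transformers import pipeline

# Load the fine-tuned Swin checkpoint as an image-classification pipeline.
# "Falah/iraqi_foods102" is an assumed repository id.
classifier = pipeline("image-classification", model="Falah/iraqi_foods102")

# "my_food_photo.jpg" is a placeholder path; substitute any food image.
for prediction in classifier("my_food_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```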

Dataset for Food-102 (Food-101 + Iraqi rice meal)

Dataset Name: Food-102

Dataset Summary: Food-102 is an updated version of the Food-101 dataset, now expanded to include 102 food categories. It consists of a total of 102,000 images, with 750 training images and 250 manually reviewed test images provided for each category. The dataset aims to enable food classification tasks and provide a diverse range of food images for research and development purposes. The training images in Food-102 have intentionally not been cleaned, allowing for some level of noise, such as intense colors and occasional mislabeled images. All images in the dataset have been rescaled to have a maximum side length of 512 pixels.

Additional Information:

  • Number of Categories: 102
  • Total Images: 101,100
  • Total Training Images: 75,825
  • Total Test Images: 25,275
  • Image Noise: The training images may contain some noise, including intense colors and occasional mislabeled images.
  • Image Rescaling: All images in the dataset have been resized to have a maximum side length of 512 pixels.

Note:

The newly added category, "Iraqi rice meal", is not part of the original Food-101 dataset. For further details, see the dataset card at Falah/food102-iraqi-rice-meal.
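
That dataset can be pulled with the datasets library; the sketch below is a minimal example, assuming the default configuration and a "train" split (the split and column names are assumptions and may differ on the Hub).

```python
from datasets import load_dataset

# Dataset id taken from the card metadata; split and column names are assumptions.
ds = load_dataset("Falah/food102-iraqi-rice-meal")

print(ds)                 # show available splits and their sizes
sample = ds["train"][0]   # assumes a "train" split exists
print(sample.keys())      # typically an image column plus a class label
```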

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 64
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
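
The sketch below is a rough mapping of those hyperparameters onto transformers TrainingArguments; the output directory and evaluation strategy are assumptions, and the Adam betas/epsilon listed above are the library defaults.

```python
from transformers import TrainingArguments

# A sketch matching the reported hyperparameters.
# 16 per-device train batch x 8 accumulation steps = effective batch size 128.
training_args = TrainingArguments(
    output_dir="iraqi_foods102",     # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed-precision training
    evaluation_strategy="epoch",     # assumed: matches the per-epoch results table
)
```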

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1273        | 1.0   | 592  | 0.7230          | 0.8165   |
| 0.7414        | 2.0   | 1185 | 0.5696          | 0.8478   |
| 0.5882        | 3.0   | 1776 | 0.5399          | 0.8548   |

Framework versions

  • Transformers 4.27.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.9.0
  • Tokenizers 0.13.3

Citation

If you use this model in your research, please cite this repository and the Falah/food102-iraqi-rice-meal dataset.
