gianlab committed
Commit 339b4b0
1 Parent(s): 8f8d22f

Model save

README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
- value: 1.0
+ value: 0.6985040276179517
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,30 +32,12 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0000
- - Accuracy: 1.0
+ - Loss: 0.7308
+ - Accuracy: 0.6985

 ## Model description

- This model was created by importing the dataset of the photos of ECG image into Google Colab from kaggle here: https://www.kaggle.com/datasets/erhmrai/ecg-image-data/data . I then used the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb
-
- obtaining the following notebook:
-
- https://colab.research.google.com/drive/1KC6twirtsc7N1kmlwY3IQKVUmSuK7zlh?usp=sharing
-
- The possible classified data are:
- <ul>
- <li>N: Normal beat</li>
- <li>S: Supraventricular premature beat</li>
- <li>V: Premature ventricular contraction</li>
- <li>F: Fusion of ventricular and normal beat</li>
- <li>Q: Unclassifiable beat</li>
- <li>M: myocardial infarction</li>
- </ul>
-
- ### ECG example:
-
- ![Screenshot](N1.png)
+ More information needed

 ## Intended uses & limitations

@@ -85,12 +67,12 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 0.0361 | 1.0 | 697 | 0.0000 | 1.0 |
+ | 0.7715 | 1.0 | 183 | 0.7308 | 0.6985 |


 ### Framework versions

- - Transformers 4.34.0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.5
- - Tokenizers 0.14.1
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
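For context, a minimal sketch of running inference with a fine-tuned Swin image-classification checkpoint like the one described in the card above, using the `transformers` pipeline. The repository id and the input file name are placeholders, not values taken from this commit.

```python
# Minimal sketch (not from this commit): inference with a fine-tuned Swin
# image-classification checkpoint via the transformers pipeline.
# "your-username/your-swin-checkpoint" and "ecg_beat.png" are placeholders.
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    task="image-classification",
    model="your-username/your-swin-checkpoint",  # hypothetical repo id or local path
)

image = Image.open("ecg_beat.png")  # hypothetical input image
for prediction in classifier(image):
    # Each prediction is a dict like {"label": "...", "score": 0.97}
    print(prediction["label"], round(prediction["score"], 4))
```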
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "epoch": 1.0,
- "eval_accuracy": 1.0,
- "eval_loss": 1.7099075932947017e-07,
- "eval_runtime": 90.1655,
- "eval_samples_per_second": 110.02,
- "eval_steps_per_second": 3.438
+ "eval_accuracy": 0.6985040276179517,
+ "eval_loss": 0.7308106422424316,
+ "eval_runtime": 65.3069,
+ "eval_samples_per_second": 39.919,
+ "eval_steps_per_second": 1.256
 }
eval_results.json CHANGED
@@ -1,8 +1,8 @@
 {
     "epoch": 1.0,
- "eval_accuracy": 1.0,
- "eval_loss": 1.7099075932947017e-07,
- "eval_runtime": 90.1655,
- "eval_samples_per_second": 110.02,
- "eval_steps_per_second": 3.438
+ "eval_accuracy": 0.6985040276179517,
+ "eval_loss": 0.7308106422424316,
+ "eval_runtime": 65.3069,
+ "eval_samples_per_second": 39.919,
+ "eval_steps_per_second": 1.256
 }
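Both JSON files above follow the layout the Hugging Face `Trainer` typically writes via `trainer.save_metrics("eval", metrics)` (eval_results.json for the evaluation split, all_results.json as the combined file). A small sketch of reading the metrics back, assuming the repository root as the working directory:

```python
# Sketch: reading the evaluation metrics changed in this commit.
# Paths assume the repository root as the working directory.
import json

with open("eval_results.json") as f:
    eval_metrics = json.load(f)

print(eval_metrics["eval_accuracy"])  # 0.6985040276179517 after this commit
print(eval_metrics["eval_loss"])      # 0.7308106422424316 after this commit
```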
runs/Jan15_05-27-24_754e893eb757/events.out.tfevents.1705297400.754e893eb757.393.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d586d6aad65e55b48b92a2c4ca524fb66ccb0f12143fdc674bf0f8b6fb97863a
+ size 411
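The three added lines are a Git LFS pointer rather than the TensorBoard log itself; the actual `events.out.tfevents.*` payload lives in LFS storage and is fetched with `git lfs pull`. As a hedged sketch, such a log can then be inspected locally with TensorBoard's `EventAccumulator`; the scalar tag names depend on the training run and are only examples here.

```python
# Sketch (assumed tag names): inspecting a TensorBoard event file after
# fetching the LFS object (e.g. with `git lfs pull`).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "runs/Jan15_05-27-24_754e893eb757"  # directory containing the event file
accumulator = EventAccumulator(run_dir)
accumulator.Reload()  # parse the event file(s) found in run_dir

print(accumulator.Tags()["scalars"])  # logged scalar tags, e.g. "eval/accuracy"
```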