---
tags:
- Instance Segmentation
- Vision Transformers
- CNN
- Optical Space Missions
- Artefact Mapping
- ESA

pretty_name: XAMI-model
license: mit
datasets:
- iulia-elisa/XAMI-dataset
---

<div align="center">
<h1> XAMI-model: XMM-Newton optical Artefact Mapping for astronomical Instance segmentation </h1>
</div>

🚀 Check the **[XAMI model](https://github.com/ESA-Datalabs/XAMI-model)** on GitHub.

<!-- ## Model Checkpoints

| Model Name | Link |
| :---: | :---: |
| YOLOv8 |[yolov8_segm](https://huggingface.co./iulia-elisa/XAMI/blob/main/yolo_weights/best.pt) |
| MobileSAM |[sam_vit](https://huggingface.co./iulia-elisa/XAMI/blob/main/sam_weights/sam_0_best.pth) |
| XAMI |[xami_model](https://huggingface.co./iulia-elisa/XAMI/blob/main/yolo_sam_final.pth) |
 -->
## 💫 Introduction
The code uses images from the XAMI dataset (available on [GitHub](https://github.com/ESA-Datalabs/XAMI-dataset) and [HuggingFace🤗](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset)). The images represent observations from XMM-Newton's Optical Monitor (XMM-OM). Information about the XMM-OM can be found here:

- XMM-OM User's Handbook: https://www.mssl.ucl.ac.uk/www_xmm/ukos/onlines/uhb/XMM_UHB/node1.html
- Technical details: https://www.cosmos.esa.int/web/xmm-newton/technical-details-om
- The XMM-OM instrument paper: https://ui.adsabs.harvard.edu/abs/2001A%26A...365L..36M/abstract

## 📂 Cloning the repository

```bash
git clone https://github.com/ESA-Datalabs/XAMI-model.git
cd XAMI-model

# creating the environment
conda env create -f environment.yaml
conda activate xami_model_env
```
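
As a quick sanity check after activating the environment, a minimal sketch (assuming PyTorch is part of `environment.yaml`, which the GPU inference below relies on):

```python
# verify that PyTorch resolves and a CUDA device is visible
import torch

print('CUDA available:', torch.cuda.is_available())
```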

## 📊 Downloading the dataset and model checkpoints from HuggingFace🤗

Check [dataset_and_model.ipynb](https://github.com/ESA-Datalabs/XAMI-model/blob/main/dataset_and_model.ipynb) to download the dataset and model weights.
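
If you prefer a programmatic download over the notebook, a minimal sketch using `huggingface_hub` (the archive name `xami_dataset.zip` comes from the dataset repository; unpacking into the current directory is an assumption):

```python
# fetch and unpack the XAMI dataset archive from HuggingFace
import zipfile
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id='iulia-elisa/XAMI-dataset',
    filename='xami_dataset.zip',
    repo_type='dataset')

with zipfile.ZipFile(archive) as zf:
    zf.extractall('.')  # unpack next to the repository
```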

The dataset is split into train and validation sets and contains annotated artefacts in COCO format for Instance Segmentation. We use multilabel stratified k-fold (k=4) to balance class distributions across splits. We work with a single split version (out of four) but also provide the means to work with all four versions.
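
For illustration, such a split can be produced with the `iterative-stratification` package; this is a sketch, not the exact script used, and the label matrix below is hypothetical:

```python
# balance per-image artefact class distributions across k=4 folds
import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

n_images, n_classes = 100, 5
X = np.zeros((n_images, 1))                              # placeholder features; only the row count matters
Y = np.random.randint(0, 2, size=(n_images, n_classes))  # which artefact classes appear in each image

mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(mskf.split(X, Y)):
    print(f'fold {fold}: {len(train_idx)} train / {len(val_idx)} val images')
```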

To better understand our dataset structure, please check [Datasets-Structure.md](https://github.com/ESA-Datalabs/XAMI-dataset/blob/main/Datasets-Structure.md) for more details. We provide the following dataset formats: COCO format for Instance Segmentation (commonly used by [Detectron2](https://github.com/facebookresearch/detectron2) models) and YOLOv8-Seg format used by [ultralytics](https://github.com/ultralytics/ultralytics).
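
As an example of reading the COCO-format annotations with `pycocotools`, a sketch in which the annotation path is an assumption (see Datasets-Structure.md for the actual layout):

```python
# inspect the COCO instance-segmentation annotations
from pycocotools.coco import COCO

coco = COCO('xami_dataset/train/_annotations.coco.json')  # hypothetical path

print('categories:', [c['name'] for c in coco.loadCats(coco.getCatIds())])
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
print(f'image {img_id} has {len(anns)} annotated artefacts')
```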

<!-- 1. **Downloading** the dataset archive from [HuggingFace](https://huggingface.co./datasets/iulia-elisa/XAMI-dataset/blob/main/xami_dataset.zip).

```bash
DEST_DIR='.' # destination folder for the dataset (should usually be set to current directory)

huggingface-cli download iulia-elisa/XAMI-dataset xami_dataset.zip --repo-type dataset --local-dir "$DEST_DIR" && unzip "$DEST_DIR/xami_dataset.zip" -d "$DEST_DIR" && rm "$DEST_DIR/xami_dataset.zip"
``` -->

## 💡 Model Inference

After cloning the repository and setting up the environment, use the following Python code for model loading and inference:

```python
from inference.xami_inference import Xami

detr_checkpoint = './train/weights/yolo_weights/yolov8_detect_300e_best.pt'
sam_checkpoint = './train/weights/sam_weights/sam_0_best.pth'

# the SAM checkpoint and model_type (vit_h, vit_t, etc.) must be compatible
detr_sam_pipeline = Xami(
    device='cuda:0',
    detr_checkpoint=detr_checkpoint, # YOLO(detr_checkpoint)
    sam_checkpoint=sam_checkpoint,
    model_type='vit_t',
    use_detr_masks=True)

# prediction example
masks = detr_sam_pipeline.run_predict(
    './example_images/S0743200101_V.jpg',
    yolo_conf=0.2,    # detection confidence threshold
    show_masks=True)  # plot the predicted masks
```
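
To process a folder of images with the same pipeline, a small loop along these lines should work (assuming `run_predict` accepts one image path per call, as above):

```python
# batch inference over all example images
from pathlib import Path

for image_path in sorted(Path('./example_images').glob('*.jpg')):
    masks = detr_sam_pipeline.run_predict(
        str(image_path),
        yolo_conf=0.2,
        show_masks=False)  # disable plotting for batch runs
```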

## 🚀 Training the model

Check the training [README.md](https://github.com/ESA-Datalabs/XAMI-model/blob/main/train/README.md).

## © License

This project is licensed under [MIT license](LICENSE).