bibtex_url (null) | proceedings (stringlengths, 58 to 58) | bibtext (stringlengths, 511 to 974) | abstract (stringlengths, 92 to 2k) | title (stringlengths, 30 to 207) | authors (sequencelengths, 1 to 22) | id (stringclasses, 1 value) | arxiv_id (stringlengths, 0 to 10) | GitHub (sequencelengths, 1 to 1) | paper_page (stringclasses, 14 values) | n_linked_authors (int64, -1 to 1) | upvotes (int64, -1 to 1) | num_comments (int64, -1 to 0) | n_authors (int64, -1 to 10) | Models (sequencelengths, 0 to 4) | Datasets (sequencelengths, 0 to 1) | Spaces (sequencelengths, 0 to 0) | old_Models (sequencelengths, 0 to 4) | old_Datasets (sequencelengths, 0 to 1) | old_Spaces (sequencelengths, 0 to 0) | paper_page_exists_pre_conf (int64, 0 to 1) | type (stringclasses, 2 values) | unique_id (int64, 0 to 855) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://papers.miccai.org/miccai-2024/paper/2009_paper.pdf | @InProceedings{ Zha_Disease_MICCAI2024,
author = { Zhang, Jin and Shang, Muheng and Yang, Yan and Guo, Lei and Han, Junwei and Du, Lei },
title = { { Disease Progression Prediction Incorporating Genotype-Environment Interactions: A Longitudinal Neurodegenerative Disorder Study } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Disease progression prediction is a fundamental yet challenging task in neurodegenerative disorders. Despite extensive research endeavors, disease progression fitting on brain imaging data alone may yield suboptimal performance due to the effect of potential interactions between genetic variations, proteomic expressions, and environmental exposures on the disease progression. To fill this gap, we draw on the idea of mutual-assistance (MA) learning and accordingly propose a fresh and powerful scheme, referred to as Mutual-Assistance Disease Progression fitting and Genotype-by-Environment interaction identification approach (MA-DPxGE). Specifically, our model jointly performs disease progression fitting using longitudinal imaging phenotypes and identification of genotype-by-environment interaction factors. To ensure stability and interpretability, we employ innovative penalties to discern significant risk factors. Moreover, we meticulously design adaptive mechanisms for loss-term reweighting, ensuring fair adjustments for each prediction task. Furthermore, due to high-dimensional genotype-by-environment interactions, we devise a rapid and efficient strategy to reduce runtime, ensuring practical availability and applicability. Experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset reveal that MA-DPxGE demonstrates superior performance compared to state-of-the-art approaches while maintaining exceptional interpretability. This outcome is pivotal in elucidating disease progression patterns and establishing effective strategies to mitigate or halt disease advancement. | Disease Progression Prediction Incorporating Genotype-Environment Interactions: A Longitudinal Neurodegenerative Disorder Study | [
"Zhang, Jin",
"Shang, Muheng",
"Yang, Yan",
"Guo, Lei",
"Han, Junwei",
"Du, Lei"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 0 |
||
null | https://papers.miccai.org/miccai-2024/paper/1241_paper.pdf | @InProceedings{ Jeo_BrainWaveNet_MICCAI2024,
author = { Jeong, Ah-Yeong and Heo, Da-Woon and Kang, Eunsong and Suk, Heung-Il },
title = { { BrainWaveNet: Wavelet-based Transformer for Autism Spectrum Disorder Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The diagnosis of Autism Spectrum Disorder (ASD) using resting-state functional Magnetic Resonance Imaging (rs-fMRI) is commonly analyzed through functional connectivity (FC) between Regions of Interest (ROIs) in the time domain. However, the time domain has limitations in capturing global information. To overcome this problem, we propose a wavelet-based Transformer, BrainWaveNet, that leverages the frequency domain and learns spatial-temporal information for rs-fMRI brain diagnosis. Specifically, BrainWaveNet learns inter-relations between two different frequency-based features (real and imaginary parts) by cross-attention mechanisms, which allows for a deeper exploration of ASD. In our experiments using the ABIDE dataset, we validated the superiority of BrainWaveNet by comparing it with competing deep learning methods. Furthermore, we analyzed significant regions of ASD for neurological interpretation. | BrainWaveNet: Wavelet-based Transformer for Autism Spectrum Disorder Diagnosis | [
"Jeong, Ah-Yeong",
"Heo, Da-Woon",
"Kang, Eunsong",
"Suk, Heung-Il"
] | Conference | [
"https://github.com/ku-milab/BrainWaveNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 1 |
||
null | https://papers.miccai.org/miccai-2024/paper/2115_paper.pdf | @InProceedings{ Kho_Unified_MICCAI2024,
author = { Khor, Hee Guan and Yang, Xin and Sun, Yihua and Wang, Jie and Huang, Sijuan and Wang, Shaobin and Lu, Bai and Ma, Longfei and Liao, Hongen },
title = { { Unified Prompt-Visual Interactive Segmentation of Clinical Target Volume in CT for Nasopharyngeal Carcinoma with Prior Anatomical Information } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | The delineation of the Clinical Target Volume (CTV) is a crucial step in the radiotherapy (RT) planning process for patients with nasopharyngeal carcinoma (NPC). However, manual delineation is labor-intensive, and automatic CTV contouring for NPC is difficult due to the nasopharyngeal complexity, tumor variability, and judgement-based criteria. To address the above-mentioned problems, we introduce SAM-RT, the first large vision model (LVM) designed for CTV contouring in NPC. Given the anatomical dependency required for CTV contouring—which encapsulates the Gross Tumor Volume (GTV) while minimizing exposure to Organs-at-Risk (OAR)—our approach begins with the fine-tuning of the Segment Anything Model (SAM), using a Low-Rank Adaptation (LoRA) strategy for segmenting GTV and OAR across multi-center and multi-modality datasets. This step ensures SAM-RT initially integrates with anatomical prior knowledge for CTV contouring. To optimize the use of previously acquired knowledge, we introduce Sequential LoRA (SeqLoRA) to improve knowledge retention in SAM-RT during the fine-tuning for CTV contouring. We further introduce the Prompt-Visual Cross Merging Attention (ProViCMA) for enhanced image and prompt interaction, and the Gate-Regulated Prompt Adjustment (GaRPA) strategy, utilizing learnable gates to direct prompts for effective CTV task adaptation. Efficient utilization of knowledge across relevant datasets is essential due to sparse labeling of medical images for specific tasks. To achieve this, SAM-RT is trained using an information-querying approach. SAM-RT incorporates various prior knowledge: 1) Reliance of CTV on GTV and OAR, and 2) Eliciting expert knowledge in CTV contouring. Extensive quantitative and qualitative experiments validate our designs. | Unified Prompt-Visual Interactive Segmentation of Clinical Target Volume in CT for Nasopharyngeal Carcinoma with Prior Anatomical Information | [
"Khor, Hee Guan",
"Yang, Xin",
"Sun, Yihua",
"Wang, Jie",
"Huang, Sijuan",
"Wang, Shaobin",
"Lu, Bai",
"Ma, Longfei",
"Liao, Hongen"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 2 |
||
null | https://papers.miccai.org/miccai-2024/paper/3619_paper.pdf | @InProceedings{ Che_Medical_MICCAI2024,
author = { Chen, Wenting and Wang, Pengyu and Ren, Hui and Sun, Lichao and Li, Quanzheng and Yuan, Yixuan and Li, Xiang },
title = { { Medical Image Synthesis via Fine-Grained Image-Text Alignment and Anatomy-Pathology Prompting } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Data scarcity and privacy concerns limit the availability of high-quality medical images for public use, which can be mitigated through medical image synthesis. However, current medical image synthesis methods often struggle to accurately capture the complexity of detailed anatomical structures and pathological conditions. To address these challenges, we propose a novel medical image synthesis model that leverages fine-grained image-text alignment and anatomy-pathology prompts to generate highly detailed and accurate synthetic medical images. Our methodology integrates advanced natural language processing techniques with image generative modeling, enabling precise alignment between descriptive text prompts and the synthesized images’ anatomical and pathological details. The proposed approach consists of two key components: an anatomy-pathology prompting module and a fine-grained alignment-based synthesis module. The anatomy-pathology prompting module automatically generates descriptive prompts for high-quality medical images. To further synthesize high-quality medical images from the generated prompts, the fine-grained alignment-based synthesis module pre-defines a visual codebook for the radiology dataset and performs fine-grained alignment between the codebook and generated prompts to obtain key patches as visual clues, facilitating accurate image synthesis. We validate the superiority of our method through experiments on public chest X-ray datasets and demonstrate that our synthetic images preserve accurate semantic information, making them valuable for various medical applications. | Medical Image Synthesis via Fine-Grained Image-Text Alignment and Anatomy-Pathology Prompting | [
"Chen, Wenting",
"Wang, Pengyu",
"Ren, Hui",
"Sun, Lichao",
"Li, Quanzheng",
"Yuan, Yixuan",
"Li, Xiang"
] | Conference | 2403.06835 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 3 |
|
null | https://papers.miccai.org/miccai-2024/paper/0453_paper.pdf | @InProceedings{ Wen_Biophysicsbased_MICCAI2024,
author = { Wen, Zheyu and Ghafouri, Ali and Biros, George },
title = { { Biophysics-based data assimilation of longitudinal tau and amyloid-β PET scans } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Misfolded tau and amyloid-beta (Abeta) are hallmark proteins of Alzheimer’s Disease (AD). Due to their clinical significance, rich datasets that track their temporal evolution have been created. For example, ADNI has hundreds of subjects with PET imaging of both of these proteins. Interpreting and combining this data beyond statistical correlations remains a challenge. Biophysical models offer a complementary avenue to assimilating such complex data and eventually helping us better understand disease progression. To this end, we introduce a mathematical model that tracks the dynamics of four species (normal and abnormal tau and Abeta) and uses a graph to approximate their spatial coupling. The graph nodes represent gray matter regions of interest (ROI), and the edges represent tractography-based connectivity between ROIs. We model interspecies interactions, migration, proliferation, and clearance. Our biophysical model has seven unknown scalar parameters plus unknown initial conditions for tau and Abeta. Using imaging scans, we can calibrate these parameters by solving an inverse problem. The scans comprise longitudinal tau and Abeta PET scans, along with MRI for subject-specific anatomy. We propose an inversion algorithm that stably reconstructs the unknown parameters. We verify and test its numerical stability in the presence of noise using synthetic data. We discovered that the inversion is more stable when using multiple scans. Finally, we apply the overall methodology on 334 subjects from the ADNI dataset and compare it to a commonly used tau-only model calibrated by a single PET scan. We report the R2 and relative fitting error metrics. The proposed method achieves R2 = 0.82 compared to R2 = 0.64 of the tau-only single-scan reconstruction. | Biophysics-based data assimilation of longitudinal tau and amyloid-β PET scans | [
"Wen, Zheyu",
"Ghafouri, Ali",
"Biros, George"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 4 |
||
null | https://papers.miccai.org/miccai-2024/paper/0336_paper.pdf | @InProceedings{ Xue_WSSADN_MICCAI2024,
author = { Xue, Pengcheng and Nie, Dong and Zhu, Meijiao and Yang, Ming and Zhang, Han and Zhang, Daoqiang and Wen, Xuyun },
title = { { WSSADN: A Weakly Supervised Spherical Age-Disentanglement Network for Detecting Developmental Disorders with Structural MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Structural magnetic resonance imaging characterizes the morphology and anatomical features of the brain and has been widely utilized in the diagnosis of developmental disorders. Given the dynamic nature of developmental disorder progression with age, existing methods for disease detection have incorporated age as either prior knowledge to be integrated or as a confounding factor to be disentangled through supervised learning. However, the excessive focus on age information in these methods restricts their capability to unearth disease-related features, thereby affecting the subsequent disease detection performance. To address this issue, this work introduces a novel weakly supervised learning-based method, namely, the Weakly Supervised Spherical Age Disentanglement Network (WSSADN). WSSADN innovatively combines an attention-based disentangler with the Conditional Generative Adversarial Network (CGAN) to remove normal developmental information from the brain representation of the patient with developmental disorder in a weakly supervised manner. By reducing the focus on age information during the disentanglement process, the effectiveness of the extracted disease-related features is enhanced, thereby increasing the accuracy of downstream disease identification. Moreover, to ensure effective convergence of the disentanglement and age information learning modules, we design a consistency regularization loss to align the age-related features generated by the disentangler and CGAN. We evaluated our method on three different tasks, including the detection of preterm neonates, infants with congenital heart disease, and autism spectrum disorders. The experimental results demonstrate that our method significantly outperforms existing state-of-the-art methods across all tasks. | WSSADN: A Weakly Supervised Spherical Age-Disentanglement Network for Detecting Developmental Disorders with Structural MRI | [
"Xue, Pengcheng",
"Nie, Dong",
"Zhu, Meijiao",
"Yang, Ming",
"Zhang, Han",
"Zhang, Daoqiang",
"Wen, Xuyun"
] | Conference | [
"https://github.com/xuepengcheng1231/WSSADN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 5 |
||
null | https://papers.miccai.org/miccai-2024/paper/0867_paper.pdf | @InProceedings{ Cho_MedFormer_MICCAI2024,
author = { Chowdary, G. Jignesh and Yin, Zhaozheng },
title = { { Med-Former: A Transformer based Architecture for Medical Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | In recent years, transformer-based image classification methods have demonstrated remarkable effectiveness across various image classification tasks. However, their application to medical images presents challenges, especially in the feature extraction capability of the network. Additionally, these models often struggle with the efficient propagation of essential information throughout the network, hindering their performance in medical imaging tasks. To overcome these challenges, we introduce a novel framework comprising Local-Global Transformer module and Spatial Attention Fusion module, collectively referred to as Med-Former. These modules are specifically designed to enhance the feature extraction capability at both local and global levels and improve the propagation of vital information within the network. To evaluate the efficacy of our proposed Med-Former framework, we conducted experiments on three publicly available medical image datasets: NIH Chest X-ray14, DermaMNIST, and BloodMNIST. Our results demonstrate that Med-Former outperforms state-of-the-art approaches underscoring its superior generalization capability and effectiveness in medical image classification. | Med-Former: A Transformer based Architecture for Medical Image Classification | [
"Chowdary, G. Jignesh",
"Yin, Zhaozheng"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 6 |
||
null | https://papers.miccai.org/miccai-2024/paper/0556_paper.pdf | @InProceedings{ Ye_Enabling_MICCAI2024,
author = { Ye, Shuchang and Meng, Mingyuan and Li, Mingjian and Feng, Dagan and Kim, Jinman },
title = { { Enabling Text-free Inference in Language-guided Segmentation of Chest X-rays via Self-guidance } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Segmentation of infected areas in chest X-rays is pivotal for facilitating the accurate delineation of pulmonary structures and pathological anomalies. Recently, multi-modal language-guided image segmentation methods have emerged as a promising solution for chest X-rays where the clinical text reports, depicting the assessment of the images, are used as guidance. Nevertheless, existing language-guided methods require clinical reports alongside the images, and hence, they are not applicable for use in image segmentation in a decision support context, but rather limited to retrospective image analysis after clinical reporting has been completed. In this study, we propose a self-guided segmentation framework (SGSeg) that leverages language guidance for training (multi-modal) while enabling text-free inference (uni-modal), which is the first approach to enable text-free inference in language-guided segmentation. We exploit the critical location information of both pulmonary and pathological structures depicted in the text reports and introduce a novel localization-enhanced report generation (LERG) module to generate clinical reports for self-guidance. Our LERG integrates an object detector and a location-based attention aggregator, weakly-supervised by a location-aware pseudo-label extraction module. Extensive experiments on the well-benchmarked QaTa-COV19 dataset demonstrate that our SGSeg achieved performance superior to that of existing uni-modal segmentation methods and closely matched the state-of-the-art performance of multi-modal language-guided segmentation methods. | Enabling Text-free Inference in Language-guided Segmentation of Chest X-rays via Self-guidance | [
"Ye, Shuchang",
"Meng, Mingyuan",
"Li, Mingjian",
"Feng, Dagan",
"Kim, Jinman"
] | Conference | [
"https://github.com/ShuchangYe-bib/SGSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 7 |
||
null | https://papers.miccai.org/miccai-2024/paper/0531_paper.pdf | @InProceedings{ Cho_SliceConsistent_MICCAI2024,
author = { Choo, Kyobin and Jun, Youngjun and Yun, Mijin and Hwang, Seong Jae },
title = { { Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In neuroimaging, brain CT is generally a more cost-effective and accessible imaging option than MRI. Nevertheless, CT exhibits inferior soft-tissue contrast and higher noise levels, yielding less precise structural clarity. In response, leveraging more readily available CT to construct its counterpart MRI, namely, medical image-to-image translation (I2I), serves as a promising solution. Particularly, while diffusion models (DMs) have recently risen as a powerhouse, they also come with a few practical caveats for medical I2I. First, DMs’ inherent stochasticity from random noise sampling cannot guarantee consistent MRI generation that faithfully reflects its CT. Second, for 3D volumetric images which are prevalent in medical imaging, naively using 2D DMs leads to slice inconsistency, e.g., abnormal structural and brightness changes. While 3D DMs do exist, significant training costs and data dependency bring hesitation. As a solution, we propose novel style key conditioning (SKC) and inter-slice trajectory alignment (ISTA) sampling for the 2D Brownian bridge diffusion model. Specifically, SKC ensures a consistent imaging style (e.g., contrast) across slices, and ISTA interconnects the independent sampling of each slice, deterministically achieving style- and shape-consistent 3D CT-to-MRI translation. To the best of our knowledge, this study is the first to achieve high-quality 3D medical I2I based only on a 2D DM with no extra architectural models. Our experimental results show superior 3D medical I2I compared to existing 2D and 3D baselines, using an in-house CT-MRI dataset and the BraTS2023 FLAIR-T1 MRI dataset. | Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model | [
"Choo, Kyobin",
"Jun, Youngjun",
"Yun, Mijin",
"Hwang, Seong Jae"
] | Conference | 2407.05059 | [
"https://github.com/MICV-yonsei/CT2MRI"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 8 |
|
null | https://papers.miccai.org/miccai-2024/paper/1741_paper.pdf | @InProceedings{ Zha_DSCENet_MICCAI2024,
author = { Zhang, Yuan and Qi, Yaolei and Qi, Xiaoming and Wei, Yongyue and Yang, Guanyu },
title = { { DSCENet: Dynamic Screening and Clinical-Enhanced Multimodal Fusion for MPNs Subtype Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | The precise subtype classification of myeloproliferative neoplasms (MPNs) based on multimodal information, which assists clinicians in diagnosis and long-term treatment plans, is of great clinical significance. However, it remains a highly challenging task due to the lack of diagnostic representativeness for local patches and the absence of diagnostic-relevant features from a single modality. In this paper, we propose a Dynamic Screening and Clinical-Enhanced Network (DSCENet) for the subtype classification of MPNs on the multimodal fusion of whole slide images (WSIs) and clinical information. (1) A dynamic screening module is proposed to flexibly adapt the feature learning of local patches, reducing the interference of irrelevant features and enhancing their diagnostic representativeness. (2) A clinical-enhanced fusion module is proposed to integrate clinical indicators to explore complementary features across modalities, providing comprehensive diagnostic information. Our approach has been validated on real clinical data, achieving an increase of 7.91% AUC and 16.89% accuracy compared with the previous state-of-the-art (SOTA) methods. The code is available at https://github.com/yuanzhang7/DSCENet. | DSCENet: Dynamic Screening and Clinical-Enhanced Multimodal Fusion for MPNs Subtype Classification | [
"Zhang, Yuan",
"Qi, Yaolei",
"Qi, Xiaoming",
"Wei, Yongyue",
"Yang, Guanyu"
] | Conference | 2407.08167 | [
"https://github.com/yuanzhang7/DSCENet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 9 |
|
null | https://papers.miccai.org/miccai-2024/paper/0263_paper.pdf | @InProceedings{ Zeg_LaTiM_MICCAI2024,
author = { Zeghlache, Rachid and Conze, Pierre-Henri and El Habib Daho, Mostafa and Li, Yihao and Le Boité, Hugo and Tadayoni, Ramin and Massin, Pascale and Cochener, Béatrice and Rezaei, Alireza and Brahim, Ikram and Quellec, Gwenolé and Lamard, Mathieu },
title = { { LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | This work proposes a novel framework for analyzing disease progression using time-aware neural ordinary differential equations (NODE). We introduce a “time-aware head” in a framework trained through self-supervised learning (SSL) to leverage temporal information in latent space for data augmentation. This approach effectively integrates NODEs with SSL, offering significant performance improvements compared to traditional methods that lack explicit temporal integration. We demonstrate the effectiveness of our strategy for diabetic retinopathy progression prediction using the OPHDIAT database. Compared to the baseline, all NODE architectures achieve statistically significant improvements in area under the ROC curve (AUC) and Kappa metrics, highlighting the efficacy of pre-training with SSL-inspired approaches. Additionally, our framework promotes stable training for NODEs, a commonly encountered challenge in time-aware modeling. | LaTiM: Longitudinal representation learning in continuous-time models to predict disease progression | [
"Zeghlache, Rachid",
"Conze, Pierre-Henri",
"El Habib Daho, Mostafa",
"Li, Yihao",
"Le Boité, Hugo",
"Tadayoni, Ramin",
"Massin, Pascale",
"Cochener, Béatrice",
"Rezaei, Alireza",
"Brahim, Ikram",
"Quellec, Gwenolé",
"Lamard, Mathieu"
] | Conference | 2404.07091 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 10 |
|
null | https://papers.miccai.org/miccai-2024/paper/0219_paper.pdf | @InProceedings{ Zhe_ABayesian_MICCAI2024,
author = { Zheng, Zhou and Hayashi, Yuichiro and Oda, Masahiro and Kitasaka, Takayuki and Mori, Kensaku },
title = { { A Bayesian Approach to Weakly-supervised Laparoscopic Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | In this paper, we study weakly-supervised laparoscopic image segmentation with sparse annotations. We introduce a novel Bayesian deep learning approach designed to enhance both the accuracy and interpretability of the model’s segmentation, founded upon a comprehensive Bayesian framework, ensuring a robust and theoretically validated method. Our approach diverges from conventional methods that directly train using observed images and their corresponding weak annotations. Instead, we estimate the joint distribution of both images and labels given the acquired data. This facilitates the sampling of images and their high-quality pseudo-labels, enabling the training of a generalizable segmentation model. Each component of our model is expressed through probabilistic formulations, providing a coherent and interpretable structure. This probabilistic nature benefits accurate and practical learning from sparse annotations and equips our model with the ability to quantify uncertainty. Extensive evaluations with two public laparoscopic datasets demonstrated the efficacy of our method, which consistently outperformed existing methods. Furthermore, our method was adapted for scribble-supervised cardiac multi-structure segmentation, presenting competitive performance compared to previous methods. The code is available at https://github.com/MoriLabNU/Bayesian_WSS. | A Bayesian Approach to Weakly-supervised Laparoscopic Image Segmentation | [
"Zheng, Zhou",
"Hayashi, Yuichiro",
"Oda, Masahiro",
"Kitasaka, Takayuki",
"Mori, Kensaku"
] | Conference | 2410.08509 | [
"https://github.com/MoriLabNU/Bayesian_WSS"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 11 |
|
null | https://papers.miccai.org/miccai-2024/paper/0470_paper.pdf | @InProceedings{ Li_Endora_MICCAI2024,
author = { Li, Chenxin and Liu, Hengyu and Liu, Yifan and Feng, Brandon Y. and Li, Wuyang and Liu, Xinyu and Chen, Zhen and Shao, Jing and Yuan, Yixuan },
title = { { Endora: Video Generation Models as Endoscopy Simulators } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Generative models hold promise for revolutionizing medical education, robot-assisted surgery, and data augmentation for machine learning. Despite progress in generating 2D medical images, the complex domain of clinical video generation has largely remained untapped. This paper introduces Endora, an innovative approach to generate medical videos to simulate clinical endoscopy scenes. We present a novel generative model design that integrates a meticulously crafted video transformer with advanced 2D vision foundation model priors, explicitly modeling spatial-temporal dynamics during video generation. We also pioneer the first public benchmark for endoscopy simulation with video generation models, adapting existing state-of-the-art methods for this endeavor. Endora demonstrates exceptional visual quality in generating endoscopy videos, surpassing state-of-the-art methods in extensive testing. Moreover, we explore how this endoscopy simulator can empower downstream video analysis tasks and even generate 3D medical scenes with multi-view consistency. In a nutshell, Endora marks a notable breakthrough in the deployment of generative AI for clinical endoscopy research, setting a substantial stage for further advances in medical content generation. Project page: https://endora-medvidgen.github.io/. | Endora: Video Generation Models as Endoscopy Simulators | [
"Li, Chenxin",
"Liu, Hengyu",
"Liu, Yifan",
"Feng, Brandon Y.",
"Li, Wuyang",
"Liu, Xinyu",
"Chen, Zhen",
"Shao, Jing",
"Yuan, Yixuan"
] | Conference | 2403.11050 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 12 |
|
null | https://papers.miccai.org/miccai-2024/paper/2441_paper.pdf | @InProceedings{ Tei_Towards_MICCAI2024,
author = { Teichmann, Marvin Tom and Datar, Manasi and Kratzke, Lisa and Vega, Fernando and Ghesu, Florin C. },
title = { { Towards Integrating Epistemic Uncertainty Estimation into the Radiotherapy Workflow } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | The precision of contouring target structures and organs-at-risk (OAR) in radiotherapy planning is crucial for ensuring treatment efficacy and patient safety. Recent advancements in deep learning (DL) have significantly improved OAR contouring performance, yet the reliability of these models, especially in the presence of out-of-distribution (OOD) scenarios, remains a concern in clinical settings. This application study explores the integration of epistemic uncertainty estimation within the OAR contouring workflow to enable OOD detection in clinically relevant scenarios, using specifically compiled data. Furthermore, we introduce an advanced statistical method for OOD detection to enhance the methodological framework of uncertainty estimation. Our empirical evaluation demonstrates that epistemic uncertainty estimation is effective in identifying instances where model predictions are unreliable and may require an expert review. Notably, our approach achieves an AUC-ROC of 0.95 for OOD detection, with a specificity of 0.95 and a sensitivity of 0.92 for implant cases, underscoring its efficacy. This study addresses significant gaps in the current research landscape, such as the lack of ground truth for uncertainty estimation and limited empirical evaluations. Additionally, it provides a clinically relevant application of epistemic uncertainty estimation in an FDA-approved and widely used clinical solution for OAR segmentation from Varian, a Siemens Healthineers company, highlighting its practical benefits. | Towards Integrating Epistemic Uncertainty Estimation into the Radiotherapy Workflow | [
"Teichmann, Marvin Tom",
"Datar, Manasi",
"Kratzke, Lisa",
"Vega, Fernando",
"Ghesu, Florin C."
] | Conference | 2409.18628 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 13 |
|
null | https://papers.miccai.org/miccai-2024/paper/2916_paper.pdf | @InProceedings{ Maa_CoReEcho_MICCAI2024,
author = { Maani, Fadillah Adamsyah and Saeed, Numan and Matsun, Aleksandr and Yaqub, Mohammad },
title = { { CoReEcho: Continuous Representation Learning for 2D+time Echocardiography Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Deep learning (DL) models have been advancing automatic medical image analysis on various modalities, including echocardiography, by offering a comprehensive end-to-end training pipeline. This approach enables DL models to regress ejection fraction (EF) directly from 2D+time echocardiograms, resulting in superior performance. However, the end-to-end training pipeline makes the learned representations less explainable. The representations may also fail to capture the continuous relation among echocardiogram clips, indicating the existence of spurious correlations, which can negatively affect the generalization. To mitigate this issue, we propose CoReEcho, a novel training framework emphasizing continuous representations tailored for direct EF regression. Our extensive experiments demonstrate that CoReEcho: 1) outperforms the current state-of-the-art (SOTA) on the largest echocardiography dataset (EchoNet-Dynamic) with MAE of 3.90 & R2 of 82.44, and 2) provides robust and generalizable features that transfer more effectively in related downstream tasks. The code is publicly available at https://github.com/BioMedIA-MBZUAI/CoReEcho. | CoReEcho: Continuous Representation Learning for 2D+time Echocardiography Analysis | [
"Maani, Fadillah Adamsyah",
"Saeed, Numan",
"Matsun, Aleksandr",
"Yaqub, Mohammad"
] | Conference | 2403.10164 | [
"https://github.com/BioMedIA-MBZUAI/CoReEcho"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 14 |
|
null | https://papers.miccai.org/miccai-2024/paper/4161_paper.pdf | @InProceedings{ Ina_FewShot_MICCAI2024,
author = { Inayat, Sumayya and Dilawar, Nimra and Sultani, Waqas and Ali, Mohsen },
title = { { Few-Shot Domain Adaptive Object Detection for Microscopic Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Currently, unsupervised domain adaptive strategies proposed to overcome domain shift are handicapped by the requirement of large amounts of target data. On the other hand, medical imaging problems and datasets are often characterized not only by scarcity of labeled and unlabeled data but also class imbalance. Few-shot domain adaptive object detection (FSDAOD) addresses the challenge of adapting object detectors to target domains with limited labeled data. However, existing FSDAOD works struggle with randomly selected target domain images which might not represent the target distribution, resulting in overfitting and poor generalization. We propose a novel FSDAOD strategy for microscopic imaging to tackle high class imbalance and localization errors due to foreground-background similarity. Our contributions include a domain-adaptive class balancing strategy for the few-shot scenario and label-dependent cross-domain feature alignment. Specifically, multi-layer instance-level inter- and intra-domain feature alignment is performed by enhancing similarity between the instances of classes regardless of the domain and increasing dissimilarity between instances of different classes. In order to retain the features necessary for localizing and detecting minute texture variations in microscopic objects across the domain, the classification loss is applied at the feature map before the detection head. Extensive experimental results with competitive baselines indicate the effectiveness of our proposed approach, achieving state-of-the-art results on two public microscopic datasets, M5 [12] and Raabin-WBC [10]. Our method outperformed competing approaches on both datasets, increasing average mAP@50 by 8.3 points and 14.6 points, respectively. The project page is available here. | Few-Shot Domain Adaptive Object Detection for Microscopic Images | [
"Inayat, Sumayya",
"Dilawar, Nimra",
"Sultani, Waqas",
"Ali, Mohsen"
] | Conference | 2407.07633 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 15 |
|
null | https://papers.miccai.org/miccai-2024/paper/4098_paper.pdf | @InProceedings{ Zha_ANew_MICCAI2024,
author = { Zhang, Jiansong and Wu, Shengnan and Liu, Peizhong and Shen, Linlin },
title = { { A New Dataset and Baseline Model for Rectal Cancer Risk Assessment in Endoscopic Ultrasound Videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Early diagnosis of rectal cancer is essential to improve patient survival. Existing diagnostic methods mainly rely on complex MRI as well as pathology-level co-diagnosis. In contrast, in this paper, we collect and annotate for the first time a rectal cancer ultrasound endoscopy video dataset containing 207 patients for rectal cancer video risk assessment. Additionally, we introduce the Rectal Cancer Video Risk Assessment Network (RCVA-Net), a temporal logic-based framework designed to tackle the classification of rectal cancer ultrasound endoscopy videos. In RCVA-Net, we propose a novel adjacent frames fusion module that effectively integrates the temporal local features from the original video with the global features of the sampled video frames. The intra-video fusion module is employed to capture and learn the temporal dynamics between neighbouring video frames, enhancing the network’s ability to discern subtle nuances in video sequences. Furthermore, we enhance the classification of rectal cancer by randomly incorporating video-level features extracted from the original videos, thereby significantly boosting the performance of rectal cancer classification using ultrasound endoscopic videos. Experimental results on our labelled dataset show that our RCVA-Net can serve as a scalable baseline model with leading performance. The code of this paper can be accessed at https://github.com/JsongZhang/RCVA-Net. | A New Dataset and Baseline Model for Rectal Cancer Risk Assessment in Endoscopic Ultrasound Videos | [
"Zhang, Jiansong",
"Wu, Shengnan",
"Liu, Peizhong",
"Shen, Linlin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 16 |
||
null | https://papers.miccai.org/miccai-2024/paper/3762_paper.pdf | @InProceedings{ Ji_DeformMamba_MICCAI2024,
author = { Ji, Zexin and Zou, Beiji and Kui, Xiaoyan and Vera, Pierre and Ruan, Su },
title = { { Deform-Mamba Network for MRI Super-Resolution } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In this paper, we propose a new architecture, called Deform-Mamba, for MR image super-resolution. Unlike conventional CNN or Transformer-based super-resolution approaches which encounter challenges related to the local receptive field or heavy computational cost, our approach aims to effectively explore the local and global information of images. Specifically, we develop a Deform-Mamba encoder which is composed of two branches, modulated deform block and vision Mamba block. We also design a multi-view context module in the bottleneck layer to explore the multi-view contextual content. Thanks to the extracted features of the encoder, which include content-adaptive local and efficient global information, the vision Mamba decoder finally generates high-quality MR images. Moreover, we introduce a contrastive edge loss to promote the reconstruction of edge- and contrast-related content. Quantitative and qualitative experimental results indicate that our approach on IXI and fastMRI datasets achieves competitive performance. | Deform-Mamba Network for MRI Super-Resolution | [
"Ji, Zexin",
"Zou, Beiji",
"Kui, Xiaoyan",
"Vera, Pierre",
"Ruan, Su"
] | Conference | 2407.05969 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 17 |
|
null | https://papers.miccai.org/miccai-2024/paper/0814_paper.pdf | @InProceedings{ Wan_Dynamic_MICCAI2024,
author = { Wang, Ziyue and Zhang, Ye and Wang, Yifeng and Cai, Linghan and Zhang, Yongbing },
title = { { Dynamic Pseudo Label Optimization in Point-Supervised Nuclei Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Deep learning has achieved impressive results in nuclei segmentation, but the massive requirement for pixel-wise labels remains a significant challenge. To alleviate the annotation burden, existing methods generate pseudo masks for model training using point labels. However, the generated masks are inevitably different from the ground truth, and these dissimilarities are not handled reasonably during the network training, resulting in the subpar performance of the segmentation model. To tackle this issue, we propose a framework named DoNuSeg, enabling Dynamic pseudo label Optimization in point-supervised Nuclei Segmentation. Specifically, DoNuSeg takes advantage of class activation maps (CAMs) to adaptively capture regions with semantics similar to annotated points. To leverage semantic diversity in the hierarchical feature levels, we design a dynamic selection module to choose the optimal one among CAMs from different encoder blocks as pseudo masks. Meanwhile, a CAM-guided contrastive module is proposed to further enhance the accuracy of pseudo masks. In addition to exploiting the semantic information provided by CAMs, we consider location priors inherent to point labels, developing a task-decoupled structure for effectively differentiating nuclei. Extensive experiments demonstrate that DoNuSeg outperforms state-of-the-art point-supervised methods. | Dynamic Pseudo Label Optimization in Point-Supervised Nuclei Segmentation | [
"Wang, Ziyue",
"Zhang, Ye",
"Wang, Yifeng",
"Cai, Linghan",
"Zhang, Yongbing"
] | Conference | 2406.16427 | [
"https://github.com/shinning0821/MICCAI24-DoNuSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 18 |
|
null | https://papers.miccai.org/miccai-2024/paper/3296_paper.pdf | @InProceedings{ Cui_Multilevel_MICCAI2024,
author = { Cui, Xiaoxiao and Jiang, Shanzhi and Sun, Baolin and Li, Yiran and Cao, Yankun and Li, Zhen and Lv, Chaoyang and Liu, Zhi and Cui, Lizhen and Li, Shuo },
title = { { Multilevel Causality Learning for Multi-label Gastric Atrophy Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | No studies have formulated endoscopic grading (EG) of gastric atrophy (GA) as a multi-label classification (MLC) problem, which requires the simultaneous detection of GA and its gastric sites during an endoscopic examination. Accurate EG of GA is crucial for assessing the progression of early gastric cancer. However, the strong visual interference in endoscopic images is caused by various inter-image differences and subtle intra-image differences, leading to confounding contexts and hindering the causalities between class-aware features (CAFs) and multi-label predictions. We propose a multilevel causality learning approach for multi-label gastric atrophy diagnosis for the first time, to learn robust causal CAFs by de-confounding multilevel confounders. Our multilevel causal model is built based on a transformer to construct a multilevel confounder set and implement a progressive causal intervention (PCI) on it. Specifically, the confounder set is constructed by a dual token path sampling module that leverages multiple class tokens and different hidden states of patch tokens to stratify various visual interference. PCI involves attention-based sample-level re-weighting and uncertainty-guided logit-level modulation. Comparative experiments on an endoscopic dataset demonstrate the significant improvements of our model over competing methods such as IDA (0.95% on OP, and 0.65% on mAP) and TS-Former (1.11% on OP, and 1.05% on mAP). | Multilevel Causality Learning for Multi-label Gastric Atrophy Diagnosis | [
"Cui, Xiaoxiao",
"Jiang, Shanzhi",
"Sun, Baolin",
"Li, Yiran",
"Cao, Yankun",
"Li, Zhen",
"Lv, Chaoyang",
"Liu, Zhi",
"Cui, Lizhen",
"Li, Shuo"
] | Conference | [
"https://github.com/rabbittsui/Multilevel-Causal"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 19 |
||
null | https://papers.miccai.org/miccai-2024/paper/2700_paper.pdf | @InProceedings{ Chi_LowShot_MICCAI2024,
author = { Chikontwe, Philip and Kang, Myeongkyun and Luna, Miguel and Nam, Siwoo and Park, Sang Hyun },
title = { { Low-Shot Prompt Tuning for Multiple Instance Learning based Histology Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | In recent years, prompting pre-trained visual-language (VL) models has shown excellent generalization to various downstream tasks in both natural and medical images. However, VL models are sensitive to the choice of input text prompts, requiring careful selection of templates. Moreover, prompt tuning in the weakly supervised/multiple-instance (MIL) setting is fairly under-explored, especially in the field of computational pathology. In this work, we present a novel prompt tuning framework leveraging frozen VL encoders with (i) residual visual feature adaptation, and (ii) text-based context prompt optimization for whole slide image (WSI) level tasks i.e., classification. In contrast with existing approaches using variants of attention-based instance pooling for slide-level representations, we propose synergistic prompt-based pooling of multiple instances as the weighted sum of learnable-context and slide features. By leveraging the mean learned-prompt vectors and pooled slide features, our design facilitates different slide-level tasks. Extensive experiments on public WSI benchmark datasets reveal significant gains over existing prompting methods, including standard baseline multiple instance learners. | Low-Shot Prompt Tuning for Multiple Instance Learning based Histology Classification | [
"Chikontwe, Philip",
"Kang, Myeongkyun",
"Luna, Miguel",
"Nam, Siwoo",
"Park, Sang Hyun"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 20 |
||
null | https://papers.miccai.org/miccai-2024/paper/3783_paper.pdf | @InProceedings{ Leg_Eddeep_MICCAI2024,
author = { Legouhy, Antoine and Callaghan, Ross and Stee, Whitney and Peigneux, Philippe and Azadbakht, Hojjat and Zhang, Hui },
title = { { Eddeep: Fast eddy-current distortion correction for diffusion MRI with deep learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Modern diffusion MRI sequences commonly acquire a large number of volumes with diffusion sensitization gradients of differing strengths or directions. Such sequences rely on echo-planar imaging (EPI) to achieve reasonable scan duration. However, EPI is vulnerable to off-resonance effects, leading to tissue susceptibility and eddy-current induced distortions. The latter is particularly problematic because it causes misalignment between volumes, disrupting downstream modelling and analysis. The essential correction of eddy distortions is typically done post-acquisition, with image registration. However, this is non-trivial because correspondence between volumes can be severely disrupted due to volume-specific signal attenuations induced by varying directions and strengths of the applied gradients. This challenge has been successfully addressed by the popular FSL Eddy tool but at considerable computational cost. We propose an alternative approach, leveraging recent advances in image processing enabled by deep learning (DL). It consists of two convolutional neural networks: 1) An image translator to restore correspondence between images; 2) A registration model to align the translated images. Results demonstrate comparable distortion estimates to FSL Eddy, while requiring only modest training sample sizes. This work, to the best of our knowledge, is the first to tackle this problem with deep learning. Together with recently developed DL-based susceptibility correction techniques, they pave the way for real-time preprocessing of diffusion MRI, facilitating its wider uptake in the clinic. | Eddeep: Fast eddy-current distortion correction for diffusion MRI with deep learning | [
"Legouhy, Antoine",
"Callaghan, Ross",
"Stee, Whitney",
"Peigneux, Philippe",
"Azadbakht, Hojjat",
"Zhang, Hui"
] | Conference | 2405.10723 | [
"github.com/CIG-UCL/eddeep"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 21 |
|
null | https://papers.miccai.org/miccai-2024/paper/0077_paper.pdf | @InProceedings{ Bie_XCoOp_MICCAI2024,
author = { Bie, Yequan and Luo, Luyang and Chen, Zhixuan and Chen, Hao },
title = { { XCoOp: Explainable Prompt Learning for Computer-Aided Diagnosis via Concept-guided Context Optimization } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Utilizing potent representations of the large vision-language models (VLMs) to accomplish various downstream tasks has attracted increasing attention. Within this research field, soft prompt learning has become a representative approach for efficiently adapting VLMs such as CLIP, to tasks like image classification. However, most existing prompt learning methods learn text tokens that are unexplainable, which cannot satisfy the stringent interpretability requirements of Explainable Artificial Intelligence (XAI) in high-stakes scenarios like healthcare. To address this issue, we propose a novel explainable prompt learning framework that leverages medical knowledge by aligning the semantics between images, learnable prompts, and clinical concept-driven prompts at multiple granularities. Moreover, our framework addresses the lack of valuable concept annotations by eliciting knowledge from large language models and offers both visual and textual explanations for the prompts. Extensive experiments and explainability analyses conducted on various datasets, with and without concept labels, demonstrate that our method simultaneously achieves superior diagnostic performance, flexibility, and interpretability, shedding light on the effectiveness of foundation models in facilitating XAI. | XCoOp: Explainable Prompt Learning for Computer-Aided Diagnosis via Concept-guided Context Optimization | [
"Bie, Yequan",
"Luo, Luyang",
"Chen, Zhixuan",
"Chen, Hao"
] | Conference | 2403.09410 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 22 |
|
null | https://papers.miccai.org/miccai-2024/paper/1738_paper.pdf | @InProceedings{ Tan_Clinicalgrade_MICCAI2024,
author = { Tan, Jing Wei and Kim, SeungKyu and Kim, Eunsu and Lee, Sung Hak and Ahn, Sangjeong and Jeong, Won-Ki },
title = { { Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale Whole Slide Images via a Semantically Guided Medical Text Foundation Model } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Vision language models (VLM) have achieved success in both natural language comprehension and image recognition tasks. However, their use in pathology report generation for whole slide images (WSIs) is still limited due to the huge size of multi-scale WSIs and the high cost of WSI annotation. Moreover, in most of the existing research on pathology report generation, sufficient validation regarding clinical efficacy has not been conducted. Herein, we propose a novel Patient-level Multi-organ Pathology Report Generation (PMPRG) model, which utilizes the multi-scale WSI features from our proposed MR-ViT model and their real pathology reports to guide VLM training for accurate pathology report generation. The model then automatically generates a report based on the provided key-feature-attended regional features. We assessed our model using a WSI dataset consisting of multiple organs, including the colon and kidney. Our model achieved a METEOR score of 0.68, demonstrating the effectiveness of our approach. This model allows pathologists to efficiently generate pathology reports for patients, regardless of the number of WSIs involved. | Clinical-grade Multi-Organ Pathology Report Generation for Multi-scale Whole Slide Images via a Semantically Guided Medical Text Foundation Model | [
"Tan, Jing Wei",
"Kim, SeungKyu",
"Kim, Eunsu",
"Lee, Sung Hak",
"Ahn, Sangjeong",
"Jeong, Won-Ki"
] | Conference | 2409.15574 | [
"https://github.com/hvcl/Clinical-grade-Pathology-Report-Generation/tree/main"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 23 |
|
null | https://papers.miccai.org/miccai-2024/paper/1524_paper.pdf | @InProceedings{ Liu_SemanticsAware_MICCAI2024,
author = { Liu, Kechun and Wu, Wenjun and Elmore, Joann G. and Shapiro, Linda G. },
title = { { Semantics-Aware Attention Guidance for Diagnosing Whole Slide Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Accurate cancer diagnosis remains a critical challenge in digital pathology, largely due to the gigapixel size and complex spatial relationships present in whole slide images. Traditional multiple instance learning (MIL) methods often struggle with these intricacies, especially in preserving the necessary context for accurate diagnosis. In response, we introduce a novel framework named Semantics-Aware Attention Guidance (SAG), which includes 1) a technique for converting diagnostically relevant entities into attention signals, and 2) a flexible attention loss that efficiently integrates various semantically significant information, such as tissue anatomy and cancerous regions. Our experiments on two distinct cancer datasets demonstrate consistent improvements in accuracy, precision, and recall with two state-of-the-art baseline models. Qualitative analysis further reveals that the incorporation of heuristic guidance enables the model to focus on regions critical for diagnosis. SAG is not only effective for the models discussed here, but its adaptability extends to any attention-based diagnostic model. This opens up exciting possibilities for further improving the accuracy and efficiency of cancer diagnostics. | Semantics-Aware Attention Guidance for Diagnosing Whole Slide Images | [
"Liu, Kechun",
"Wu, Wenjun",
"Elmore, Joann G.",
"Shapiro, Linda G."
] | Conference | 2404.10894 | [
"https://github.com/kechunl/SAG"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 24 |
|
null | https://papers.miccai.org/miccai-2024/paper/1877_paper.pdf | @InProceedings{ Di_Interpretable_MICCAI2024,
author = { Di Folco, Maxime and Bercea, Cosmin I. and Chan, Emily and Schnabel, Julia A. },
title = { { Interpretable Representation Learning of Cardiac MRI via Attribute Regularization } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models. Several approaches have been recently considered to encode attributes in the latent space to enhance its interpretability. Notably, attribute regularization aims to encode a set of attributes along the dimensions of a latent representation. However, this approach is based on Variational AutoEncoder and suffers from blurry reconstruction. In this paper, we propose an Attributed-regularized Soft Introspective Variational Autoencoder that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder. We demonstrate on short-axis cardiac Magnetic Resonance images of the UK Biobank the ability of the proposed method to address blurry reconstruction issues of variational autoencoder methods while preserving the latent space interpretability. | Interpretable Representation Learning of Cardiac MRI via Attribute Regularization | [
"Di Folco, Maxime",
"Bercea, Cosmin I.",
"Chan, Emily",
"Schnabel, Julia A."
] | Conference | 2406.08282 | [
"https://github.com/compai-lab/2024-miccai-di-folco"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 25 |
|
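The attribute-regularization idea summarized in the record above (encoding a clinical attribute along one latent dimension of the autoencoder) can be made concrete with a minimal PyTorch sketch. This is not the authors' implementation; the function name, the pairwise-difference formulation, and the `delta` scale are assumptions based on the commonly used attribute-regularization penalty, shown only to illustrate the concept.

```python
import torch
import torch.nn.functional as F

def attribute_regularization_loss(z_dim: torch.Tensor,
                                  attribute: torch.Tensor,
                                  delta: float = 1.0) -> torch.Tensor:
    """Encourage a single latent dimension to vary monotonically with an attribute.

    z_dim:     (B,) values of one latent dimension for a batch.
    attribute: (B,) corresponding attribute values (e.g. a cardiac shape measure).
    """
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)           # (B, B) pairwise latent differences
    da = attribute.unsqueeze(0) - attribute.unsqueeze(1)   # (B, B) pairwise attribute differences
    # Penalize disagreement between the ordering of the latent dimension
    # and the ordering of the attribute within the batch.
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))

# Toy usage: regularize latent dimension 0 with a scalar attribute,
# added on top of the usual reconstruction + KL (or adversarial) objective.
z = torch.randn(8, 16, requires_grad=True)   # latent codes from an encoder
attr = torch.rand(8)                         # attribute values for the same batch
loss = attribute_regularization_loss(z[:, 0], attr)
loss.backward()
```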
null | https://papers.miccai.org/miccai-2024/paper/0714_paper.pdf | @InProceedings{ Xin_Crossconditioned_MICCAI2024,
author = { Xing, Zhaohu and Yang, Sicheng and Chen, Sixiang and Ye, Tian and Yang, Yijun and Qin, Jing and Zhu, Lei },
title = { { Cross-conditioned Diffusion Model for Medical Image to Image Translation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Multi-modal magnetic resonance imaging (MRI) provides rich, complementary information for analyzing diseases.
However, the practical challenges of acquiring multiple MRI modalities, such as cost, scan time, and safety considerations, often result in incomplete datasets. This affects both the quality of diagnosis and the performance of deep learning models trained on such data.
Recent advancements in generative adversarial networks (GANs) and denoising diffusion models have shown promise in natural and medical image-to-image translation tasks. However, the complexity of training GANs and the computational expense associated with diffusion models hinder their development and application in this task.
To address these issues, we introduce a Cross-conditioned Diffusion Model (CDM) for medical image-to-image translation.
The core idea of CDM is to use the distribution of target modalities as guidance to improve synthesis quality, while achieving higher generation efficiency compared to conventional diffusion models.
First, we propose a Modality-specific Representation Model (MRM) to model the distribution of target modalities. Then, we design a Modality-decoupled Diffusion Network (MDN) to efficiently and effectively learn the distribution from MRM. Finally, a Cross-conditioned UNet (C-UNet) with a Condition Embedding module is designed to synthesize the target modalities with the source modalities as input and the target distribution for guidance. Extensive experiments conducted on the BraTS2023 and UPenn-GBM benchmark datasets demonstrate the superiority of our method. | Cross-conditioned Diffusion Model for Medical Image to Image Translation | [
"Xing, Zhaohu",
"Yang, Sicheng",
"Chen, Sixiang",
"Ye, Tian",
"Yang, Yijun",
"Qin, Jing",
"Zhu, Lei"
] | Conference | 2409.08500 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 26 |
|
null | https://papers.miccai.org/miccai-2024/paper/3332_paper.pdf | @InProceedings{ Li_Exploiting_MICCAI2024,
author = { Li, Yueheng and Guan, Xianchao and Wang, Yifeng and Zhang, Yongbing },
title = { { Exploiting Supervision Information in Weakly Paired Images for IHC Virtual Staining } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Immunohistochemical (IHC) staining plays a pivotal role in the evaluation of numerous diseases. However, the standard IHC staining process involves a series of time-consuming and labor-intensive steps, which severely hinders its application in histopathology. With the rapid advancement of deep learning techniques, virtual staining has promising potential to address this issue. But it has long been challenging to determine how to effectively provide supervision information for networks by utilizing consecutive tissue slices. To this end, we propose a weakly supervised pathological consistency constraint acting on multiple layers of GAN. Due to variations of receptive fields in different layers of the network, weakly paired consecutive slices have different degrees of alignment. Thus we allocate adaptive weights to different layers in order to dynamically adjust the supervision strengths of the pathological consistency constraint. Additionally, as an effective deep generative model, GAN can generate high-fidelity images, but it suffers from the issue of discriminator failure. To tackle this issue, a discriminator contrastive regularization method is proposed. It compels the discriminator to contrast the differences between generated images and real images from consecutive layers, thereby enhancing its capability to distinguish virtual images. The experimental results demonstrate that our method generates IHC images from H&E images robustly and identifies cancer regions accurately. Compared to previous methods, our method achieves superior results. | Exploiting Supervision Information in Weakly Paired Images for IHC Virtual Staining | [
"Li, Yueheng",
"Guan, Xianchao",
"Wang, Yifeng",
"Zhang, Yongbing"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 27 |
||
null | https://papers.miccai.org/miccai-2024/paper/0281_paper.pdf | @InProceedings{ Guo_Trimodal_MICCAI2024,
author = { Guo, Diandian and Lin, Manxi and Pei, Jialun and Tang, He and Jin, Yueming and Heng, Pheng-Ann },
title = { { Tri-modal Confluence with Temporal Dynamics for Scene Graph Generation in Operating Rooms } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | A comprehensive understanding of surgical scenes allows for monitoring of the surgical process, reducing the occurrence of accidents and enhancing efficiency for medical professionals. Semantic modeling within operating rooms, as a scene graph generation (SGG) task, is challenging since it involves consecutive recognition of subtle surgical actions over prolonged periods. To address this challenge, we propose a Tri-modal (i.e., images, point clouds, and language) confluence with Temporal dynamics framework, termed TriTemp-OR. Diverging from previous approaches that integrated temporal information via memory graphs, our method embraces two advantages: 1) we directly exploit bi-modal temporal information from the video streaming for hierarchical feature interaction, and 2) the prior knowledge from Large Language Models (LLMs) is embedded to alleviate the class-imbalance problem in the operating theatre. Specifically, our model performs temporal interactions across 2D frames and 3D point clouds, including a scale-adaptive multi-view temporal interaction (ViewTemp) and a geometric-temporal point aggregation (PointTemp). Furthermore, we transfer knowledge from the biomedical LLM, LLaVA-Med, to deepen the comprehension of intraoperative relations. The proposed TriTemp-OR enables the aggregation of tri-modal features through relation-aware unification to predict relations to generate scene graphs. Experimental results on the 4D-OR benchmark demonstrate the superior performance of our model for long-term OR streaming. Codes are available at https://github.com/RascalGdd/TriTemp-OR. | Tri-modal Confluence with Temporal Dynamics for Scene Graph Generation in Operating Rooms | [
"Guo, Diandian",
"Lin, Manxi",
"Pei, Jialun",
"Tang, He",
"Jin, Yueming",
"Heng, Pheng-Ann"
] | Conference | 2404.09231 | [
"https://github.com/RascalGdd/TriTemp-OR"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 28 |
|
null | https://papers.miccai.org/miccai-2024/paper/0668_paper.pdf | @InProceedings{ Par_BlackBox_MICCAI2024,
author = { Paranjape, Jay N. and Sikder, Shameema and Vedula, S. Swaroop and Patel, Vishal M. },
title = { { Black-Box Adaptation for Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | In recent years, various large foundation models have been proposed for image segmentation. These models are often trained on large amounts of data corresponding to general computer vision tasks. Hence, these models do not perform well on medical data. There have been some attempts in the literature to perform parameter-efficient finetuning of such foundation models for medical image segmentation. However, these approaches assume that all the parameters of the model are available for adaptation. But, in many cases, these models are released as APIs or Black-Boxes, with no or limited access to the model parameters and data. In addition, finetuning methods also require a significant amount of compute, which may not be available for the downstream task. At the same time, medical data can’t be shared with third-party agents for finetuning due to privacy reasons. To tackle these challenges, we pioneer a Black-Box adaptation technique for prompted medical image segmentation, called BAPS. BAPS has two components - (i) An Image-Prompt decoder (IP decoder) module that generates visual prompts given an image and a prompt, and (ii) A Zero Order Optimization (ZOO) Method, called SPSA-GC that is used to update the IP decoder without the need for backpropagating through the foundation model. Thus, our method does not require any knowledge about the foundation model’s weights or gradients. We test BAPS on four different modalities and show that our method can improve the original model’s performance by around 4%. The code is available at https://github.com/JayParanjape/Blackbox. | Black-Box Adaptation for Medical Image Segmentation | [
"Paranjape, Jay N.",
"Sikder, Shameema",
"Vedula, S. Swaroop",
"Patel, Vishal M."
] | Conference | [
"https://github.com/JayParanjape/Blackbox"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 29 |
||
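The zero-order optimization component described in the record above can be illustrated with a generic SPSA gradient estimator. This sketch is not the paper's SPSA-GC variant (which adds a gradient-correction term); the step sizes and the toy quadratic objective are illustrative assumptions showing how parameters can be updated from loss values alone, without backpropagating through a black-box model.

```python
import torch

def spsa_gradient(loss_fn, params: torch.Tensor, c: float = 0.01) -> torch.Tensor:
    """One SPSA gradient estimate from two forward evaluations of loss_fn."""
    # Rademacher (+1/-1) simultaneous perturbation of all parameters.
    delta = torch.randint(0, 2, params.shape, dtype=params.dtype) * 2 - 1
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    # Element-wise estimate: (L+ - L-) / (2 * c * delta_i).
    return (loss_plus - loss_minus) / (2.0 * c * delta)

# Toy usage: minimize a quadratic using only function evaluations.
target = torch.ones(10)
loss_fn = lambda p: ((p - target) ** 2).sum()
theta = torch.zeros(10)
for _ in range(200):
    theta = theta - 0.05 * spsa_gradient(loss_fn, theta)
print(loss_fn(theta).item())   # close to 0 after the zeroth-order updates
```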
null | https://papers.miccai.org/miccai-2024/paper/2831_paper.pdf | @InProceedings{ Zho_Efficient_MICCAI2024,
author = { Zhou, Lingyu and Yi, Zhang and Zhou, Kai and Xu, Xiuyuan },
title = { { Efficient and Gender-adaptive Graph Vision Mamba for Pediatric Bone Age Assessment } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Bone age assessment (BAA) is crucial for evaluating the skeletal maturity of children in pediatric clinics. The decline in assessment accuracy is attributed to the existence of inter-gender disparity. Current automatic methods bridge this gap by relying on bone regions of interest and gender, resulting in high annotation costs. Meanwhile, the models still grapple with efficiency bottleneck for lightweight deployment. To address these challenges, this study presents Gender-adaptive Graph Vision Mamba (GGVMamba) framework with only raw X-ray images. Concretely, a region augmentation process, called directed scan module, is proposed to integrate local context from various directions of bone X-ray images. Then we construct a novel graph Mamba encoder with linear complexity, fostering robust modelling for both within and among region features. Moreover, a gender adaptive strategy is proposed to improve gender consistency by dynamically selecting gender-specific graph structures. Experiments demonstrate that GGVMamba obtains state-of-the-art results with MAE of 3.82, 4.91, and 4.14 on RSNA, RHPE, and DHA, respectively. Notably, GGVMamba shows exceptional gender consistency and optimal efficiency with minimal GPU load. The code is available at https://github.com/SCU-zly/GGVMamba. | Efficient and Gender-adaptive Graph Vision Mamba for Pediatric Bone Age Assessment | [
"Zhou, Lingyu",
"Yi, Zhang",
"Zhou, Kai",
"Xu, Xiuyuan"
] | Conference | [
"https://github.com/SCU-zly/GGVMamba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 30 |
||
null | https://papers.miccai.org/miccai-2024/paper/1909_paper.pdf | @InProceedings{ Gui_CAVM_MICCAI2024,
author = { Gui, Lujun and Ye, Chuyang and Yan, Tianyi },
title = { { CAVM: Conditional Autoregressive Vision Model for Contrast-Enhanced Brain Tumor MRI Synthesis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Contrast-enhanced magnetic resonance imaging (MRI) is pivotal in the pipeline of brain tumor segmentation and analysis. Gadolinium-based contrast agents, as the most commonly used contrast agents, are expensive and may have potential side effects, and it is desired to obtain contrast-enhanced brain tumor MRI scans without the actual use of contrast agents. Deep learning methods have been applied to synthesize virtual contrast-enhanced MRI scans from non-contrast images. However, as this synthesis problem is inherently ill-posed, these methods fall short in producing high-quality results. In this work, we propose Conditional Autoregressive Vision Model (CAVM) for improving the synthesis of contrast-enhanced brain tumor MRI. As the enhancement of image intensity grows with a higher dose of contrast agents, we assume that it is less challenging to synthesize a virtual image with a lower dose, where the difference between the contrast-enhanced and non-contrast images is smaller. Thus, CAVM gradually increases the contrast agent dosage and produces higher-dose images based on previous lower-dose ones until the final desired dose is achieved. Inspired by the resemblance between the gradual dose increase and the Chain-of-Thought approach in natural language processing, CAVM uses an autoregressive strategy with a decomposition tokenizer and a decoder. Specifically, the tokenizer is applied to obtain a more compact image representation for computational efficiency, and it decomposes the image into dose-variant and dose-invariant tokens. Then, a masked self-attention mechanism is developed for autoregression that gradually increases the dose of the virtual image based on the dose-variant tokens. Finally, the updated dose-variant tokens corresponding to the desired dose are decoded together with dose-invariant tokens to produce the final contrast-enhanced MRI. CAVM was validated on the BraSyn-2023 dataset with brain tumor MRI, where it outperforms state-of-the-art methods. | CAVM: Conditional Autoregressive Vision Model for Contrast-Enhanced Brain Tumor MRI Synthesis | [
"Gui, Lujun",
"Ye, Chuyang",
"Yan, Tianyi"
] | Conference | 2406.16074 | [
"https://github.com/Luc4Gui/CAVM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 31 |
|
null | https://papers.miccai.org/miccai-2024/paper/3334_paper.pdf | @InProceedings{ Cha_Forecasting_MICCAI2024,
author = { Chakravarty, Arunava and Emre, Taha and Lachinov, Dmitrii and Rivail, Antoine and Scholl, Hendrik P. N. and Fritsche, Lars and Sivaprasad, Sobha and Rueckert, Daniel and Lotery, Andrew and Schmidt-Erfurth, Ursula and Bogunović, Hrvoje },
title = { { Forecasting Disease Progression with Parallel Hyperplanes in Longitudinal Retinal OCT } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Predicting future disease progression risk from medical images is challenging due to patient heterogeneity and subtle or unknown imaging biomarkers. Moreover, deep learning (DL) methods for survival analysis are susceptible to image domain shifts across scanners. We tackle these issues in the task of predicting late dry Age-related Macular Degeneration (dAMD) onset from retinal OCT scans. We propose a novel DL method for survival prediction to jointly predict from the current scan a risk score, inversely related to time-to-conversion, and the probability of conversion within a time interval t. It uses a family of parallel hyperplanes generated by parameterizing the bias term as a function of t. In addition, we develop unsupervised losses based on intra-subject image pairs to ensure that risk scores increase over time and that future conversion predictions are consistent with AMD stage prediction using actual scans of future visits. Such losses enable data-efficient fine-tuning of the trained model on new unlabeled datasets acquired with a different scanner. Extensive evaluation on two large datasets acquired with different scanners resulted in mean AUROCs of 0.82 for Dataset-1 and 0.83 for Dataset-2 across prediction intervals of 6, 12, and 24 months. | Forecasting Disease Progression with Parallel Hyperplanes in Longitudinal Retinal OCT | [
"Chakravarty, Arunava",
"Emre, Taha",
"Lachinov, Dmitrii",
"Rivail, Antoine",
"Scholl, Hendrik P. N.",
"Fritsche, Lars",
"Sivaprasad, Sobha",
"Rueckert, Daniel",
"Lotery, Andrew",
"Schmidt-Erfurth, Ursula",
"Bogunović, Hrvoje"
] | Conference | 2409.20195 | [
"https://github.com/arunava555/Forecast_parallel_hyperplanes"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 32 |
|
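The parallel-hyperplane construction described in the record above — one shared risk direction with a bias that depends on the prediction horizon t — can be sketched as a small prediction head. The module and parameter names, and the cumulative-softplus trick used to keep conversion probabilities non-decreasing in t, are assumptions for illustration rather than the authors' exact parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelHyperplaneHead(nn.Module):
    """One shared hyperplane normal `w` plus a per-horizon bias b(t)."""

    def __init__(self, feat_dim: int, num_intervals: int = 3):
        super().__init__()
        self.w = nn.Linear(feat_dim, 1, bias=False)        # shared risk direction
        self.raw_bias = nn.Parameter(torch.zeros(num_intervals))

    def forward(self, feats: torch.Tensor):
        risk = self.w(feats).squeeze(-1)                    # (B,) risk score
        # Cumulative softplus keeps the biases, and hence the probabilities,
        # non-decreasing across longer prediction horizons.
        bias = torch.cumsum(F.softplus(self.raw_bias), dim=0)
        prob = torch.sigmoid(risk.unsqueeze(-1) + bias)     # (B, num_intervals)
        return risk, prob

# Toy usage: features from an OCT encoder, horizons of e.g. 6/12/24 months.
head = ParallelHyperplaneHead(feat_dim=128, num_intervals=3)
risk, prob = head(torch.randn(4, 128))
print(risk.shape, prob.shape)   # torch.Size([4]) torch.Size([4, 3])
```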
null | https://papers.miccai.org/miccai-2024/paper/2112_paper.pdf | @InProceedings{ Nae_Trexplorer_MICCAI2024,
author = { Naeem, Roman and Hagerman, David and Svensson, Lennart and Kahl, Fredrik },
title = { { Trexplorer: Recurrent DETR for Topologically Correct Tree Centerline Tracking } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Tubular structures with tree topology such as blood vessels, lung airways, and more are abundant in human anatomy. Tracking these structures with correct topology is crucial for many downstream tasks that help in early detection of conditions such as vascular and pulmonary diseases. Current methods for centerline tracking suffer from predicting topologically incorrect centerlines and complex model pipelines. To mitigate these issues we propose Trexplorer, a recurrent DETR based model that tracks topologically correct centerlines of tubular tree objects in 3D volumes using a simple model pipeline. We demonstrate the model’s performance on a publicly available synthetic vessel centerline dataset and show that our model outperforms the state-of-the-art on centerline topology and graph-related metrics, and performs well on detection metrics. The code is available at https://github.com/RomStriker/Trexplorer. | Trexplorer: Recurrent DETR for Topologically Correct Tree Centerline Tracking | [
"Naeem, Roman",
"Hagerman, David",
"Svensson, Lennart",
"Kahl, Fredrik"
] | Conference | [
"https://github.com/RomStriker/Trexplorer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 33 |
||
null | https://papers.miccai.org/miccai-2024/paper/2717_paper.pdf | @InProceedings{ Che_FedEvi_MICCAI2024,
author = { Chen, Jiayi and Ma, Benteng and Cui, Hengfei and Xia, Yong },
title = { { FedEvi: Improving Federated Medical Image Segmentation via Evidential Weight Aggregation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15010 },
month = {October},
pages = { pending },
} | Federated learning enables collaborative knowledge acquisition among clinical institutions while preserving data privacy. However, feature heterogeneity across institutions can compromise the global model’s performance and generalization capability. Existing methods often adjust aggregation weights dynamically to improve the global model’s generalization but rely heavily on the local models’ performance or reliability, excluding an explicit measure of the generalization gap arising from deploying the global model across varied local datasets. To address this issue, we propose FedEvi, a method that adjusts the aggregation weights based on the generalization gap between the global model and each local dataset and the reliability of local models. We utilize a Dirichlet-based evidential model to disentangle the uncertainty representation of each local model and the global model into epistemic uncertainty and aleatoric uncertainty. Then, we quantify the global generalization gap using the epistemic uncertainty of the global model and assess the reliability of each local model using its aleatoric uncertainty. Afterward, we design aggregation weights using the global generalization gap and local reliability. Comprehensive experimentation reveals that FedEvi consistently surpasses 12 state-of-the-art methods across three real-world multi-center medical image segmentation tasks, demonstrating the effectiveness of FedEvi in bolstering the generalization capacity of the global model in heterogeneous federated scenarios. The code will be available at
https://github.com/JiayiChen815/FedEvi. | FedEvi: Improving Federated Medical Image Segmentation via Evidential Weight Aggregation | [
"Chen, Jiayi",
"Ma, Benteng",
"Cui, Hengfei",
"Xia, Yong"
] | Conference | [
"https://github.com/JiayiChen815/FedEvi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 34 |
||
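For the record above, a minimal sketch of how epistemic and aleatoric proxies can be read off a Dirichlet (evidential) output may help. The function below is generic evidential bookkeeping — evidence to concentration parameters, vacuity as an epistemic proxy, expected entropy as an aleatoric proxy — and is an assumption for illustration; it does not reproduce FedEvi's uncertainty decomposition or its aggregation-weight design.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainties(evidence: torch.Tensor):
    """Generic uncertainty proxies from per-class Dirichlet evidence.

    evidence: (B, K) non-negative evidence, e.g. softplus of network logits.
    Returns expected class probabilities, vacuity (epistemic proxy),
    and expected entropy (aleatoric proxy).
    """
    alpha = evidence + 1.0                                # Dirichlet concentrations
    strength = alpha.sum(dim=-1, keepdim=True)            # S = sum_k alpha_k
    probs = alpha / strength                              # expected categorical probabilities
    vacuity = evidence.shape[-1] / strength.squeeze(-1)   # K / S, high when evidence is scarce
    expected_entropy = (probs * (torch.digamma(strength + 1.0)
                                 - torch.digamma(alpha + 1.0))).sum(dim=-1)
    return probs, vacuity, expected_entropy

# Toy usage with a binary (foreground / background) evidential head.
evidence = F.softplus(torch.randn(4, 2))
probs, epistemic, aleatoric = dirichlet_uncertainties(evidence)
```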
null | https://papers.miccai.org/miccai-2024/paper/1439_paper.pdf | @InProceedings{ Tei_CTbased_MICCAI2024,
author = { Teimouri, Reihaneh and Kersten-Oertel, Marta and Xiao, Yiming },
title = { { CT-based brain ventricle segmentation via diffusion Schrödinger Bridge without target domain ground truths } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Efficient and accurate brain ventricle segmentation from clinical CT scans is critical for emergency surgeries like ventriculostomy. With the challenges in poor soft tissue contrast and a scarcity of well-annotated databases for clinical brain CTs, we introduce a novel uncertainty-aware ventricle segmentation technique without the need of CT segmentation ground truths by leveraging diffusion-model-based domain adaptation. Specifically, our method employs the diffusion Schrödinger Bridge and an attention recurrent residual U-Net to capitalize on unpaired CT and MRI scans to derive automatic CT segmentation from those of the MRIs, which are more accessible. Importantly, we propose an end-to-end, joint training framework of image translation and segmentation tasks, and demonstrate its benefit over training individual tasks separately. By comparing the proposed method against similar setups using two different GAN models for domain adaptation (CycleGAN and CUT), we also reveal the advantage of diffusion models towards improved segmentation and image translation quality. With a Dice score of 0.78±0.27, our proposed method outperformed the compared methods, including SynSeg-Net, while providing intuitive uncertainty measures to further facilitate quality control of the automatic segmentation outcomes. The code is available at: https://github.com/HealthX-Lab/DiffusionSynCTSeg. | CT-based brain ventricle segmentation via diffusion Schrödinger Bridge without target domain ground truths | [
"Teimouri, Reihaneh",
"Kersten-Oertel, Marta",
"Xiao, Yiming"
] | Conference | [
"https://github.com/HealthX-Lab/DiffusionSynCTSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 35 |
||
null | https://papers.miccai.org/miccai-2024/paper/3180_paper.pdf | @InProceedings{ Kam_Is_MICCAI2024,
author = { Kampen, Peter Johannes Tejlgaard and Christensen, Anders Nymark and Hannemose, Morten Rieger },
title = { { Is this hard for you? Personalized human difficulty estimation for skin lesion diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Predicting the probability of human error is an important problem with applications ranging from optimizing learning environments to distributing cases among doctors in a clinic. In both of these instances, predicting the probability of error is equivalent to predicting the difficulty of the assignment, e.g., diagnosing a specific image of a skin lesion. However, the difficulty of a case is subjective since what is difficult for one person is not necessarily difficult for another. We present a novel approach for personalized estimation of human difficulty, using a transformer-based neural network that looks at previous cases and if the user answered these correctly. We demonstrate our method on doctors diagnosing skin lesions and on a language learning data set showing generalizability across domains. Our approach utilizes domain representations by first encoding each case using pre-trained neural networks and subsequently using these as tokens in a sequence modeling task. We significantly outperform all baselines, both for cases that are in the training set and for unseen cases. Additionally, we show that our method is robust towards the quality of the embeddings and how the performance increases as more answers from a user are available. Our findings suggest that this approach could pave the way for truly personalized learning experiences in medical diagnostics, enhancing the quality of patient care. | Is this hard for you? Personalized human difficulty estimation for skin lesion diagnosis | [
"Kampen, Peter Johannes Tejlgaard",
"Christensen, Anders Nymark",
"Hannemose, Morten Rieger"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 36 |
||
null | https://papers.miccai.org/miccai-2024/paper/1010_paper.pdf | @InProceedings{ Din_CrossModality_MICCAI2024,
author = { Ding, Zhengyao and Hu, Yujian and Li, Ziyu and Zhang, Hongkun and Wu, Fei and Xiang, Yilang and Li, Tian and Liu, Ziyi and Chu, Xuesen and Huang, Zhengxing },
title = { { Cross-Modality Cardiac Insight Transfer: A Contrastive Learning Approach to Enrich ECG with CMR Features } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Cardiovascular diseases are the leading cause of death worldwide, and accurate diagnostic tools are crucial for their early detection and treatment. Electrocardiograms (ECG) offer a non-invasive and widely accessible diagnostic method. Despite their convenience, they are limited in providing in-depth cardiovascular information. On the other hand, Cardiac Magnetic Resonance Imaging (CMR) can reveal detailed structural and functional heart information; however, it is costly and not widely accessible. This study aims to bridge this gap through a contrastive learning framework that deeply integrates ECG data with insights from CMR, allowing the extraction of cardiovascular information solely from ECG. We developed an innovative contrastive learning algorithm trained on a large-scale paired ECG and CMR dataset, enabling ECG data to map onto the feature space of CMR data. Experimental results demonstrate that our method significantly improves the accuracy of cardiovascular disease diagnosis using only ECG data. Furthermore, our approach enhances the correlation coefficient for predicting cardiac traits from ECG, revealing potential connections between ECG and CMR. This study not only proves the effectiveness of contrastive learning in cross-modal medical image analysis but also offers a low-cost, efficient way to leverage existing ECG equipment for a deeper understanding of cardiovascular health conditions. Our code is available at https://github.com/Yukui-1999/ECCL. | Cross-Modality Cardiac Insight Transfer: A Contrastive Learning Approach to Enrich ECG with CMR Features | [
"Ding, Zhengyao",
"Hu, Yujian",
"Li, Ziyu",
"Zhang, Hongkun",
"Wu, Fei",
"Xiang, Yilang",
"Li, Tian",
"Liu, Ziyi",
"Chu, Xuesen",
"Huang, Zhengxing"
] | Conference | [
"https://github.com/Yukui-1999/ECCL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 37 |
||
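The cross-modal contrastive objective referenced in the record above is, at its core, a CLIP-style symmetric InfoNCE between paired ECG and CMR embeddings. The sketch below is a generic version of that objective; the encoders, embedding size, and temperature are placeholder assumptions, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def ecg_cmr_contrastive_loss(ecg_emb: torch.Tensor,
                             cmr_emb: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired ECG / CMR embeddings, each (B, D)."""
    ecg = F.normalize(ecg_emb, dim=-1)
    cmr = F.normalize(cmr_emb, dim=-1)
    logits = ecg @ cmr.t() / temperature                 # (B, B) scaled cosine similarities
    targets = torch.arange(ecg.size(0), device=ecg.device)
    loss_e2c = F.cross_entropy(logits, targets)          # ECG -> matching CMR
    loss_c2e = F.cross_entropy(logits.t(), targets)      # CMR -> matching ECG
    return 0.5 * (loss_e2c + loss_c2e)

# Toy usage with random embeddings standing in for the two encoders' outputs.
loss = ecg_cmr_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```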
null | https://papers.miccai.org/miccai-2024/paper/3269_paper.pdf | @InProceedings{ Esh_Representation_MICCAI2024,
author = { Eshraghi Dehaghani, Mehrdad and Sabour, Amirhossein and Madu, Amarachi B. and Lourentzou, Ismini and Moradi, Mehdi },
title = { { Representation Learning with a Transformer-Based Detection Model for Localized Chest X-Ray Disease and Progression Detection } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Medical image interpretation often encompasses diverse tasks, yet prevailing AI approaches predominantly favor end-to-end image-to-text models for automatic chest X-ray reading and analysis, often overlooking critical components of radiology reports. At the same time, employing separate models for related but distinct tasks leads to computational over-head and the inability to harness the benefits of shared data abstractions. In this work, we introduce a framework for chest X-Ray interpretation, utilizing a Transformer-based object detection model trained on abundant data for learning localized representations. Our model achieves a mean average precision of ∼ 94% in identifying semantically meaningful anatomical regions, facilitating downstream tasks, namely localized disease detection and localized progression monitoring. Our approach yields competitive results in localized disease detection, with an average ROC 89.1% over 9 diseases. In addition, to the best of our knowledge, our work is the first to tackle localized disease progression monitoring, with the proposed model being able to track changes in specific regions of interest (RoIs) with an average accuracy ∼ 67% and average F1 score of ∼ 71%. Code is available at https://github.com/McMasterAIHLab/CheXDetector. | Representation Learning with a Transformer-Based Detection Model for Localized Chest X-Ray Disease and Progression Detection | [
"Eshraghi Dehaghani, Mehrdad",
"Sabour, Amirhossein",
"Madu, Amarachi B.",
"Lourentzou, Ismini",
"Moradi, Mehdi"
] | Conference | [
"https://github.com/McMasterAIHLab/CheXDetector"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 38 |
||
null | https://papers.miccai.org/miccai-2024/paper/1194_paper.pdf | @InProceedings{ Che_BIMCVR_MICCAI2024,
author = { Chen, Yinda and Liu, Che and Liu, Xiaoyu and Arcucci, Rossella and Xiong, Zhiwei },
title = { { BIMCV-R: A Landmark Dataset for 3D CT Text-Image Retrieval } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | The burgeoning integration of 3D medical imaging into healthcare has led to a substantial increase in the workload of medical professionals. To assist clinicians in their diagnostic processes and alleviate their workload, the development of a robust system for retrieving similar case studies presents a viable solution.
While the concept holds great promise, the field of 3D medical text-image retrieval is currently limited by the absence of robust evaluation benchmarks and curated datasets. To remedy this, our study presents a groundbreaking dataset, BIMCV-R, which includes an extensive collection of 8,069 3D CT volumes, encompassing over 2 million slices, paired with their respective radiological reports.
Expanding upon the foundational work of our dataset, we craft a retrieval strategy, MedFinder. This approach employs a dual-stream network architecture, harnessing the potential of large language models to advance the field of medical image retrieval beyond existing text-image retrieval solutions. It marks our preliminary step towards developing a system capable of facilitating text-to-image, image-to-text, and keyword-based retrieval tasks. Our project is available at https://huggingface.co./datasets/cyd0806/BIMCV-R. | BIMCV-R: A Landmark Dataset for 3D CT Text-Image Retrieval | [
"Chen, Yinda",
"Liu, Che",
"Liu, Xiaoyu",
"Arcucci, Rossella",
"Xiong, Zhiwei"
] | Conference | 2403.15992 | [
""
] | https://huggingface.co./papers/2403.15992 | 0 | 0 | 0 | 5 | [] | [
"cyd0806/BIMCV-R"
] | [] | [] | [
"cyd0806/BIMCV-R"
] | [] | 1 | Poster | 39 |
null | https://papers.miccai.org/miccai-2024/paper/1151_paper.pdf | @InProceedings{ Liu_MRScore_MICCAI2024,
author = { Liu, Yunyi and Wang, Zhanyu and Li, Yingshu and Liang, Xinyu and Liu, Lingqiao and Wang, Lei and Zhou, Luping },
title = { { MRScore: Evaluating Medical Report with LLM-based Reward System } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | We propose MRScore, an innovative automatic evaluation metric specifically tailored for the generation of radiology reports. Traditional (natural language generation) NLG metrics like BLEU are inadequate for accurately assessing reports, particularly those generated by Large Language Models (LLMs). Our experimental findings give systematic evidence of these inadequacies within this paper. To overcome this challenge, we have developed a unique framework intended to guide LLMs in evaluating radiology reports, which was created in collaboration with radiologists adhering to standard human report evaluation procedures. Using this as a prompt can ensure that the LLMs’ output closely mirrors human analysis. We then used the data generated by LLMs to establish a human-labeled dataset by pairing them with accept and reject samples, subsequently training the MRScore model as the reward model with this dataset. MRScore has demonstrated a higher correlation with human judgments and superior performance in model selection when compared with traditional metrics. Our code is available on GitHub at: https://github.com/yunyiliu/MRScore. | MRScore: Evaluating Medical Report with LLM-based Reward System | [
"Liu, Yunyi",
"Wang, Zhanyu",
"Li, Yingshu",
"Liang, Xinyu",
"Liu, Lingqiao",
"Wang, Lei",
"Zhou, Luping"
] | Conference | [
"https://github.com/yunyiliu/MRScore"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 40 |
||
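The reward-model training referenced in the record above (accept/reject report pairs) typically reduces to a pairwise ranking objective. The Bradley–Terry style loss below is a generic sketch of that step; the scoring model and the way reports are encoded are assumptions, not MRScore's actual architecture.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(score_accepted: torch.Tensor,
                         score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry ranking loss: accepted reports should score above rejected ones."""
    return -F.logsigmoid(score_accepted - score_rejected).mean()

# Toy usage: scalar reward scores for a batch of (accepted, rejected) report pairs.
loss = pairwise_reward_loss(torch.randn(16), torch.randn(16))
```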
null | https://papers.miccai.org/miccai-2024/paper/3127_paper.pdf | @InProceedings{ Che_CausalCLIPSeg_MICCAI2024,
author = { Chen, Yaxiong and Wei, Minghong and Zheng, Zixuan and Hu, Jingliang and Shi, Yilei and Xiong, Shengwu and Zhu, Xiao Xiang and Mou, Lichao },
title = { { CausalCLIPSeg: Unlocking CLIP’s Potential in Referring Medical Image Segmentation with Causal Intervention } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Referring medical image segmentation targets delineating lesions indicated by textual descriptions. Aligning visual and textual cues is challenging due to their distinct data properties. Inspired by large-scale pre-trained vision-language models, we propose CausalCLIPSeg, an end-to-end framework for referring medical image segmentation that leverages CLIP. Despite not being trained on medical data, we enforce CLIP’s rich semantic space onto the medical domain by a tailored cross-modal decoding method to achieve text-to-pixel alignment. Furthermore, to mitigate confounding bias that may cause the model to learn spurious correlations instead of meaningful causal relationships, CausalCLIPSeg introduces a causal intervention module which self-annotates confounders and excavates causal features from inputs for segmentation judgments. We also devise an adversarial min-max game to optimize causal features while penalizing confounding ones. Extensive experiments demonstrate the state-of-the-art performance of our proposed method. Code is available at https://github.com/WUTCM-Lab/CausalCLIPSeg. | CausalCLIPSeg: Unlocking CLIP’s Potential in Referring Medical Image Segmentation with Causal Intervention | [
"Chen, Yaxiong",
"Wei, Minghong",
"Zheng, Zixuan",
"Hu, Jingliang",
"Shi, Yilei",
"Xiong, Shengwu",
"Zhu, Xiao Xiang",
"Mou, Lichao"
] | Conference | [
"https://github.com/WUTCM-Lab/CausalCLIPSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 41 |
||
null | https://papers.miccai.org/miccai-2024/paper/0373_paper.pdf | @InProceedings{ Hay_Online_MICCAI2024,
author = { Hayoz, Michel and Hahne, Christopher and Kurmann, Thomas and Allan, Max and Beldi, Guido and Candinas, Daniel and Márquez-Neila, Pablo and Sznitman, Raphael },
title = { { Online 3D reconstruction and dense tracking in endoscopic videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | 3D scene reconstruction from stereo endoscopic video data is crucial for advancing surgical interventions. In this work, we present an online framework for real-time, dense 3D scene reconstruction and tracking, aimed at enhancing surgical scene understanding and assisting interventions. Our method dynamically extends a canonical scene representation using Gaussian splatting, while modeling tissue deformations through a sparse set of control points. We introduce an efficient online fitting algorithm that optimizes the scene parameters, enabling consistent tracking and accurate reconstruction. Through experiments on the StereoMIS dataset, we demonstrate the effectiveness of our approach, outperforming state-of-the-art tracking methods and achieving comparable performance to offline reconstruction techniques. Our work enables various downstream applications thus contributing to advancing the capabilities of surgical assistance systems. | Online 3D reconstruction and dense tracking in endoscopic videos | [
"Hayoz, Michel",
"Hahne, Christopher",
"Kurmann, Thomas",
"Allan, Max",
"Beldi, Guido",
"Candinas, Daniel",
"Márquez-Neila, Pablo",
"Sznitman, Raphael"
] | Conference | 2409.06037 | [
"https://github.com/mhayoz/online_endo_track"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 42 |
|
null | https://papers.miccai.org/miccai-2024/paper/2570_paper.pdf | @InProceedings{ Che_Accelerated_MICCAI2024,
author = { Chen, Qi and Xing, Xiaohan and Chen, Zhen and Xiong, Zhiwei },
title = { { Accelerated Multi-Contrast MRI Reconstruction via Frequency and Spatial Mutual Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | To accelerate Magnetic Resonance (MR) imaging procedures, Multi-Contrast MR Reconstruction (MCMR) has become a prevalent trend that utilizes an easily obtainable modality as an auxiliary to support high-quality reconstruction of the target modality with under-sampled k-space measurements. The exploration of global dependency and complementary information across different modalities is essential for MCMR. However, existing methods either struggle to capture global dependency due to the limited receptive field or suffer from quadratic computational complexity. To tackle this dilemma, we propose a novel Frequency and Spatial Mutual Learning Network (FSMNet), which efficiently explores global dependencies across different modalities. Specifically, the features for each modality are extracted by the Frequency-Spatial Feature Extraction (FSFE) module, featuring a frequency branch and a spatial branch. Benefiting from the global property of the Fourier transform, the frequency branch can efficiently capture global dependency with an image-size receptive field, while the spatial branch can extract local features. To exploit complementary information from the auxiliary modality, we propose a Cross-Modal Selective fusion (CMS-fusion) module that selectively incorporate the frequency and spatial features from the auxiliary modality to enhance the corresponding branch of the target modality. To further integrate the enhanced global features from the frequency branch and the enhanced local features from the spatial branch, we develop a Frequency-Spatial fusion (FS-fusion) module, resulting in a comprehensive feature representation for the target modality. Extensive experiments on the BraTS and fastMRI datasets demonstrate that the proposed FSMNet achieves state-of-the-art performance for the MCMR task with different acceleration factors. | Accelerated Multi-Contrast MRI Reconstruction via Frequency and Spatial Mutual Learning | [
"Chen, Qi",
"Xing, Xiaohan",
"Chen, Zhen",
"Xiong, Zhiwei"
] | Conference | 2409.14113 | [
"https://github.com/qic999/fsmnet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 43 |
|
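The frequency branch described in the record above relies on the fact that a point-wise operation in the Fourier domain mixes information from every pixel, giving an image-size receptive field at low cost. The block below is a minimal sketch of such a branch; the layer layout and channel handling are assumptions for illustration, not FSMNet's actual module.

```python
import torch
import torch.nn as nn

class FrequencyBranch(nn.Module):
    """Point-wise convolution applied to the real/imaginary spectrum of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.spectral_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")                # (B, C, H, W//2+1), complex
        feat = torch.cat([spec.real, spec.imag], dim=1)        # stack real/imag as channels
        feat = self.spectral_conv(feat)                        # global mixing with a 1x1 kernel
        real, imag = torch.chunk(feat, 2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")

# Toy usage on a small feature map.
out = FrequencyBranch(channels=4)(torch.randn(2, 4, 64, 64))
print(out.shape)   # torch.Size([2, 4, 64, 64])
```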
null | https://papers.miccai.org/miccai-2024/paper/2372_paper.pdf | @InProceedings{ Kon_MetaStain_MICCAI2024,
author = { Konwer, Aishik and Prasanna, Prateek },
title = { { MetaStain: Stain-generalizable Meta-learning for Cell Segmentation and Classification with Limited Exemplars } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Deep learning models excel when evaluated on test data that share similar attributes and/or distribution with the training data. However, their ability to generalize may suffer when there are discrepancies in distributions between the training and testing data i.e. domain shift. In this work, we utilize meta-learning to introduce MetaStain, a stain-generalizable representation learning framework that performs cell segmentation and classification in histopathology images. Owing to the designed episodical meta-learning paradigm, MetaStain can adapt to unseen stains and/or novel classes through finetuning even with limited annotated samples. We design a stain-aware triplet loss that clusters stain-agnostic class-specific features, as well as separates intra-stain features extracted from different classes. We also employ a consistency triplet loss to preserve the spatial correspondence between tissues under different stains. During test-time adaptation, a refined class weight generator module is optionally introduced if the unseen testing data also involves novel classes. MetaStain significantly outperforms state-of-the-art segmentation and classification methods on the multi-stain MIST dataset under various experimental settings. | MetaStain: Stain-generalizable Meta-learning for Cell Segmentation and Classification with Limited Exemplars | [
"Konwer, Aishik",
"Prasanna, Prateek"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 44 |
||
null | https://papers.miccai.org/miccai-2024/paper/4020_paper.pdf | @InProceedings{ Din_HRDecoder_MICCAI2024,
author = { Ding, Ziyuan and Liang, Yixiong and Kan, Shichao and Liu, Qing },
title = { { HRDecoder: High-Resolution Decoder Network for Fundus Image Lesion Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | High resolution is crucial for precise segmentation in fundus images, yet handling high-resolution inputs incurs considerable GPU memory costs, with diminishing performance gains as overhead increases. To address this issue while tackling the challenge of segmenting tiny objects, recent studies have explored local-global feature fusion methods. These methods preserve fine details using local regions and capture context information from downscaled global images. However, the necessity of multiple forward passes inevitably incurs significant computational overhead, greatly affecting inference speed. In this paper, we propose HRDecoder, a simple High-Resolution Decoder network for fundus image segmentation. It integrates a High-resolution Representation Learning (HRL) module to capture fine-grained local features and a High-resolution Feature Fusion (HFF) module to fuse multi-scale local-global feature maps. HRDecoder effectively improves the overall segmentation accuracy of fundus lesions while maintaining reasonable memory usage, computational overhead, and inference speed. Experimental results on the IDRID and DDR datasets demonstrate the effectiveness of our method. The code is available at https://github.com/CVIU-CSU/HRDecoder. | HRDecoder: High-Resolution Decoder Network for Fundus Image Lesion Segmentation | [
"Ding, Ziyuan",
"Liang, Yixiong",
"Kan, Shichao",
"Liu, Qing"
] | Conference | [
"https://github.com/CVIU-CSU/HRDecoder"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 45 |
||
null | https://papers.miccai.org/miccai-2024/paper/4063_paper.pdf | @InProceedings{ Wan_AHyperreflective_MICCAI2024,
author = { Wang, Xingguo and Ma, Yuhui and Guo, Xinyu and Zheng, Yalin and Zhang, Jiong and Liu, Yonghuai and Zhao, Yitian },
title = { { A Hyperreflective Foci Segmentation Network for OCT Images with Multi-dimensional Semantic Enhancement } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Diabetic macular edema (DME) is a leading cause of vision loss worldwide. Optical Coherence Tomography (OCT) serves as a widely accepted imaging tool for diagnosing DME due to its non-invasiveness and high resolution cross-sectional view. Clinical evaluation of Hyperreflective Foci (HRF) in OCT contributes to understanding the origins of DME and predicting disease progression or treatment efficacy. However, limited information and a significant imbalance between foreground and background in HRF present challenges for its precise segmentation in OCT images. In this study, we propose an attention mechanism-based MUlti-dimensional Semantic Enhancement Network (MUSE-Net) for HRF segmentation to address these challenges. Specifically, our MUSE-Net comprises attention-based multi-dimensional semantic information enhancement modules and class-imbalance-insensitive joint loss. The adaptive region guidance module softly allocates regional importance in slice, enriching the single-slice semantic information. The adjacent slice guidance module exploits the remote information across consecutive slices, enriching the multi-dimensional semantic information. Class-imbalance-insensitive joint loss combines pixel-level perception optimization with image-level considerations, alleviating the gradient dominance of the background during model training. Our experimental results demonstrate that MUSE-Net outperforms existing methods over two datasets respectively. To further promote the reproducible research, we made the code and these two datasets online available. | A Hyperreflective Foci Segmentation Network for OCT Images with Multi-dimensional Semantic Enhancement | [
"Wang, Xingguo",
"Ma, Yuhui",
"Guo, Xinyu",
"Zheng, Yalin",
"Zhang, Jiong",
"Liu, Yonghuai",
"Zhao, Yitian"
] | Conference | [
"https://github.com/iMED-Lab/MUSEnet-Pytorch"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 46 |
||
null | https://papers.miccai.org/miccai-2024/paper/4133_paper.pdf | @InProceedings{ Qua_CausalityInformed_MICCAI2024,
author = { Quan, Yuyang and Zhang, Chencheng and Guo, Rui and Qian, Xiaohua },
title = { { Causality-Informed Fusion Network for Automated Assessment of Parkinsonian Body Bradykinesia } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Body bradykinesia, a prominent clinical manifestation of Parkinson’s disease (PD), refers to a generalized slowness and diminished movement across the entire body. The assessment of body bradykinesia in the widely employed PD rating scale (MDS-UPDRS) is inherently subjective, relying on the examiner’s overall judgment rather than specific motor tasks. Therefore, we propose a graph convolutional network (GCN) scheme for automated video-based assessment of parkinsonian body bradykinesia. This scheme incorporates a causality-informed fusion network to enhance the fusion of causal components within gait and leg-agility motion features, achieving stable multi-class assessment of body bradykinesia. Specifically, an adaptive causal feature selection module is developed to extract pertinent features for body bradykinesia assessment, effectively mitigating the influence of non-causal features. Simultaneously, a causality-informed optimization strategy is designed to refine the causality feature selection module, improving its capacity to capture causal features. Our method achieves 61.07% accuracy for three-class assessment on a dataset of 876 clinical cases. Notably, our proposed scheme, utilizing only consumer-level cameras, holds significant promise for remote PD bradykinesia assessment. | Causality-Informed Fusion Network for Automated Assessment of Parkinsonian Body Bradykinesia | [
"Quan, Yuyang",
"Zhang, Chencheng",
"Guo, Rui",
"Qian, Xiaohua"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 47 |
||
null | https://papers.miccai.org/miccai-2024/paper/1063_paper.pdf | @InProceedings{ Men_Genomicsguided_MICCAI2024,
author = { Meng, Fangliangzi and Zhang, Hongrun and Yan, Ruodan and Chuai, Guohui and Li, Chao and Liu, Qi },
title = { { Genomics-guided Representation Learning for Pathologic Pan-cancer Tumor Microenvironment Subtype Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | The characterization of Tumor MicroEnvironment (TME) is challenging due to its complexity and heterogeneity. Relatively consistent TME characteristics embedded within highly specific tissue features, render them difficult to predict. The capability to accurately classify TME subtypes is of critical significance for clinical tumor diagnosis and precision medicine. Based on the observation that tumors with different origins share similar microenvironment patterns, we propose PathoTME, a genomics-guided representation learning framework employing Whole Slide Image (WSI) for pan-cancer TME subtypes prediction. Specifically, we utilize Siamese network to leverage genomic information as a regularization factor to assist WSI embeddings learning during a training phase. Additionally, we employ Domain Adversarial Neural Network (DANN) to mitigate the impact of tissue type variations. To eliminate domain bias, a dynamic WSI prompt is designed to further unleash the model’s capabilities. Our model achieves better performance than other state-of-the-art methods across 23 cancer types on TCGA dataset. The related code will be released. | Genomics-guided Representation Learning for Pathologic Pan-cancer Tumor Microenvironment Subtype Prediction | [
"Meng, Fangliangzi",
"Zhang, Hongrun",
"Yan, Ruodan",
"Chuai, Guohui",
"Li, Chao",
"Liu, Qi"
] | Conference | 2406.06517 | [
"https://github.com/Mengflz/PathoTME"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 48 |
|
null | https://papers.miccai.org/miccai-2024/paper/1802_paper.pdf | @InProceedings{ Fan_ADomain_MICCAI2024,
author = { Fan, Xiaoya and Xu, Pengzhi and Zhao, Qi and Hao, Chenru and Zhao, Zheng and Wang, Zhong },
title = { { A Domain Adaption Approach for EEG-based Automated Seizure Classification with Temporal-Spatial-Spectral Attention } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Electroencephalography (EEG)-based automated seizure classification can significantly ameliorate seizure diagnosis and treatment. However, the intra- and inter-subject variability in EEG data makes it a challenging task. In particular, a model trained on data from multiple subjects typically degenerates when applied to new subjects. In this study, we propose an attention-based deep convolutional neural network with domain adaptation to tackle these issues. The model is able to learn domain-invariant temporal-spatial-spectral (TSS) features by jointly optimizing a feature extractor, a seizure classifier and a domain discriminator. The feature extractor extracts multi-level TSS features by an attention module. The domain discriminator is designed to determine which domain, i.e., source or target, the features come from. With a gradient reversal layer, it allows extraction of domain-invariant features. Thus, the classifier is able to give accurate predictions for unseen subjects by leveraging knowledge learned from the source domain. We evaluated our approach using the Temple University Hospital EEG Seizure Corpus (TUSZ) v1.5.2. Results demonstrate that the proposed approach achieves state-of-the-art performance on seizure classification. The code is available at https://github.com/Dondlut/EEG_DOMAIN. | A Domain Adaption Approach for EEG-based Automated Seizure Classification with Temporal-Spatial-Spectral Attention | [
"Fan, Xiaoya",
"Xu, Pengzhi",
"Zhao, Qi",
"Hao, Chenru",
"Zhao, Zheng",
"Wang, Zhong"
] | Conference | [
"https://github.com/Dondlut/EEG_DOMAIN"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 49 |
||
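The gradient reversal layer mentioned in the record above is a standard DANN component and is easy to state precisely: identity in the forward pass, gradient scaled by a negative factor in the backward pass. The sketch below is the generic construction; the lambda value and the small domain discriminator are assumptions, not the paper's configuration.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity forward; multiplies the incoming gradient by -lambda on the way back."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # no gradient w.r.t. lam

def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lam)

# Toy usage: the domain discriminator sees the features unchanged, but the
# feature extractor receives reversed gradients, pushing it toward
# domain-invariant (subject-invariant) representations.
feats = torch.randn(4, 32, requires_grad=True)        # output of the feature extractor
domain_head = torch.nn.Linear(32, 2)                  # source vs. target classifier
domain_logits = domain_head(grad_reverse(feats, lam=0.5))
domain_logits.sum().backward()                        # feats.grad now holds -0.5 * dL/dfeats
```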
null | https://papers.miccai.org/miccai-2024/paper/0736_paper.pdf | @InProceedings{ Li_SelfsupervisedDenoising_MICCAI2024,
author = { Li, Zhenghong and Ren, Jiaxiang and Zou, Zhilin and Garigapati, Kalyan and Du, Congwu and Pan, Yingtian and Ling, Haibin },
title = { { Self-supervised Denoising and Bulk Motion Artifact Removal of 3D Optical Coherence Tomography Angiography of Awake Brain } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Denoising of 3D Optical Coherence Tomography Angiography (OCTA) for awake brain microvasculature is challenging. An OCTA volume is scanned slice by slice, with each slice (named B-scan) derived from dynamic changes in successively acquired OCT images. A B-scan of an awake brain often suffers from complex noise and Bulk Motion Artifacts (BMA), severely degrading image quality. Also, acquiring clean B-scans for training is difficult. Fortunately, we observe that, the slice-wise imaging procedure makes the noises mostly independent across B-scans, while preserves the continuity of vessel (including capillaries) signals across B-scans. Thus inspired, we propose a novel blind-slice self-supervised learning method to denoise 3D brain OCTA volumes slice by slice. For each B-scan slice, named center B-scan, we mask it entirely black and train the network to recover the original center B-scan using its neighboring B-scans. To enhance the BMA removal performance, we adaptively select only BMA-free center B-scans for model training. We further propose two novel refinement methods: (1) a non-local block to enhance vessel continuity and (2) a weighted loss to improve vascular contrast. To the best of our knowledge, this is the first self-supervised 3D OCTA denoising method that effectively reduces both complex noise and BMA while preserving capillary signals in brain OCTA volumes. | Self-supervised Denoising and Bulk Motion Artifact Removal of 3D Optical Coherence Tomography Angiography of Awake Brain | [
"Li, Zhenghong",
"Ren, Jiaxiang",
"Zou, Zhilin",
"Garigapati, Kalyan",
"Du, Congwu",
"Pan, Yingtian",
"Ling, Haibin"
] | Conference | [
"https://github.com/ZhenghLi/SOAD"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 50 |
||
null | https://papers.miccai.org/miccai-2024/paper/0750_paper.pdf | @InProceedings{ Jin_Diff3Dformer_MICCAI2024,
author = { Jin, Zihao and Fang, Yingying and Huang, Jiahao and Xu, Caiwen and Walsh, Simon and Yang, Guang },
title = { { Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT Classification with Transformer Networks } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | The manifestation of symptoms associated with lung diseases can vary at different depths for individual patients, highlighting the significance of 3D information in CT scans for medical image classification. While Vision Transformers have shown superior performance over convolutional neural networks in image classification tasks, their effectiveness is often demonstrated on sufficiently large 2D datasets and they easily encounter overfitting issues on small medical image datasets. To address this limitation, we propose a Diffusion-based 3D Vision Transformer (Diff3Dformer), which utilizes the latent space of the Diffusion model to form the slice sequence for 3D analysis and incorporates clustering attention into ViT to aggregate repetitive information within 3D CT scans, thereby harnessing the power of the advanced transformer in 3D classification tasks on small datasets. Our method exhibits improved performance on two different scales of small datasets of 3D lung CT scans, surpassing state-of-the-art 3D methods and other transformer-based approaches that emerged during the COVID-19 pandemic, demonstrating its robust and superior performance across different scales of data. Experimental results underscore the superiority of our proposed method, indicating its potential for enhancing medical image classification tasks in real-world scenarios.
The code will be publicly available at https://github.com/ayanglab/Diff3Dformer. | Diff3Dformer: Leveraging Slice Sequence Diffusion for Enhanced 3D CT Classification with Transformer Networks | [
"Jin, Zihao",
"Fang, Yingying",
"Huang, Jiahao",
"Xu, Caiwen",
"Walsh, Simon",
"Yang, Guang"
] | Conference | 2406.17173 | [
"https://github.com/ayanglab/Diff3Dformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 51 |
|
null | https://papers.miccai.org/miccai-2024/paper/2954_paper.pdf | @InProceedings{ Van_Towards_MICCAI2024,
author = { Vanneste, Félix and Martin, Claire and Goury, Olivier and Courtecuisse, Hadrien and Pernod, Erik and Cotin, Stéphane and Duriez, Christian },
title = { { Towards realistic needle insertion training simulator using partitioned model order reduction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Needle-based intervention is part of minimally invasive surgery
and has the benefit of allowing deep internal organ structures to be reached while limiting trauma. However, reaching good performance requires a skilled practitioner. This paper presents a needle-insertion training simulator for the liver based on the finite element method. One of the main challenges in developing realistic training simulators is to use fine meshes to represent organ deformations accurately while meeting the real-time computation constraint needed for interactivity of the simulator. This is especially true for accurately simulating the region of the organs where the needle is inserted. In this paper, we propose the use of model order reduction to allow drastic gains in performance. To accurately simulate the liver, which undergoes highly nonlinear local deformation along the needle-insertion path, we propose a new partition method for model order reduction: applied to the liver, we can perform FEM computations on a high-resolution mesh on the part in interaction with the needle while having model reduction elsewhere for greater computational performance. We demonstrate the combined methods with an interactive simulation of percutaneous needle-based interventions for tumor biopsy/ablation using patient-based anatomy. | Towards realistic needle insertion training simulator using partitioned model order reduction | [
"Vanneste, Félix",
"Martin, Claire",
"Goury, Olivier",
"Courtecuisse, Hadrien",
"Pernod, Erik",
"Cotin, Stéphane",
"Duriez, Christian"
] | Conference | [
"https://github.com/SofaDefrost/ModelOrderReduction"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 52 |
||
null | https://papers.miccai.org/miccai-2024/paper/3110_paper.pdf | @InProceedings{ P._Domain_MICCAI2024,
author = { P. García-de-la-Puente, Natalia and López-Pérez, Miguel and Launet, Laëtitia and Naranjo, Valery },
title = { { Domain Adaptation for Unsupervised Cancer Detection: An application for skin Whole Slides Images from an interhospital dataset } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Skin cancer diagnosis relies on assessing the histopathological appearance of skin cells and the patterns of epithelial skin tissue architecture. Despite recent advancements in deep learning for automating skin cancer detection, two main challenges persist for clinical deployment. (1) Deep learning models only recognize the classes they were trained on, giving arbitrary predictions for rare or unknown diseases. (2) Generalization across healthcare institutions is difficult, as variations arising from diverse scanners and staining procedures increase the task complexity.
We propose a novel Domain Adaptation method for Unsupervised cancer Detection (DAUD) using whole slide images to address these concerns. Our method consists of an autoencoder-based model with stochastic latent variables that reflect each institution’s features.
We have validated DAUD in a real-world dataset from two different hospitals. In addition, we utilized an external dataset to evaluate the capability for out-of-distribution detection. DAUD demonstrates comparable or superior performance to the state-of-the-art methods for anomaly detection. | Domain Adaptation for Unsupervised Cancer Detection: An application for skin Whole Slides Images from an interhospital dataset | [
"P. García-de-la-Puente, Natalia",
"López-Pérez, Miguel",
"Launet, Laëtitia",
"Naranjo, Valery"
] | Conference | [
"https://github.com/cvblab/DAUD-MICCAI2024"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 53 |
||
null | https://papers.miccai.org/miccai-2024/paper/1443_paper.pdf | @InProceedings{ Liu_MultiModal_MICCAI2024,
author = { Liu, Shuting and Zhang, Baochang and Zimmer, Veronika A. and Rueckert, Daniel },
title = { { Multi-Modal Data Fusion with Missing Data Handling for Mild Cognitive Impairment Progression Prediction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Predicting Mild Cognitive Impairment (MCI) progression, an early stage of Alzheimer’s Disease (AD), is crucial but challenging due to the disease’s complexity. Integrating diverse data sources like clinical assessments and neuroimaging poses hurdles, particularly with data preprocessing and handling missing data. When data is missing, it can introduce uncertainty and reduce the effectiveness of statistical models. Moreover, ignoring missing data or handling it improperly can distort results and compromise the validity of research findings. In this paper, we introduce a novel fusion model considering missing data handling for early diagnosis of AD. This includes a novel image-to-graphical representation module that considers the heterogeneity of brain anatomy, and a missing data compensation module. In the image-to-graphical representation module, we construct a subject-specific graph representing the connectivity among 100 brain regions derived from structural MRI, incorporating the feature maps extracted by segmentation network into the node features. We also propose a novel multi-head dynamic graph convolution network to further extract graphical features. In the missing data compensation module, a self-supervised model is designed to compensate for partially missing information, alongside a latent-space transfer model tailored for cases where tabular data is completely missing. Experimental results on ADNI dataset with 696 subjects demonstrate the superiority of our proposed method over existing state-of-the-art methods. Our method achieves a balanced accuracy of 92.79% on clinical data with partially missing cases and an impressive 92.35% even without clinical data input. | Multi-Modal Data Fusion with Missing Data Handling for Mild Cognitive Impairment Progression Prediction | [
"Liu, Shuting",
"Zhang, Baochang",
"Zimmer, Veronika A.",
"Rueckert, Daniel"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 54 |
||
null | https://papers.miccai.org/miccai-2024/paper/2150_paper.pdf | @InProceedings{ Liu_Medical_MICCAI2024,
author = { Liu, Yishu and Wu, Zhongqi and Chen, Bingzhi and Zhang, Zheng and Lu, Guangming },
title = { { Medical Cross-Modal Prompt Hashing with Robust Noisy Correspondence Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | In the realm of medical data analysis, medical cross-modal hashing (Med-CMH) has emerged as a promising approach to facilitate fast similarity search across multi-modal medical data. However, due to human subjective deviation or semantic ambiguity, the presence of noisy correspondence across medical modalities exacerbates the challenge of the heterogeneous gap in cross-modal learning. To eliminate clinical noisy correspondence, this paper proposes a novel medical cross-modal prompt hashing (MCPH) that incorporates multi-modal prompt optimization with noise-robust contrastive constraint for facilitating noisy correspondence issues. Benefitting from the robust reasoning capabilities inherent in medical large-scale models, we design a visual-textual prompt learning paradigm to collaboratively enhance alignment and contextual awareness between the medical visual and textual representations. By providing targeted prompts and cues from the medical large language model (LLM), i.e., CheXagent, multi-modal prompt learning facilitates the extraction of relevant features and associations, empowering the model with actionable insights and decision support. Furthermore, a noise-robust contrastive learning strategy is dedicated to dynamically adjusting the intensity of contrastive learning across modalities, thereby enhancing the contrast strength of positive pairs while mitigating the influence of noisy correspondence pairs. Extensive experiments on multiple benchmark datasets demonstrate that our MCPH surpasses the state-of-the-art baselines. | Medical Cross-Modal Prompt Hashing with Robust Noisy Correspondence Learning | [
"Liu, Yishu",
"Wu, Zhongqi",
"Chen, Bingzhi",
"Zhang, Zheng",
"Lu, Guangming"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 55 |
||
null | https://papers.miccai.org/miccai-2024/paper/0746_paper.pdf | @InProceedings{ Wan_Correlationadaptive_MICCAI2024,
author = { Wan, Peng and Zhang, Shukang and Shao, Wei and Zhao, Junyong and Yang, Yinkai and Kong, Wentao and Xue, Haiyan and Zhang, Daoqiang },
title = { { Correlation-adaptive Multi-view CEUS Fusion for Liver Cancer Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | Dual-screen contrast-enhanced ultrasound (CEUS) has been the first-line imaging technique for the differential diagnosis of primary liver cancer (PLC), since it provides imaging of tumor micro-circulation perfusion as well as the anatomic features of the B-mode ultrasound (BUS) view. Although previous multi-view learning methods have shown their potential to boost diagnostic efficacy, correlation variances of different views among subjects are largely underestimated, arising from the varying imaging quality of different views and the presence or absence of valuable findings. In this paper, we propose a correlation-adaptive multi-view fusion method (CAMVF) for dual-screen CEUS-based PLC diagnosis. Towards a reliable fusion of multi-view CEUS findings (i.e., BUS, CEUS and its parametric imaging), our method dynamically assesses the correlation of each view based on the prediction confidence itself and prediction consistency among views. Specifically, we first obtain the confidence of each view with evidence-based uncertainty estimation, then divide them into credible and incredible views based on cross-view consistency, and finally ensemble views with weights adaptive to their credibility. In this retrospective study, we collected CEUS imaging from 238 liver cancer patients in total, and our method achieves superior diagnostic accuracy and specificity of 88.33% and 92.48%, respectively, demonstrating its efficacy for PLC differential diagnosis. Our code is available at https://github.com/shukangzh/CAMVF. | Correlation-adaptive Multi-view CEUS Fusion for Liver Cancer Diagnosis | [
"Wan, Peng",
"Zhang, Shukang",
"Shao, Wei",
"Zhao, Junyong",
"Yang, Yinkai",
"Kong, Wentao",
"Xue, Haiyan",
"Zhang, Daoqiang"
] | Conference | [
"https://github.com/shukangzh/CAMVF"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 56 |
||
null | https://papers.miccai.org/miccai-2024/paper/2948_paper.pdf | @InProceedings{ Che_Striving_MICCAI2024,
author = { Chen, Yaxiong and Wang, Yujie and Zheng, Zixuan and Hu, Jingliang and Shi, Yilei and Xiong, Shengwu and Zhu, Xiao Xiang and Mou, Lichao },
title = { { Striving for Simplicity: Simple Yet Effective Prior-Aware Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Medical ultrasound imaging is ubiquitous, but manual analysis struggles to keep pace. Automated segmentation can help but requires large labeled datasets, which are scarce. Semi-supervised learning leveraging both unlabeled and limited labeled data is a promising approach. State-of-the-art methods use consistency regularization or pseudo-labeling but grow increasingly complex. Without sufficient labels, these models often latch onto artifacts or allow anatomically implausible segmentations. In this paper, we present a simple yet effective pseudo-labeling method with an adversarially learned shape prior to regularize segmentations. Specifically, we devise an encoder-twin-decoder network where the shape prior acts as an implicit shape model, penalizing anatomically implausible but not ground-truth-deviating predictions. Without bells and whistles, our simple approach achieves state-of-the-art performance on two benchmarks under different partition protocols. We provide a strong baseline for future semi-supervised medical image segmentation. Code is available at https://github.com/WUTCM-Lab/Shape-Prior-Semi-Seg. | Striving for Simplicity: Simple Yet Effective Prior-Aware Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation | [
"Chen, Yaxiong",
"Wang, Yujie",
"Zheng, Zixuan",
"Hu, Jingliang",
"Shi, Yilei",
"Xiong, Shengwu",
"Zhu, Xiao Xiang",
"Mou, Lichao"
] | Conference | [
"https://github.com/WUTCM-Lab/Shape-Prior-Semi-Seg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 57 |
||
null | https://papers.miccai.org/miccai-2024/paper/2508_paper.pdf | @InProceedings{ Kwo_AnatomicallyGuided_MICCAI2024,
author = { Kwon, Junmo and Seo, Sang Won and Park, Hyunjin },
title = { { Anatomically-Guided Segmentation of Cerebral Microbleeds in T1-weighted and T2*-weighted MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Cerebral microbleeds (CMBs) are defined as relatively small blood depositions in the brain that serve as severity indicators of small vessel diseases, and thus accurate quantification of CMBs is clinically useful. However, manual annotation of CMBs is an extreme burden for clinicians due to their small size and the potential risk of misclassification. Moreover, the extreme class imbalance inherent in CMB segmentation tasks presents a significant challenge for training deep neural networks. In this paper, we propose to enhance CMB segmentation performance by introducing a proxy task of segmentation of supratentorial and infratentorial regions. This proxy task could leverage clinical prior knowledge in the identification of CMBs. We evaluated the proposed model using an in-house dataset comprising 335 subjects with 582 longitudinal cases and an external public dataset consisting of 72 cases. Our method performed better than other methods that did not consider proxy tasks. Quantitative results indicate that the proxy task is robust on unseen datasets and thus effective in reducing false positives. Our code is available at https://github.com/junmokwon/AnatGuidedCMBSeg. | Anatomically-Guided Segmentation of Cerebral Microbleeds in T1-weighted and T2*-weighted MRI | [
"Kwon, Junmo",
"Seo, Sang Won",
"Park, Hyunjin"
] | Conference | [
"https://github.com/junmokwon/AnatGuidedCMBSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 58 |
||
null | https://papers.miccai.org/miccai-2024/paper/1351_paper.pdf | @InProceedings{ Wan_Joint_MICCAI2024,
author = { Wang, Zhicheng and Li, Jiacheng and Chen, Yinda and Shou, Jiateng and Deng, Shiyu and Huang, Wei and Xiong, Zhiwei },
title = { { Joint EM Image Denoising and Segmentation with Instance-aware Interaction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | In large-scale electron microscopy (EM), the demand for rapid imaging often results in significant amounts of imaging noise, which considerably compromises segmentation accuracy. While conventional approaches typically incorporate denoising as a preliminary stage, there is limited exploration into the potential synergies between denoising and segmentation processes. To bridge this gap, we propose an instance-aware interaction framework to tackle EM image denoising and segmentation simultaneously, aiming at mutual enhancement between the two tasks. Specifically, our framework comprises three components: a denoising network, a segmentation network, and a fusion network facilitating feature-level interaction. Firstly, the denoising network mitigates noise degradation. Subsequently, the segmentation network learns an instance-level affinity prior, encoding vital spatial structural information. Finally, in the fusion network, we propose a novel Instance-aware Embedding Module (IEM) to utilize vital spatial structure information from segmentation features for denoising. IEM enables interaction between the two tasks within a unified framework, which also facilitates implicit feedback from denoising for segmentation with a joint training mechanism. Through extensive experiments across multiple datasets, our framework demonstrates substantial performance improvements over existing solutions. Moreover, our framework exhibits strong generalization capabilities across different network architectures. Code is available at https://github.com/zhichengwang-tri/EM-DenoiSeg. | Joint EM Image Denoising and Segmentation with Instance-aware Interaction | [
"Wang, Zhicheng",
"Li, Jiacheng",
"Chen, Yinda",
"Shou, Jiateng",
"Deng, Shiyu",
"Huang, Wei",
"Xiong, Zhiwei"
] | Conference | [
"https://github.com/zhichengwang-tri/EM-DenoiSeg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 59 |
||
null | https://papers.miccai.org/miccai-2024/paper/2778_paper.pdf | @InProceedings{ Pen_GBT_MICCAI2024,
author = { Peng, Zhihao and He, Zhibin and Jiang, Yu and Wang, Pengyu and Yuan, Yixuan },
title = { { GBT: Geometric-oriented Brain Transformer for Autism Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Human brains are typically modeled as networks of Regions of Interest (ROI) to comprehend brain functional Magnetic Resonance Imaging (fMRI) connectome for Autism diagnosis. Recently, various deep neural network-based models have been developed to learn the representation of ROIs, achieving impressive performance improvements. However, they (i) heavily rely on increasingly complex network architecture with an obscure learning mechanism, or (ii) solely utilize the cross-entropy loss to supervise the training process, leading to sub-optimal performance. To this end, we propose a simple and effective Geometric-oriented Brain Transformer (GBT) with the Attention Weight Matrix Approximation (AWMA)-based transformer module and the geometric-oriented representation learning module for brain fMRI connectome analysis. Specifically, the AWMA-based transformer module selectively removes the components of the attention weight matrix with smaller singular values, aiming to learn the most relevant and representative graph representation. The geometric-oriented representation learning module imposes low-rank intra-class compactness and high-rank inter-class diversity constraints on learned representations to promote them to be discriminative. Experimental results on the ABIDE dataset validate that our method GBT consistently outperforms state-of-the-art approaches. The code is available at https://github.com/CUHK-AIM-Group/GBT. | GBT: Geometric-oriented Brain Transformer for Autism Diagnosis | [
"Peng, Zhihao",
"He, Zhibin",
"Jiang, Yu",
"Wang, Pengyu",
"Yuan, Yixuan"
] | Conference | [
"https://github.com/CUHK-AIM-Group/GBT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 60 |
||
null | https://papers.miccai.org/miccai-2024/paper/3878_paper.pdf | @InProceedings{ Bae_HoGNet_MICCAI2024,
author = { Bae, Joseph and Kapse, Saarthak and Zhou, Lei and Mani, Kartik and Prasanna, Prateek },
title = { { HoG-Net: Hierarchical Multi-Organ Graph Network for Head and Neck Cancer Recurrence Prediction from CT Images } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15005 },
month = {October},
pages = { pending },
} | In many diseases including head and neck squamous cell carcinoma (HNSCC), pathologic processes are not limited to a single region of interest, but instead encompass surrounding anatomical structures and organs outside of the tumor. To model information from organs-at-risk (OARs) as well as from the primary tumor, we present a Hierarchical Multi-Organ Graph Network (HoG-Net) for medical image modeling which we leverage to predict locoregional tumor recurrence (LR) for HNSCC patients. HoG-Net is able to model local features from individual OARs and then constructs a holistic global representation of interactions between features from multiple OARs in a single image. HoG-Net’s prediction of LR for HNSCC patients is evaluated on the largest dataset studied to date, comprising N=2,741 patients from six institutions, and outperforms several previously published baselines. Further, HoG-Net allows insights into which OARs are significant in predicting LR, providing specific OAR-level interpretability rather than the coarse patch-level interpretability provided by other methods. | HoG-Net: Hierarchical Multi-Organ Graph Network for Head and Neck Cancer Recurrence Prediction from CT Images | [
"Bae, Joseph",
"Kapse, Saarthak",
"Zhou, Lei",
"Mani, Kartik",
"Prasanna, Prateek"
] | Conference | [
"https://github.com/bmi-imaginelab/HoGNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 61 |
||
null | https://papers.miccai.org/miccai-2024/paper/1451_paper.pdf | @InProceedings{ Den_HATs_MICCAI2024,
author = { Deng, Ruining and Liu, Quan and Cui, Can and Yao, Tianyuan and Xiong, Juming and Bao, Shunxing and Li, Hao and Yin, Mengmeng and Wang, Yu and Zhao, Shilin and Tang, Yucheng and Yang, Haichun and Huo, Yuankai },
title = { { HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy. For instance, the intricate organization in kidney pathology spans multiple layers, from regions like the cortex and medulla to functional units such as glomeruli, tubules, and vessels, down to various cell types. In this paper, we propose a novel Hierarchical Adaptive Taxonomy Segmentation (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights. Our approach entails (1) the innovative HATS technique which translates spatial relationships among 15 distinct object classes into a versatile “plug-and-play” loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, (3) the adoption of the latest AI foundation model (EfficientSAM) as a feature extraction tool to boost the model’s adaptability, yet eliminating the need for manual prompt generation in conventional segment anything model (SAM). Experimental findings demonstrate that the HATS method offers an efficient and effective strategy for integrating clinical insights and imaging precedents into a unified segmentation model across more than 15 categories. The official implementation is publicly available at https://github.com/hrlblab/HATs. | HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis | [
"Deng, Ruining",
"Liu, Quan",
"Cui, Can",
"Yao, Tianyuan",
"Xiong, Juming",
"Bao, Shunxing",
"Li, Hao",
"Yin, Mengmeng",
"Wang, Yu",
"Zhao, Shilin",
"Tang, Yucheng",
"Yang, Haichun",
"Huo, Yuankai"
] | Conference | 2407.00596 | [
"https://github.com/hrlblab/HATs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 62 |
|
null | https://papers.miccai.org/miccai-2024/paper/3323_paper.pdf | @InProceedings{ Bau_Deep_MICCAI2024,
author = { Baumann, Alexander and Ayala, Leonardo and Studier-Fischer, Alexander and Sellner, Jan and Özdemir, Berkin and Kowalewski, Karl-Friedrich and Ilic, Slobodan and Seidlitz, Silvia and Maier-Hein, Lena },
title = { { Deep intra-operative illumination calibration of hyperspectral cameras } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality with various potential surgical applications. Currently available cameras, however, suffer from poor integration into the clinical workflow because they require the lights to be switched off, or the camera to be manually recalibrated as soon as lighting conditions change. Given this critical bottleneck, the contribution of this paper is threefold: (1) We demonstrate that dynamically changing lighting conditions in the operating room dramatically affect the performance of HSI applications, namely physiological parameter estimation, and surgical scene segmentation. (2) We propose a novel learning-based approach to automatically recalibrating hyperspectral images during surgery and show that it is sufficiently accurate to replace the tedious process of white reference-based recalibration. (3) Based on a total of 742 HSI cubes from a phantom, porcine models, and rats we show that our recalibration method not only outperforms previously proposed methods, but also generalizes across species, lighting conditions, and image processing tasks. Due to its simple workflow integration as well as high accuracy, speed, and generalization capabilities, our method could evolve as a central component in clinical surgical HSI. | Deep intra-operative illumination calibration of hyperspectral cameras | [
"Baumann, Alexander",
"Ayala, Leonardo",
"Studier-Fischer, Alexander",
"Sellner, Jan",
"Özdemir, Berkin",
"Kowalewski, Karl-Friedrich",
"Ilic, Slobodan",
"Seidlitz, Silvia",
"Maier-Hein, Lena"
] | Conference | 2409.07094 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 63 |
|
null | https://papers.miccai.org/miccai-2024/paper/3549_paper.pdf | @InProceedings{ Wal_Multisequence_MICCAI2024,
author = { Walsh, Ricky and Gaubert, Malo and Meurée, Cédric and Hussein, Burhan Rashid and Kerbrat, Anne and Casey, Romain and Combès, Benoit and Galassi, Francesca },
title = { { Multi-sequence learning for multiple sclerosis lesion segmentation in spinal cord MRI } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Automated tools developed to detect multiple sclerosis lesions in spinal cord MRI have thus far been based on processing single MR sequences in a deep learning model. This study is the first to explore a multi-sequence approach to this task and we propose a method to address inherent issues in multi-sequence spinal cord data, i.e., differing fields of view, inter-sequence alignment and incomplete sequence data for training and inference. In particular, we investigate a simple missing-modality method of replacing missing features with the mean over the available sequences. This approach leads to better segmentation results when processing a single sequence at inference than a model trained directly on that sequence, and our experiments provide valuable insights into the mechanism underlying this surprising result. In particular, we demonstrate that both the encoder and decoder benefit from the variability introduced in the multi-sequence setting. Additionally, we propose a latent feature augmentation scheme to reproduce this variability in a single-sequence setting, resulting in similar improvements over the single-sequence baseline. | Multi-sequence learning for multiple sclerosis lesion segmentation in spinal cord MRI | [
"Walsh, Ricky",
"Gaubert, Malo",
"Meurée, Cédric",
"Hussein, Burhan Rashid",
"Kerbrat, Anne",
"Casey, Romain",
"Combès, Benoit",
"Galassi, Francesca"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 64 |
||
null | https://papers.miccai.org/miccai-2024/paper/1172_paper.pdf | @InProceedings{ Lv_Aligning_MICCAI2024,
author = { Lv, Yanan and Jia, Haoze and Chen, Xi and Yan, Haiyang and Han, Hua },
title = { { Aligning and Restoring Imperfect ssEM images for Continuity Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The serial section electron microscopy reconstruction method is commonly used in large volume reconstruction of biological tissue, but the inevitable section damage brings challenges to volume reconstruction. The section damage may result in imperfect section alignment and affect the subsequent neuron segmentation and data analysis. This paper proposes an aligning and restoring method for imperfect sections, which contributes to promoting the continuity reconstruction of biological tissues. To align imperfect sections, we improve the optical flow network to address the difficulties faced by traditional optical flow networks in handling issues related to discontinuous deformations and large displacements in the alignment of imperfect sections. Based on the deformations in different regions, the Guided Position of each coordinate point on the section is estimated to generate the Guided Field of the imperfect section. This Guided field aids the optical flow network in better handling the complex deformation and large displacement associated with the damaged area during alignment. Subsequently, the damaged region is predicted and seamlessly integrated into the aligned imperfect section images, ultimately obtaining aligned damage-free section images. Experimental results demonstrate that the proposed method effectively resolves the alignment and restoration issues of imperfect sections, achieving better alignment accuracy than existing methods and significantly improving neuron segmentation accuracy. Our code is available at https://github.com/lvyanan525/Aligning-and-Restoring-Imperfect-ssEM-images. | Aligning and Restoring Imperfect ssEM images for Continuity Reconstruction | [
"Lv, Yanan",
"Jia, Haoze",
"Chen, Xi",
"Yan, Haiyang",
"Han, Hua"
] | Conference | [
"https://github.com/lvyanan525/Aligning-and-Restoring-Imperfect-ssEM-images"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 65 |
||
null | https://papers.miccai.org/miccai-2024/paper/1033_paper.pdf | @InProceedings{ Kim_Semisupervised_MICCAI2024,
author = { Kim, Eunjin and Kwon, Gitaek and Kim, Jaeyoung and Park, Hyunjin },
title = { { Semi-supervised Segmentation through Rival Networks Collaboration with Saliency Map in Diabetic Retinopathy } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Automatic segmentation of diabetic retinopathy (DR) lesions in retinal images has a translational impact. However, collecting pixel-level annotations for supervised learning is labor-intensive. Thus, semi-supervised learning (SSL) methods tapping into the abundance of unlabeled images have been widely accepted. Still, a blind application of SSL is problematic due to the confirmation bias stemming from unreliable pseudo masks and class imbalance. To address these concerns, we propose a Rival Networks Collaboration with Saliency Map (RiCo) for multi-lesion segmentation in retinal images for DR. From two competing networks, we declare a victor network based on Dice coefficient onto which the defeated network is aligned when exploiting unlabeled images. Recognizing that this competition might overlook small lesions, we equip rival networks with distinct weight systems for imbalanced and underperforming classes. The victor network dynamically guides the defeated network by complementing its weaknesses and mimicking the victor’s strengths. This process fosters effective collaborative growth through meaningful knowledge exchange. Furthermore, we incorporate a saliency map, highlighting color-striking structures, into consistency loss to significantly enhance alignment in structural and critical areas for retinal images. This approach improves reliability and stability by minimizing the influence of unreliable areas of the pseudo mask. A comprehensive comparison with state-of-the-art SSL methods demonstrates our method’s superior performance on two datasets (IDRiD and e-ophtha). Our code is available at https://github.com/eunjinkim97/SSL_DRlesion. | Semi-supervised Segmentation through Rival Networks Collaboration with Saliency Map in Diabetic Retinopathy | [
"Kim, Eunjin",
"Kwon, Gitaek",
"Kim, Jaeyoung",
"Park, Hyunjin"
] | Conference | [
"https://github.com/eunjinkim97/SSL_DRlesion"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 66 |
||
null | https://papers.miccai.org/miccai-2024/paper/3447_paper.pdf | @InProceedings{ Zor_EnhancedquickDWI_MICCAI2024,
author = { Zormpas-Petridis, Konstantinos and Candito, Antonio and Messiou, Christina and Koh, Dow-Mu and Blackledge, Matthew D. },
title = { { Enhanced-quickDWI: Achieving equivalent clinical quality by denoising heavily sub-sampled diffusion-weighted imaging data } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Whole-body diffusion-weighted imaging (DWI) is a sensitive tool for assessing the spread of metastatic bone malignancies. It offers voxel-wise calculation of apparent diffusion coefficient (ADC) which correlates with tissue cellularity, providing a potential imaging biomarker for tumour response assessment. However, DWI is an inherently noisy technique requiring many signal averages over multiple b-values, leading to times of up to 30 minutes for a whole-body exam. We present a novel neural network implicitly designed to provide high-quality images from heavily sub-sampled diffusion data (only 1 signal average) which allow whole-body acquisitions of ~5 minutes. We demonstrate that our network can achieve equivalent quality to the clinical b-value and ADC images in a radiological multi-reader study of 100 patients for whole-body and abdomen-pelvis data. We also achieved good agreement to the quantitative values of clinical images within multi-lesion segmentations in 16 patients compared to a previous approach. | Enhanced-quickDWI: Achieving equivalent clinical quality by denoising heavily sub-sampled diffusion-weighted imaging data | [
"Zormpas-Petridis, Konstantinos",
"Candito, Antonio",
"Messiou, Christina",
"Koh, Dow-Mu",
"Blackledge, Matthew D."
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 67 |
||
null | https://papers.miccai.org/miccai-2024/paper/1711_paper.pdf | @InProceedings{ Dai_RIPAV_MICCAI2024,
author = { Dai, Wei and Yao, Yinghao and Kong, Hengte and Chen, Zhen Ji and Wang, Sheng and Bai, Qingshi and Sun, Haojun and Yang, Yongxin and Su, Jianzhong },
title = { { RIP-AV: Joint Representative Instance Pre-training with Context Aware Network for Retinal Artery/Vein Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Accurate deep learning-based segmentation of retinal arteries and veins (A/V) enables improved diagnosis, monitoring, and management of ocular fundus diseases and systemic diseases. However, existing resized and patch-based algorithms face challenges with redundancy, overlooking thin vessels, and underperforming in low-contrast edge areas of the retinal images, due to imbalanced background-to-A/V ratios and limited contexts. Here, we have developed a novel deep learning framework for retinal A/V segmentation, named RIP-AV, which integrates a Representative Instance Pre-training (RIP) task with a context-aware network for retinal A/V segmentation for the first time. Initially, we develop a direct yet effective algorithm for vascular patch-pair selection (PPS) and then introduce a RIP task, formulated as a multi-label problem, aiming at enhancing the network’s capability to learn latent arteriovenous features from diverse spatial locations across vascular patches. Subsequently, in the training phase, we introduce two novel modules: Patch Context Fusion (PCF) module and Distance Aware (DA) module. They are designed to improve the discriminability and continuity of thin vessels, especially in low-contrast edge areas, by leveraging the relationship between vascular patches and their surrounding contexts cooperatively and complementarily. The effectiveness of RIP-AV has been validated on three publicly available retinal datasets: AV-DRIVE, LES-AV, and HRF, demonstrating remarkable accuracies of 0.970, 0.967, and 0.981, respectively, thereby outperforming existing state-of-the-art methods. Notably, our method achieves a significant 1.7% improvement in accuracy on the HRF dataset, particularly enhancing the segmentation of thin edge arteries and veins. | RIP-AV: Joint Representative Instance Pre-training with Context Aware Network for Retinal Artery/Vein Segmentation | [
"Dai, Wei",
"Yao, Yinghao",
"Kong, Hengte",
"Chen, Zhen Ji",
"Wang, Sheng",
"Bai, Qingshi",
"Sun, Haojun",
"Yang, Yongxin",
"Su, Jianzhong"
] | Conference | [
"https://github.com/weidai00/RIP-AV"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 68 |
||
null | https://papers.miccai.org/miccai-2024/paper/2802_paper.pdf | @InProceedings{ Den_TAPoseNet_MICCAI2024,
author = { Deng, Qingxin and Yang, Xunyu and Huang, Minghan and Jiang, Landu and Zhang, Dian },
title = { { TAPoseNet: Teeth Alignment based on Pose estimation via multi-scale Graph Convolutional Network } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Teeth alignment plays an important role in orthodontic treatment. Automating the prediction of the teeth alignment target can significantly aid both doctors and patients. Traditional methods often utilize a rule-based approach or a deep learning method to generate the teeth alignment target. However, they usually require extra manual design by doctors, produce deformed teeth shapes, or even fail to address severe misalignment cases. To tackle the problem, we introduce a pose prediction model which can better describe the spatial representation of the tooth. We also consider geometric information to fully extract features of teeth. In the meanwhile, we build a multi-scale Graph Convolutional Network (GCN) to characterize the teeth relationships at different levels (global, local, intersection). Finally, the target pose of each tooth can be predicted, so the teeth movement from the initial pose to the target pose can be obtained without deforming teeth shapes. Our method has been validated in clinical orthodontic treatment cases and shows promising results both qualitatively and quantitatively. | TAPoseNet: Teeth Alignment based on Pose estimation via multi-scale Graph Convolutional Network | [
"Deng, Qingxin",
"Yang, Xunyu",
"Huang, Minghan",
"Jiang, Landu",
"Zhang, Dian"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 69 |
||
null | https://papers.miccai.org/miccai-2024/paper/1357_paper.pdf | @InProceedings{ Xie_DSNet_MICCAI2024,
author = { Xie, Qihang and Zhang, Dan and Mou, Lei and Wang, Shanshan and Zhao, Yitian and Guo, Mengguo and Zhang, Jiong },
title = { { DSNet: A Spatio-Temporal Consistency Network for Cerebrovascular Segmentation in Digital Subtraction Angiography Sequences } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Digital Subtraction Angiography (DSA) sequences serve as the foremost diagnostic standard for cerebrovascular diseases (CVDs). Accurate cerebrovascular segmentation in DSA sequences assists clinicians in analyzing pathological changes and pinpointing lesions. However, existing methods commonly utilize a single frame extracted from DSA sequences for cerebrovascular segmentation, disregarding the inherent temporal information within these sequences. This rich temporal information has the potential to achieve better segmentation coherence while reducing the interference caused by artifacts. Therefore, in this paper, we propose a spatio-temporal consistency network for cerebrovascular segmentation in DSA sequences, named DSNet, which fully exploits the information of DSA sequences. Specifically, our DSNet comprises a dual-branch encoder and a dual-branch decoder. The encoder consists of a temporal encoding branch (TEB) and a spatial encoding branch (SEB). The TEB is designed to capture dynamic vessel flow information and the SEB is utilized to extract static vessel structure information.
To effectively capture the correlations among sequential frames, a dynamic frame reweighting module is designed to adjust the weights of the frames. In the bottleneck, we exploit a spatio-temporal feature alignment (STFA) module to fuse the features from the encoder to achieve a more comprehensive vascular representation. Moreover, DSNet employs an unsupervised loss for consistency regularization between the dual outputs from the decoder during training. Experimental results demonstrate that DSNet outperforms existing methods, achieving a Dice score of 89.34% for cerebrovascular segmentation. | DSNet: A Spatio-Temporal Consistency Network for Cerebrovascular Segmentation in Digital Subtraction Angiography Sequences | [
"Xie, Qihang",
"Zhang, Dan",
"Mou, Lei",
"Wang, Shanshan",
"Zhao, Yitian",
"Guo, Mengguo",
"Zhang, Jiong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 70 |
||
null | https://papers.miccai.org/miccai-2024/paper/3136_paper.pdf | @InProceedings{ Zho_Refining_MICCAI2024,
author = { Zhou, Qian and Zou, Hua and Wang, Zhongyuan and Jiang, Haifeng and Wang, Yong },
title = { { Refining Intraocular Lens Power Calculation: A Multi-modal Framework Using Cross-layer Attention and Effective Channel Attention } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Selecting the appropriate power for intraocular lenses (IOLs) is crucial for the success of cataract surgeries. Traditionally, ophthalmologists rely on manually designed formulas like “Barrett” and “Hoffer Q” to calculate IOL power. However, these methods exhibit limited accuracy since they primarily focus on biometric data such as axial length and corneal curvature, overlooking the rich details in preoperative images that reveal the eye’s internal anatomy. In this study, we propose a novel deep learning model that leverages multi-modal information for accurate IOL power calculation. In particular, to address the low information density in optical coherence tomography (OCT) images (i.e., most regions have zero pixel values), we introduce a cross-layer attention module to take full advantage of hierarchical contextual information to extract comprehensive anatomical features. Additionally, the IOL powers given by traditional formulas are taken as prior knowledge to benefit model training. The proposed method is evaluated on a self-collected dataset consisting of 174 samples and compared with other approaches. The experimental results demonstrate that our approach significantly surpasses competing methods, achieving a mean absolute error of just 0.367 diopters (D). Impressively, the percentage of eyes with a prediction error within ± 0.5 D reaches 84.1%. Furthermore, extensive ablation studies are conducted to validate each component’s contribution and identify the biometric parameters most relevant to accurate IOL power calculation. Codes will be available at https://github.com/liyiersan/IOL. | Refining Intraocular Lens Power Calculation: A Multi-modal Framework Using Cross-layer Attention and Effective Channel Attention | [
"Zhou, Qian",
"Zou, Hua",
"Wang, Zhongyuan",
"Jiang, Haifeng",
"Wang, Yong"
] | Conference | [
"https://github.com/liyiersan/IOL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 71 |
||
null | https://papers.miccai.org/miccai-2024/paper/1438_paper.pdf | @InProceedings{ Hua_Memoryefficient_MICCAI2024,
author = { Huang, Kun and Ma, Xiao and Zhang, Yuhan and Su, Na and Yuan, Songtao and Liu, Yong and Chen, Qiang and Fu, Huazhu },
title = { { Memory-efficient High-resolution OCT Volume Synthesis with Cascaded Amortized Latent Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Optical coherence tomography (OCT) image analysis plays an important role in the field of ophthalmology. Current successful analysis models rely on available large datasets, which can be challenging to obtain for certain tasks. The use of deep generative models to create realistic data emerges as a promising approach. However, due to limitations in hardware resources, it is still difficult to synthesize high-resolution OCT volumes. In this paper, we introduce a cascaded amortized latent diffusion model (CA-LDM) that can synthesize high-resolution OCT volumes in a memory-efficient way. First, we propose non-holistic autoencoders to efficiently build a bidirectional mapping between high-resolution volume space and low-resolution latent space. In tandem with autoencoders, we propose cascaded diffusion processes to synthesize high-resolution OCT volumes with a global-to-local refinement process, amortizing the memory and computational demands. Experiments on a public high-resolution OCT dataset show that our synthetic data have realistic high-resolution and global features, surpassing the capabilities of existing methods. Moreover, performance gains on two downstream fine-grained segmentation tasks demonstrate the benefit of the proposed method in training deep learning models for medical imaging tasks. The code is publicly available. | Memory-efficient High-resolution OCT Volume Synthesis with Cascaded Amortized Latent Diffusion Models | [
"Huang, Kun",
"Ma, Xiao",
"Zhang, Yuhan",
"Su, Na",
"Yuan, Songtao",
"Liu, Yong",
"Chen, Qiang",
"Fu, Huazhu"
] | Conference | 2405.16516 | [
"https://github.com/nicetomeetu21/CA-LDM"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 72 |
|
null | https://papers.miccai.org/miccai-2024/paper/3476_paper.pdf | @InProceedings{ Don_Cycleconsistent_MICCAI2024,
author = { Dong, Xiuyu and Wu, Zhengwang and Ma, Laifa and Wang, Ya and Tang, Kaibo and Zhang, He and Lin, Weili and Li, Gang },
title = { { Cycle-consistent Learning for Fetal Cortical Surface Reconstruction } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Fetal cortical surface reconstruction is crucial for quantitative analysis of normal and abnormal prenatal brain development. While there are many cortical surface reconstruction methods available for adults and infants, there remains a notable scarcity of dedicated techniques for fetal cortical surface reconstruction. Of note, fetal brain MR images present unique challenges, characterized by nonuniform low tissue contrast associated with extremely rapid brain development and folding during the prenatal stages and low imaging resolution, as well as susceptibility to severe motion artifacts. Moreover, the smaller size of fetal brains results in much narrower cortical ribbons and sulci. Consequently, the fetal cortical surfaces are more prone to be influenced by partial volume effects and tissue boundary ambiguities. In this work, we develop a multi-task, priori-knowledge supervised fetal cortical surface reconstruction method based on deep learning. Our method incorporates a cycle-consistent strategy, utilizing prior knowledge and multiple stationary velocity fields to enhance its representation capabilities, enabling effective learning of diffeomorphic deformations from the template surface mesh to the inner and outer surfaces. Specifically, our framework involves iteratively refining both inner and outer surfaces in a cyclical manner by mutually guiding each other, thus improving accuracy especially for ambiguous and challenging cortical regions. Evaluation on a fetal MRI dataset with 83 subjects shows the superiority of our method with a geometric error of 0.229 ± 0.047 mm and 0.023 ± 0.058% self-intersecting faces, indicating promising surface geometric and topological accuracy. These results demonstrate a great advancement over state-of-the-art deep learning methods, while maintaining high computational efficiency. | Cycle-consistent Learning for Fetal Cortical Surface Reconstruction | [
"Dong, Xiuyu",
"Wu, Zhengwang",
"Ma, Laifa",
"Wang, Ya",
"Tang, Kaibo",
"Zhang, He",
"Lin, Weili",
"Li, Gang"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 73 |
||
null | https://papers.miccai.org/miccai-2024/paper/1786_paper.pdf | @InProceedings{ Pan_Integrating_MICCAI2024,
author = { Pang, Winnie and Ke, Xueyi and Tsutsui, Satoshi and Wen, Bihan },
title = { { Integrating Clinical Knowledge into Concept Bottleneck Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | Concept bottleneck models (CBMs), which predict human-interpretable concepts (e.g., nucleus shapes in cell images) before predicting the final output (e.g., cell type), provide insights into the decision-making processes of the model. However, training CBMs solely in a data-driven manner can introduce undesirable biases, which may compromise prediction performance, especially when the trained models are evaluated on out-of-domain images (e.g., those acquired using different devices). To mitigate this challenge, we propose integrating clinical knowledge to refine CBMs, better aligning them with clinicians’ decision-making processes. Specifically, we guide the model to prioritize the concepts that clinicians also prioritize. We validate our approach on two datasets of medical images: white blood cell and skin images. Empirical validation demonstrates that incorporating medical guidance enhances the model’s classification performance on unseen datasets with varying preparation methods, thereby increasing its real-world applicability. | Integrating Clinical Knowledge into Concept Bottleneck Models | [
"Pang, Winnie",
"Ke, Xueyi",
"Tsutsui, Satoshi",
"Wen, Bihan"
] | Conference | 2407.06600 | [
"https://github.com/PangWinnie0219/align_concept_cbm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 74 |
|
null | https://papers.miccai.org/miccai-2024/paper/0967_paper.pdf | @InProceedings{ Li_MPMNet_MICCAI2024,
author = { Li, Yuanyuan and Hao, Huaying and Zhang, Dan and Fu, Huazhu and Liu, Mengting and Shan, Caifeng and Zhao, Yitian and Zhang, Jiong },
title = { { MPMNet: Modal Prior Mutual-support Network for Age-related Macular Degeneration Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | Early screening and classification of Age-related Macular Degeneration (AMD) are crucial for precise clinical treatment. Currently, most automated methods focus solely on dry and wet AMD classification. However, the classification of wet AMD into more explicit type 1 choroidal neovascularization (CNV) and type 2 CNV has rarely been explored, despite its significance in intravitreal injection. Furthermore, previous methods predominantly utilized single-modal images for distinguishing AMD types, while multi-modal images can provide a more comprehensive representation of pathological changes for accurate diagnosis.
In this paper, we propose a Modal Prior Mutual-support Network (MPMNet), which for the first time combines OCTA images and OCT sequences for the classification of normal, dry AMD, type 1 CNV, and type 2 CNV. Specifically, we first employ a multi-branch encoder to extract modality-specific features.
A novel modal prior mutual-support mechanism is proposed, which determines the primary and auxiliary modalities based on the sensitivity of different modalities to lesions and makes joint decisions. In this mechanism, a distillation loss is employed to enforce the consistency between single-modal decisions and joint decisions. It can facilitate networks to focus on specific pathological information within individual modalities.
Furthermore, we propose a mutual information-guided feature dynamic adjustment strategy.
This strategy adjusts the channel weights of the two modalities by computing the mutual information between OCTA and OCT, thereby mitigating the influence of low-quality modal features on the network’s robustness.
Experiments on private and public datasets have demonstrated that the proposed MPMNet outperforms existing state-of-the-art methods. | MPMNet: Modal Prior Mutual-support Network for Age-related Macular Degeneration Classification | [
"Li, Yuanyuan",
"Hao, Huaying",
"Zhang, Dan",
"Fu, Huazhu",
"Liu, Mengting",
"Shan, Caifeng",
"Zhao, Yitian",
"Zhang, Jiong"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 75 |
||
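The distillation loss mentioned in the MPMNet abstract, which enforces consistency between single-modal and joint decisions, can be sketched as a standard KL-based consistency term. The temperature and exact formulation below are assumptions, not necessarily what the authors use.

```python
# KL-style consistency distillation between a single-modality prediction and
# the joint (multi-modal) prediction; T is a placeholder temperature.
import torch
import torch.nn.functional as F

def consistency_distillation(single_logits, joint_logits, T=2.0):
    p_joint = F.softmax(joint_logits.detach() / T, dim=1)      # teacher: joint decision
    log_p_single = F.log_softmax(single_logits / T, dim=1)     # student: single-modal decision
    return F.kl_div(log_p_single, p_joint, reduction="batchmean") * (T * T)

# toy usage with 4 samples and 4 classes (normal, dry AMD, type 1 CNV, type 2 CNV)
single = torch.randn(4, 4)
joint = torch.randn(4, 4)
loss = consistency_distillation(single, joint)
```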
null | https://papers.miccai.org/miccai-2024/paper/3884_paper.pdf | @InProceedings{ Tan_Fetal_MICCAI2024,
author = { Tan, Junpeng and Zhang, Xin and Qing, Chunmei and Yang, Chaoxiang and Zhang, He and Li, Gang and Xu, Xiangmin },
title = { { Fetal MRI Reconstruction by Global Diffusion and Consistent Implicit Representation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Although the utilization of multi-stacks can solve fetal MRI motion correction and artifact removal problems, there are still problems of regional intensity heterogeneity, and global consistency discrimination in 3D space. To this end, we propose a novel coarse-to-fine self-supervised fetal brain MRI Radiation Diffusion Generation Model (RDGM). Firstly, we propose a novel self-supervised regionally Consistent Implicit Neural Representation (CINR) network with a double-spatial voxel association consistency mechanism to solve regional intensity heterogeneity. CINR enhances regional 3D voxel association and complementarity by two-voxel mapping spaces to generate coarse MRI. We also fine-tune the weighted slice reconstruction loss to improve the network reconstruction performance. Moreover, we propose the Global Diffusion Discriminative Generation (GDDG) fine module to enhance volume global consistency and discrimination. The noise diffusion is used to transform the global intensity discriminant information in 3D volume. The experiments on two real-world fetal MRI datasets demonstrate that RDGM achieves state-of-the-art results. | Fetal MRI Reconstruction by Global Diffusion and Consistent Implicit Representation | [
"Tan, Junpeng",
"Zhang, Xin",
"Qing, Chunmei",
"Yang, Chaoxiang",
"Zhang, He",
"Li, Gang",
"Xu, Xiangmin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 76 |
||
null | https://papers.miccai.org/miccai-2024/paper/1674_paper.pdf | @InProceedings{ Zen_ABP_MICCAI2024,
author = { Zeng, Xinyi and Zeng, Pinxian and Cui, Jiaqi and Li, Aibing and Liu, Bo and Wang, Chengdi and Wang, Yan },
title = { { ABP: Asymmetric Bilateral Prompting for Text-guided Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Deep learning-based segmentation models have made remarkable progress in aiding pulmonary disease diagnosis by segmenting lung lesion areas in large amounts of annotated X-ray images. Recently, to alleviate the demand for medical image data and further improve segmentation performance, various studies have extended mono-modal models to incorporate additional modalities, such as diagnostic textual notes. Despite the prevalent utilization of cross-attention mechanisms or their variants to model interactions between visual and textual features, current text-guided medical image segmentation approaches still face limitations. These include a lack of adaptive adjustments for text tokens to accommodate variations in image contexts, as well as a deficiency in exploring and utilizing text-prior information. To mitigate these limitations, we propose Asymmetric Bilateral Prompting (ABP), a novel method tailored for text-guided medical image segmentation. Specifically, we introduce an ABP block preceding each up-sample stage in the image decoder. This block first integrates a symmetric bilateral cross-attention module for both textual and visual branches to model preliminary multi-modal interactions. Then, guided by the opposite modality, two asymmetric operations are employed for further modality-specific refinement. Notably, we utilize attention scores from the image branch as attentiveness rankings to prune and remove redundant text tokens, ensuring that the image features are progressively interacted with more attentive text tokens during up-sampling. Asymmetrically, we integrate attention scores from the text branch as text-prior information to enhance visual representations and target predictions in the visual branch. Experimental results on the QaTa-COV19 dataset validate the superiority of our proposed method. | ABP: Asymmetric Bilateral Prompting for Text-guided Medical Image Segmentation | [
"Zeng, Xinyi",
"Zeng, Pinxian",
"Cui, Jiaqi",
"Li, Aibing",
"Liu, Bo",
"Wang, Chengdi",
"Wang, Yan"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 77 |
||
null | https://papers.miccai.org/miccai-2024/paper/1289_paper.pdf | @InProceedings{ Li_An_MICCAI2024,
author = { Li, Qing and Zhang, Yizhe and Li, Yan and Lyu, Jun and Liu, Meng and Sun, Longyu and Sun, Mengting and Li, Qirong and Mao, Wenyue and Wu, Xinran and Zhang, Yajing and Chu, Yinghua and Wang, Shuo and Wang, Chengyan },
title = { { An Empirical Study on the Fairness of Foundation Models for Multi-Organ Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | The segmentation foundation model, e.g., Segment Anything Model (SAM), has attracted increasing interest in the medical image community. Early pioneering studies primarily concentrated on assessing and improving SAM’s performance from the perspectives of overall accuracy and efficiency, yet little attention was given to the fairness considerations. This oversight raises questions about the potential for performance biases that could mirror those found in task-specific deep learning models like nnU-Net. In this paper, we explored the fairness dilemma concerning large segmentation foundation models. We prospectively curate a benchmark dataset of 3D MRI and CT scans of the organs including liver, kidney, spleen, lung and aorta from a total of 1054 healthy subjects with expert segmentations. Crucially, we document demographic details such as gender, age, and body mass index (BMI) for each subject to facilitate a nuanced fairness analysis. We test state-of-the-art foundation models for medical image segmentation, including the original SAM, medical SAM and SAT models, to evaluate segmentation efficacy across different demographic groups and identify disparities. Our comprehensive analysis, which accounts for various confounding factors, reveals significant fairness concerns within these foundational models. Moreover, our findings highlight not only disparities in overall segmentation metrics, such as the Dice Similarity Coefficient but also significant variations in the spatial distribution of segmentation errors, offering empirical evidence of the nuanced challenges in ensuring fairness in medical image segmentation. | An Empirical Study on the Fairness of Foundation Models for Multi-Organ Image Segmentation | [
"Li, Qing",
"Zhang, Yizhe",
"Li, Yan",
"Lyu, Jun",
"Liu, Meng",
"Sun, Longyu",
"Sun, Mengting",
"Li, Qirong",
"Mao, Wenyue",
"Wu, Xinran",
"Zhang, Yajing",
"Chu, Yinghua",
"Wang, Shuo",
"Wang, Chengyan"
] | Conference | 2406.12646 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 78 |
|
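A minimal sketch of the kind of group-wise analysis the fairness study above performs: compute the Dice Similarity Coefficient per subject and compare group means across a demographic attribute. The column names and numbers are placeholders, not data from the paper.

```python
# Illustrative fairness check: per-subject Dice, then group-wise comparison.
import numpy as np
import pandas as pd

def dice_coefficient(pred, gt, eps=1e-7):
    """Dice similarity coefficient for binary masks (numpy arrays)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

print(dice_coefficient(np.ones((8, 8, 8)), np.ones((8, 8, 8))))  # 1.0 on identical masks

# per-subject results would be collected from the segmentation model
results = pd.DataFrame({
    "sex": ["F", "M", "F", "M"],
    "dice": [0.93, 0.95, 0.91, 0.96],
})
group_means = results.groupby("sex")["dice"].mean()
print(group_means)
print("disparity (max - min):", group_means.max() - group_means.min())
```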
null | https://papers.miccai.org/miccai-2024/paper/0115_paper.pdf | @InProceedings{ Tia_TaGAT_MICCAI2024,
author = { Tian, Xin and Anantrasirichai, Nantheera and Nicholson, Lindsay and Achim, Alin },
title = { { TaGAT: Topology-Aware Graph Attention Network For Multi-modal Retinal Image Fusion } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | In the realm of medical image fusion, integrating information from various modalities is crucial for improving diagnostics and treatment planning, especially in retinal health, where the important features exhibit differently in different imaging modalities. Existing deep learning-based approaches insufficiently focus on retinal image fusion, and thus fail to preserve enough anatomical structure and fine vessel details in retinal image fusion. To address this, we propose the Topology-Aware Graph Attention Network (TaGAT) for multi-modal retinal image fusion, leveraging a novel Topology-Aware Encoder (TAE) with Graph Attention Networks (GAT) to effectively enhance spatial features with retinal vasculature’s graph topology across modalities. The TAE encodes the base and detail features, extracted via a Long-short Range (LSR) encoder from retinal images, into the graph extracted from the retinal vessel. Within the TAE, the GAT-based Graph Information Update block dynamically refines and aggregates the node features to generate topology-aware graph features. The updated graph features with base and detail features are combined and decoded as a fused image. Our model outperforms state-of-the-art methods in Fluorescein Fundus Angiography (FFA) with Color Fundus (CF) and Optical Coherence Tomography (OCT) with confocal microscopy retinal image fusion. The source code can be accessed via https://github.com/xintian-99/TaGAT. | TaGAT: Topology-Aware Graph Attention Network For Multi-modal Retinal Image Fusion | [
"Tian, Xin",
"Anantrasirichai, Nantheera",
"Nicholson, Lindsay",
"Achim, Alin"
] | Conference | 2407.14188 | [
"https://github.com/xintian-99/TaGAT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 79 |
|
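For readers unfamiliar with graph attention over a vessel graph, here is a minimal GAT layer applied to node features and an edge list using PyTorch Geometric's `GATConv` (assuming the library is installed). The graph, feature sizes, and head count are illustrative, not the TaGAT architecture.

```python
# Minimal graph-attention layer over a toy vessel graph.
import torch
from torch_geometric.nn import GATConv

# 3 vessel-graph nodes with 16-dim features; edges as a 2 x num_edges index tensor
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)

gat = GATConv(in_channels=16, out_channels=32, heads=4, concat=True)
node_feats = gat(x, edge_index)   # (3, 128): per-node features after multi-head attention
```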
null | https://papers.miccai.org/miccai-2024/paper/0674_paper.pdf | @InProceedings{ Liu_Cut_MICCAI2024,
author = { Liu, Chang and Fan, Fuxin and Schwarz, Annette and Maier, Andreas },
title = { { Cut to the Mix: Simple Data Augmentation Outperforms Elaborate Ones in Limited Organ Segmentation Datasets } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | Multi-organ segmentation is a widely applied clinical routine and automated organ segmentation tools dramatically improve the pipeline of the radiologists. Recently, deep learning (DL) based segmentation models have shown the capacity to accomplish such a task. However, the training of the segmentation networks requires large amount of data with manual annotations, which is a major concern due to the data scarcity from clinic. Working with limited data is still common for researches on novel imaging modalities. To enhance the effectiveness of DL models trained with limited data, data augmentation (DA) is a crucial regularization technique. Traditional DA (TDA) strategies focus on basic intra-image operations, i.e. generating images with different orientations and intensity distributions. In contrast, the inter-image and object-level DA operations are able to create new images from separate individuals. However, such DA strategies are not well explored on the task of multi-organ segmentation. In this paper, we investigated four possible inter-image DA strategies: CutMix, CarveMix, ObjectAug and AnatoMix, on two organ segmentation datasets. The result shows that CutMix, CarveMix and AnatoMix can improve the average dice score by 4.9, 2.0 and 1.9, compared with the state-of-the-art nnUNet without DA strategies. These results can be further improved by adding TDA strategies. It is revealed in our experiments that CutMix is a robust but simple DA strategy to drive up the segmentation performance for multi-organ segmentation, even when CutMix produces intuitively ‘wrong’ images. We present our implementation as a DA toolkit for multi-organ segmentation on GitHub for future benchmarks. | Cut to the Mix: Simple Data Augmentation Outperforms Elaborate Ones in Limited Organ Segmentation Datasets | [
"Liu, Chang",
"Fan, Fuxin",
"Schwarz, Annette",
"Maier, Andreas"
] | Conference | [
"https://github.com/Rebooorn/mosDAtoolkit"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 80 |
||
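A minimal sketch of CutMix-style augmentation for paired 3D image/label volumes, i.e., pasting a random box from one labeled scan into another so that image and label stay aligned. The released toolkit referenced in the row above may sample and blend boxes differently.

```python
# CutMix-style augmentation for 3D segmentation data (numpy).
import numpy as np

def cutmix_3d(img_a, lab_a, img_b, lab_b, frac=0.5, rng=np.random):
    """Paste a random box from volume B into volume A (image and label together)."""
    assert img_a.shape == img_b.shape
    out_img, out_lab = img_a.copy(), lab_a.copy()
    size = [max(1, int(s * frac)) for s in img_a.shape]            # box edge lengths
    start = [rng.randint(0, s - c + 1) for s, c in zip(img_a.shape, size)]
    sl = tuple(slice(st, st + c) for st, c in zip(start, size))
    out_img[sl] = img_b[sl]
    out_lab[sl] = lab_b[sl]
    return out_img, out_lab

# toy usage with two random 32^3 volumes
a, b = np.random.rand(32, 32, 32), np.random.rand(32, 32, 32)
la = np.zeros_like(a, dtype=np.int64)
lb = np.ones_like(b, dtype=np.int64)
img, lab = cutmix_3d(a, la, b, lb, frac=0.4)
```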
null | https://papers.miccai.org/miccai-2024/paper/0251_paper.pdf | @InProceedings{ Oh_Controllable_MICCAI2024,
author = { Oh, Hyun-Jic and Jeong, Won-Ki },
title = { { Controllable and Efficient Multi-Class Pathology Nuclei Data Augmentation using Text-Conditioned Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | In the field of computational pathology, deep learning algorithms have made significant progress in tasks such as nuclei segmentation and classification. However, the potential of these advanced methods is limited by the lack of available labeled data. Although image synthesis via recent generative models has been actively explored to address this challenge, existing works have barely addressed label augmentation and are mostly limited to single-class and unconditional label generation. In this paper, we introduce a novel two-stage framework for multi-class nuclei data augmentation using text-conditional diffusion models. In the first stage, we innovate nuclei label synthesis by generating multi-class semantic labels and corresponding instance maps through a joint diffusion model conditioned by text prompts that specify the label structure information. In the second stage, we utilize a semantic and text-conditional latent diffusion model to efficiently generate high-quality pathology images that align with the generated nuclei label images. We demonstrate the effectiveness of our method on large and diverse pathology nuclei datasets, with evaluations including qualitative and quantitative analyses, as well as assessments of downstream tasks. | Controllable and Efficient Multi-Class Pathology Nuclei Data Augmentation using Text-Conditioned Diffusion Models | [
"Oh, Hyun-Jic",
"Jeong, Won-Ki"
] | Conference | 2407.14426 | [
"https://github.com/hvcl/ConNucDA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 81 |
|
null | https://papers.miccai.org/miccai-2024/paper/2226_paper.pdf | @InProceedings{ Bar_MARVEL_MICCAI2024,
author = { Barrier, Antoine and Coudert, Thomas and Delphin, Aurélien and Lemasson, Benjamin and Christen, Thomas },
title = { { MARVEL: MR Fingerprinting with Additional micRoVascular Estimates using bidirectional LSTMs } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The Magnetic Resonance Fingerprinting (MRF) approach aims to estimate multiple MR or physiological parameters simultaneously with a single fast acquisition sequence. Most of the MRF studies proposed so far have used simple MR sequence types to measure relaxation times (T1, T2). In that case, deep learning algorithms have been successfully used to speed up the reconstruction process. In theory, the MRF concept could be used with a variety of other MR sequence types and should be able to provide more information about the tissue microstructures. Yet, increasing the complexity of the numerical models often leads to prohibited simulation times, and estimating multiple parameters from one sequence implies new dictionary dimensions whose sizes become too large for standard computers and DL architectures.
In this paper, we propose to analyze the MRF signal coming from a complex balanced Steady-State Free Precession (bSSFP) type sequence to simultaneously estimate relaxometry maps (T1, T2), field maps (B1, B0), as well as microvascular properties such as the local Cerebral Blood Volume (CBV) or the averaged vessel Radius (R).
To bypass the curse of dimensionality, we propose an efficient way to simulate the MR signal coming from numerical voxels containing realistic microvascular networks as well as a Bidirectional Long Short-Term Memory network that replaces the matching process.
On top of standard MRF maps, our results on 3 human volunteers suggest that our approach can quickly produce high-quality quantitative maps of microvascular parameters that are otherwise obtained using longer dedicated sequences and intravenous injection of a contrast agent. This approach could be used for the management of multiple pathologies and could be tuned to provide other types of microstructural information. | MARVEL: MR Fingerprinting with Additional micRoVascular Estimates using bidirectional LSTMs | [
"Barrier, Antoine",
"Coudert, Thomas",
"Delphin, Aurélien",
"Lemasson, Benjamin",
"Christen, Thomas"
] | Conference | 2407.10512 | [
"https://github.com/nifm-gin/MARVEL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 82 |
|
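To make the "bidirectional LSTM replaces the matching process" idea concrete, the sketch below regresses quantitative parameters directly from an MRF signal time series instead of searching a dictionary. The input/output sizes and two-channel signal representation are illustrative assumptions, not the authors' settings.

```python
# Bidirectional LSTM regressor: MRF fingerprint -> quantitative parameters.
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    def __init__(self, in_feats=2, hidden=128, n_params=6):
        super().__init__()
        # input: (batch, sequence_length, features_per_timepoint)
        self.lstm = nn.LSTM(in_feats, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_params)

    def forward(self, signal):
        out, _ = self.lstm(signal)        # (B, T, 2*hidden)
        return self.head(out[:, -1])      # summary at last step -> (B, n_params)

model = BiLSTMRegressor()
fingerprints = torch.randn(16, 260, 2)    # e.g., real/imaginary parts per repetition (illustrative)
params = model(fingerprints)              # predicted T1, T2, B1, B0, CBV, R
```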
null | https://papers.miccai.org/miccai-2024/paper/2135_paper.pdf | @InProceedings{ Xu_LBUNet_MICCAI2024,
author = { Xu, Jiahao and Tong, Lyuyang },
title = { { LB-UNet: A Lightweight Boundary-assisted UNet for Skin Lesion Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Skin lesion segmentation is vital in computer-aided diagnosis and treatment of skin diseases. UNet and its variants have been widely utilized for skin lesion segmentation. However, resource constraints limit the deployment of larger parameter models on edge devices. To address this issue, we propose a novel lightweight boundary-assisted UNet (LB-UNet) for skin lesion segmentation. LB-UNet incorporates the Group Shuffle Attention module (GSA) to significantly reduce the model’s parameters and computational demands. Furthermore, to enhance the model’s segmentation capability, especially in handling ambiguous boundaries, LB-UNet introduces the Prediction Map Auxiliary module (PMA). Briefly, PMA consists of three modules: (1) the Segmentation Region and Boundary Prediction module is utilized to predict the segmentation region and boundary of the decoder features; (2) the GA-Based Boundary Generator is employed to generate the ground-truth boundary map through a genetic algorithm; (3) the Prediction Information Fusion module enhances the skip connection by leveraging the prediction information. By combining these modules, the region and boundary information is effectively integrated into the backbone. The experimental results on the ISIC2017 and ISIC2018 datasets demonstrate that LB-UNet outperforms current lightweight methods. To the best of our knowledge, LB-UNet is the first model with a parameter count limited to 38KB and giga floating-point operations (GFLOPs) limited to 0.1. The code and trained models are publicly available at https://github.com/xuxuxuxuxuxjh/LB-UNet. | LB-UNet: A Lightweight Boundary-assisted UNet for Skin Lesion Segmentation | [
"Xu, Jiahao",
"Tong, Lyuyang"
] | Conference | [
"https://github.com/xuxuxuxuxuxjh/LB-UNet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 83 |
||
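The "Group Shuffle" part of GSA presumably relies on a channel-shuffle operation of the kind popularized by ShuffleNet; a minimal version is sketched below. The attention component of GSA is omitted, so this is background rather than the authors' exact module.

```python
# Channel shuffle: interleave channels across groups so grouped operations can mix information.
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Rearrange channels of a (B, C, H, W) tensor across `groups`."""
    b, c, h, w = x.shape
    assert c % groups == 0
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

x = torch.randn(2, 16, 64, 64)
y = channel_shuffle(x, groups=4)   # same shape, channels interleaved across the 4 groups
```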
null | https://papers.miccai.org/miccai-2024/paper/3991_paper.pdf | @InProceedings{ Zho_PathM3_MICCAI2024,
author = { Zhou, Qifeng and Zhong, Wenliang and Guo, Yuzhi and Xiao, Michael and Ma, Hehuan and Huang, Junzhou },
title = { { PathM3: A Multimodal Multi-Task Multiple Instance Learning Framework for Whole Slide Image Classification and Captioning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15004 },
month = {October},
pages = { pending },
} | In the field of computational histopathology, both whole slide images (WSIs) and diagnostic captions provide valuable insights for making diagnostic decisions. However, aligning WSIs with diagnostic captions presents a significant challenge. This difficulty arises from two main factors: 1) Gigapixel WSIs are unsuitable for direct input into deep learning models, and the redundancy and correlation among the patches demand more attention; and 2) Authentic WSI diagnostic captions are extremely limited, making it difficult to train an effective model. To overcome these obstacles, we present PathM3, a multimodal, multi-task, multiple instance learning (MIL) framework for WSI classification and captioning. PathM3 adapts a query-based transformer to effectively align WSIs with diagnostic captions. Given that histopathology visual patterns are redundantly distributed across WSIs, we aggregate each patch feature with MIL method that considers the correlations among instances. Furthermore, our PathM3 overcomes data scarcity in WSI-level captions by leveraging limited WSI diagnostic caption data in the manner of multi-task joint learning. Extensive experiments with improved classification accuracy and caption generation demonstrate the effectiveness of our method on both WSI classification and captioning task. | PathM3: A Multimodal Multi-Task Multiple Instance Learning Framework for Whole Slide Image Classification and Captioning | [
"Zhou, Qifeng",
"Zhong, Wenliang",
"Guo, Yuzhi",
"Xiao, Michael",
"Ma, Hehuan",
"Huang, Junzhou"
] | Conference | 2403.08967 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 84 |
|
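As background for the MIL aggregation discussed in the PathM3 row, here is a standard attention-based MIL pooling layer that turns thousands of patch features into one slide-level embedding. PathM3's query-based transformer and instance-correlation modeling go beyond this simple sketch; sizes are placeholders.

```python
# Attention-based MIL pooling: weight patch features and aggregate to a slide embedding.
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, patch_feats):                               # (N_patches, dim)
        weights = torch.softmax(self.attn(patch_feats), dim=0)    # (N, 1), sums to 1 over patches
        return (weights * patch_feats).sum(dim=0)                 # (dim,) slide-level embedding

pool = AttentionMILPool()
slide_embedding = pool(torch.randn(5000, 512))                    # 5000 patches from one WSI
```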
null | https://papers.miccai.org/miccai-2024/paper/2709_paper.pdf | @InProceedings{ Wan_fTSPL_MICCAI2024,
author = { Wang, Pengyu and Zhang, Huaqi and He, Zhibin and Peng, Zhihao and Yuan, Yixuan },
title = { { fTSPL: Enhancing Brain Analysis with fMRI-Text Synergistic Prompt Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | Using functional Magnetic Resonance Imaging (fMRI) to construct the functional connectivity is a well-established paradigm for deep learning-based brain analysis. Recently, benefiting from the remarkable effectiveness and generalization brought by large-scale multi-modal pre-training data, Vision-Language (V-L) models have achieved excellent performance in numerous medical tasks. However, applying the pre-trained V-L model to brain analysis presents two significant challenges: (1) The lack of paired fMRI-text data; (2) The construction of functional connectivity from multi-modal data. To tackle these challenges, we propose a fMRI-Text Synergistic Prompt Learning (fTSPL) pipeline, which utilizes the pre-trained V-L model to enhance brain analysis for the first time. In fTSPL, we first propose an Activation-driven Brain-region Text Generation (ABTG) scheme that can automatically generate instance-level texts describing each fMRI, and then leverage the V-L model to learn multi-modal fMRI and text representations. We also propose a Prompt-boosted Multi-modal Functional Connectivity Construction (PMFCC) scheme by establishing the correlations between fMRI-text representations and brain-region embeddings. This scheme serves as a plug-and-play preliminary that can connect with various Graph Neural Networks (GNNs) for brain analysis. Experiments on ABIDE and HCP datasets demonstrate that our pipeline outperforms state-of-the-art methods on brain classification and prediction tasks. The code is available at https://github.com/CUHK-AIM-Group/fTSPL. | fTSPL: Enhancing Brain Analysis with fMRI-Text Synergistic Prompt Learning | [
"Wang, Pengyu",
"Zhang, Huaqi",
"He, Zhibin",
"Peng, Zhihao",
"Yuan, Yixuan"
] | Conference | [
"https://github.com/CUHK-AIM-Group/fTSPL"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 85 |
||
null | https://papers.miccai.org/miccai-2024/paper/1307_paper.pdf | @InProceedings{ Wan_Prior_MICCAI2024,
author = { Wang, Qingbin and Wong, Wai Chon and Yin, Mi and Ma, Yutao },
title = { { Prior Activation Map Guided Cervical OCT Image Classification } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15003 },
month = {October},
pages = { pending },
} | Cervical cancer poses a severe threat to women’s health globally. As a non-invasive imaging modality, cervical optical coherence tomography (OCT) rapidly generates micrometer-resolution images from the cervix, comparable nearly to histopathology. However, the scarcity of high-quality labeled OCT images and the inevitable speckle noise impede deep-learning models from extracting discriminative features of high-risk lesion images. This study utilizes segmentation masks and bounding boxes to construct prior activation maps (PAMs) that encode pathologists’ diagnostic insights into different cervical disease categories in OCT images. These PAMs guide the classification model in producing reasonable class activation maps during training, enhancing interpretability and performance to meet gynecologists’ needs. Experiments using five-fold cross-validation demonstrate that the PAM-guided classification model boosts the classification of high-risk lesions on three datasets. Besides, our method enhances histopathology-based interpretability to assist gynecologists in analyzing cervical OCT images efficiently, advancing the integration of deep learning in clinical practice. | Prior Activation Map Guided Cervical OCT Image Classification | [
"Wang, Qingbin",
"Wong, Wai Chon",
"Yin, Mi",
"Ma, Yutao"
] | Conference | [
"https://github.com/ssea-lab/AMGuided_Cervical_OCT_Classification"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 86 |
||
null | https://papers.miccai.org/miccai-2024/paper/0919_paper.pdf | @InProceedings{ Tan_Interpretable_MICCAI2024,
author = { Tang, Haoteng and Liu, Guodong and Dai, Siyuan and Ye, Kai and Zhao, Kun and Wang, Wenlu and Yang, Carl and He, Lifang and Leow, Alex and Thompson, Paul and Huang, Heng and Zhan, Liang },
title = { { Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | The MRI-derived brain network serves as a pivotal instrument in elucidating both the structural and functional aspects of the brain, encompassing the ramifications of diseases and developmental processes. However, prevailing methodologies, often focusing on synchronous BOLD signals from functional MRI (fMRI), may not capture directional influences among brain regions and rarely tackle temporal functional dynamics.
In this study, we first construct the brain-effective network via the dynamic causal model. Subsequently, we introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE). This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks via an ordinary differential equation (ODE) model, which characterizes spatial-temporal brain dynamics. Our framework is validated on several clinical phenotype prediction tasks using two independent publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared to several state-of-the-art methods. | Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation | [
"Tang, Haoteng",
"Liu, Guodong",
"Dai, Siyuan",
"Ye, Kai",
"Zhao, Kun",
"Wang, Wenlu",
"Yang, Carl",
"He, Lifang",
"Leow, Alex",
"Thompson, Paul",
"Huang, Heng",
"Zhan, Liang"
] | Conference | 2405.13190 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 87 |
|
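The ODE component of the STE-ODE framework can be illustrated with a plain explicit-Euler integration of a learned derivative function over node embeddings. The coupling between structural and effective networks is omitted here, and all sizes are placeholders.

```python
# Explicit-Euler integration of a learned ODE over brain-region embeddings.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):          # dh/dt = f(t, h); t unused in this toy function
        return self.net(h)

def euler_integrate(func, h0, t0=0.0, t1=1.0, steps=20):
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * func(t, h)
        t += dt
    return h

h0 = torch.randn(90, 32)              # 90 brain regions, 32-dim embeddings (illustrative)
hT = euler_integrate(ODEFunc(), h0)   # embeddings evolved to time t1
```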
null | https://papers.miccai.org/miccai-2024/paper/3584_paper.pdf | @InProceedings{ Koc_DinoBloom_MICCAI2024,
author = { Koch, Valentin and Wagner, Sophia J. and Kazeminia, Salome and Sancar, Ece and Hehr, Matthias and Schnabel, Julia A. and Peng, Tingying and Marr, Carsten },
title = { { DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15012 },
month = {October},
pages = { pending },
} | In hematology, computational models offer significant potential to improve diagnostic accuracy, streamline workflows, and reduce the tedious work of analyzing single cells in peripheral blood or bone marrow smears. However, clinical adoption of computational models has been hampered by the lack of generalization due to large batch effects, small dataset sizes, and poor performance in transfer learning from natural images. To address these challenges, we introduce DinoBloom, the first foundation model for single cell images in hematology, utilizing a tailored DINOv2 pipeline. Our model is built upon an extensive collection of 13 diverse, publicly available datasets of peripheral blood and bone marrow smears, the most substantial open-source cohort in hematology so far, comprising over 380,000 white blood cell images.
To assess its generalization capability, we evaluate it on an external dataset with a challenging domain shift. We show that our model outperforms existing medical and non-medical vision models in (i) linear probing and k-nearest neighbor evaluations on blood and bone marrow smears and (ii) weakly supervised multiple instance learning for acute myeloid leukemia subtyping by a large margin.
A family of four DinoBloom models (small, base, large, and giant) can be adapted for a wide range of downstream applications, be a strong baseline for classification problems, and facilitate the assessment of batch effects in new datasets. All models are available at github.com/marrlab/DinoBloom. | DinoBloom: A Foundation Model for Generalizable Cell Embeddings in Hematology | [
"Koch, Valentin",
"Wagner, Sophia J.",
"Kazeminia, Salome",
"Sancar, Ece",
"Hehr, Matthias",
"Schnabel, Julia A.",
"Peng, Tingying",
"Marr, Carsten"
] | Conference | 2404.05022 | [
"github.com/marrlab/DinoBloom"
] | https://huggingface.co./papers/2404.05022 | 0 | 1 | 0 | 8 | [
"1aurent/vit_base_patch14_224.dinobloom",
"1aurent/vit_small_patch14_224.dinobloom",
"1aurent/vit_large_patch14_224.dinobloom",
"1aurent/vit_giant_patch14_224.dinobloom"
] | [] | [] | [
"1aurent/vit_base_patch14_224.dinobloom",
"1aurent/vit_small_patch14_224.dinobloom",
"1aurent/vit_large_patch14_224.dinobloom",
"1aurent/vit_giant_patch14_224.dinobloom"
] | [] | [] | 1 | Poster | 88 |
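Typical downstream use of a frozen foundation model such as DinoBloom is linear probing and k-NN classification on precomputed cell-image embeddings, as in the paper's evaluation. The sketch below uses random arrays as stand-ins for real embeddings and labels.

```python
# Linear probing and k-NN on frozen embeddings (scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 384))        # e.g., ViT-S embedding size (placeholder features)
y = rng.integers(0, 10, size=1000)      # white-blood-cell classes (placeholder labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

linear_probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=20).fit(X_tr, y_tr)
print("linear probe accuracy:", linear_probe.score(X_te, y_te))
print("20-NN accuracy:", knn.score(X_te, y_te))
```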
null | https://papers.miccai.org/miccai-2024/paper/3604_paper.pdf | @InProceedings{ Mob_Harnessing_MICCAI2024,
author = { Mobadersany, Pooya and Parmar, Chaitanya and Damasceno, Pablo F. and Fadnavis, Shreyas and Chaitanya, Krishna and Li, Shilong and Schwab, Evan and Xiao, Jaclyn and Surace, Lindsey and Mansi, Tommaso and Cula, Gabriela Oana and Ghanem, Louis R. and Standish, Kristopher },
title = { { Harnessing Temporal Information for Precise Frame-Level Predictions in Endoscopy Videos } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Camera localization in endoscopy videos plays a fundamental role in enabling precise diagnosis and effective treatment planning for patients with Inflammatory Bowel Disease (IBD). Precise frame-level classification, however, depends on long-range temporal dynamics, ranging from hundreds to tens of thousands of frames per video, challenging current neural network approaches. To address this, we propose EndoFormer, a frame-level classification model that leverages long-range temporal information for anatomic segment classification in gastrointestinal endoscopy videos. EndoFormer combines a Foundation Model block, judicious video-level augmentations, and a Transformer classifier for frame-level classification while maintaining a small memory footprint. Experiments on 4160 endoscopy videos from four clinical trials and over 61 million frames demonstrate that EndoFormer has an AUC=0.929, significantly improving state-of-the-art models for anatomic segment classification. These results highlight the potential for adopting EndoFormer in endoscopy video analysis applications that require long-range temporal dynamics for precise frame-level predictions. | Harnessing Temporal Information for Precise Frame-Level Predictions in Endoscopy Videos | [
"Mobadersany, Pooya",
"Parmar, Chaitanya",
"Damasceno, Pablo F.",
"Fadnavis, Shreyas",
"Chaitanya, Krishna",
"Li, Shilong",
"Schwab, Evan",
"Xiao, Jaclyn",
"Surace, Lindsey",
"Mansi, Tommaso",
"Cula, Gabriela Oana",
"Ghanem, Louis R.",
"Standish, Kristopher"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 89 |
||
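A minimal sketch of frame-level classification over a long video, in the spirit of the EndoFormer row above: precomputed per-frame foundation-model embeddings pass through a Transformer encoder that emits one anatomic-segment logit vector per frame. Dimensions, depth, and sequence length are assumptions, not EndoFormer's configuration.

```python
# Transformer over frame embeddings -> one prediction per frame.
import torch
import torch.nn as nn

class FrameLevelClassifier(nn.Module):
    def __init__(self, emb_dim=768, n_classes=5, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(emb_dim, n_classes)

    def forward(self, frame_embeddings):         # (B, T, emb_dim)
        ctx = self.encoder(frame_embeddings)     # temporal context for every frame
        return self.head(ctx)                    # (B, T, n_classes)

model = FrameLevelClassifier()
video = torch.randn(1, 2048, 768)                # 2048 frames of precomputed features
logits = model(video)                            # per-frame anatomic-segment logits
```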
null | https://papers.miccai.org/miccai-2024/paper/1958_paper.pdf | @InProceedings{ Zha_IPLC_MICCAI2024,
author = { Zhang, Guoning and Qi, Xiaoran and Yan, Bo and Wang, Guotai },
title = { { IPLC: Iterative Pseudo Label Correction Guided by SAM for Source-Free Domain Adaptation in Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15011 },
month = {October},
pages = { pending },
} | Source-Free Domain Adaptation (SFDA) is important for dealing with domain shift without access to source data and labels of target domain images for medical image segmentation. However, existing SFDA methods have limited performance due to insufficient supervision and unreliable pseudo labels. To address this issue, we propose a novel Iterative Pseudo Label Correction (IPLC) guided by the Segment Anything Model (SAM) SFDA framework for medical image segmentation. Specifically, with a pre-trained source model and SAM, we propose multiple random sampling and entropy estimation to obtain robust pseudo labels and mitigate the noise. We introduce mean negative curvature minimization to provide more sufficient constraints and achieve smoother segmentation. We also propose an Iterative Correction Learning (ICL) strategy to iteratively generate reliable pseudo labels with updated prompts for domain adaptation. Experiments on a public multi-site heart MRI segmentation dataset (M&MS) demonstrate that our method effectively improved the quality of pseudo labels and outperformed several state-of-the-art SFDA methods. The code is available at https://github.com/HiLab-git/IPLC. | IPLC: Iterative Pseudo Label Correction Guided by SAM for Source-Free Domain Adaptation in Medical Image Segmentation | [
"Zhang, Guoning",
"Qi, Xiaoran",
"Yan, Bo",
"Wang, Guotai"
] | Conference | [
"https://github.com/HiLab-git/IPLC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 90 |
||
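The "multiple random sampling and entropy estimation" step in the IPLC row can be sketched as averaging several stochastic forward passes and masking out high-entropy voxels. This assumes the source model contains dropout (or similar stochasticity); the threshold and the tiny demo model are placeholders.

```python
# Entropy-filtered pseudo labels from multiple stochastic predictions.
import torch
import torch.nn as nn

@torch.no_grad()
def robust_pseudo_labels(model, image, n_samples=8, entropy_thr=0.5):
    model.train()                                   # keep dropout active for stochastic passes
    probs = torch.stack([torch.softmax(model(image), dim=1)
                         for _ in range(n_samples)]).mean(0)      # (B, C, ...)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # per-voxel uncertainty
    pseudo = probs.argmax(dim=1)                    # hard pseudo labels
    reliable = entropy < entropy_thr                # boolean mask of trusted voxels
    return pseudo, reliable

# toy usage with a tiny stochastic 3D segmentation model
demo_model = nn.Sequential(nn.Dropout(0.5), nn.Conv3d(1, 3, kernel_size=3, padding=1))
volume = torch.randn(1, 1, 16, 32, 32)
pseudo, mask = robust_pseudo_labels(demo_model, volume)
```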
null | https://papers.miccai.org/miccai-2024/paper/0240_paper.pdf | @InProceedings{ Xu_StereoDiffusion_MICCAI2024,
author = { Xu, Haozheng and Xu, Chi and Giannarou, Stamatia },
title = { { StereoDiffusion: Temporally Consistent Stereo Depth Estimation with Diffusion Models } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | In Minimally Invasive Surgery (MIS), temporally consistent depth estimation is necessary for accurate intraoperative surgical navigation and robotic control. Despite the plethora of stereo depth estimation methods, estimating temporally consistent disparity is still challenging due to scene and camera dynamics. The aim of this paper is to introduce the StereoDiffusion framework for temporally consistent disparity estimation. For the first time, a latent diffusion model is incorporated into stereo depth estimation. Advancing existing depth estimation methods based on diffusion models, StereoDiffusion uses prior knowledge to refine disparity. Prior knowledge is generated using optical flow to warp the disparity map of the previous frame and predict a reprojected disparity map in the current frame to be refined. For efficient inference, fewer denoising steps and an efficient denoising scheduler have been used. Extensive validation on MIS stereo datasets and comparison to state-of-the-art (SOTA) methods show that StereoDiffusion achieves best performance and provides temporally consistent disparity estimation with high-fidelity details, despite having been trained on natural scenes only. | StereoDiffusion: Temporally Consistent Stereo Depth Estimation with Diffusion Models | [
"Xu, Haozheng",
"Xu, Chi",
"Giannarou, Stamatia"
] | Conference | [
"https://github.com/xuhaozheng/StereoDiff"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 91 |
||
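The prior-knowledge step in the StereoDiffusion row, warping the previous frame's disparity into the current frame using optical flow, can be sketched with `grid_sample`. The flow convention assumed here (current-to-previous displacements, in pixels) and the tensor shapes are illustrative.

```python
# Backward-warp the previous disparity map into the current frame with optical flow.
import torch
import torch.nn.functional as F

def warp_with_flow(prev_disp, flow):
    """prev_disp: (B,1,H,W); flow: (B,2,H,W) giving (x, y) source offsets in pixels."""
    b, _, h, w = prev_disp.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(prev_disp)     # (2, H, W) pixel coordinates
    src = grid.unsqueeze(0) + flow                                 # sampling locations per pixel
    # normalize to [-1, 1]; grid_sample expects a (B, H, W, 2) grid ordered as (x, y)
    src_x = 2.0 * src[:, 0] / (w - 1) - 1.0
    src_y = 2.0 * src[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack((src_x, src_y), dim=-1)
    return F.grid_sample(prev_disp, norm_grid, align_corners=True)

prev_disp = torch.rand(1, 1, 120, 160)
flow = torch.zeros(1, 2, 120, 160)          # zero flow -> identity warp
reprojected = warp_with_flow(prev_disp, flow)
```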
null | https://papers.miccai.org/miccai-2024/paper/1369_paper.pdf | @InProceedings{ Boy_MEGFormer_MICCAI2024,
author = { Boyko, Maria and Druzhinina, Polina and Kormakov, Georgii and Beliaeva, Aleksandra and Sharaev, Maxim },
title = { { MEGFormer: enhancing speech decoding from brain activity through extended semantic representations } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Even though multiple studies have examined the decoding of speech from brain activity through non-invasive technologies in recent years, the task still presents a challenge as decoding quality is still insufficient for practical applications. An effective solution could help in the advancement of brain-computer interfaces (BCIs), potentially enabling communication restoration for individuals experiencing speech impairments. At the same time, these studies can provide fundamental insights into how the brain processes speech and sound.
One of the approaches for decoding perceived speech involves using a self-supervised model that has been trained using contrastive learning. This model matches segments of the same length from magnetoencephalography (MEG) to audio in a zero-shot way. We improve the method for decoding perceived speech by incorporating a new architecture based on a CNN Transformer. As a result of the proposed modifications, the accuracy of perceived speech decoding increases significantly from the current 69% to 83% and from 67% to 70% on publicly available datasets. Notably, the greatest improvement in accuracy is observed in longer speech fragments that carry semantic meaning, rather than in shorter fragments with sounds and phonemes.
Our code is available at https://github.com/maryjis/MEGformer | MEGFormer: enhancing speech decoding from brain activity through extended semantic representations | [
"Boyko, Maria",
"Druzhinina, Polina",
"Kormakov, Georgii",
"Beliaeva, Aleksandra",
"Sharaev, Maxim"
] | Conference | [
"https://github.com/maryjis/MEGformer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 92 |
||
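The contrastive objective used to match MEG segments to audio segments in this line of work is typically a CLIP-style InfoNCE loss; a minimal version is below. Embedding sizes and the temperature are placeholders, not the paper's values.

```python
# Symmetric InfoNCE loss between MEG-segment and audio-segment embeddings.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(meg_emb, audio_emb, temperature=0.07):
    meg = F.normalize(meg_emb, dim=-1)
    aud = F.normalize(audio_emb, dim=-1)
    logits = meg @ aud.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(meg.size(0), device=meg.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

meg_batch = torch.randn(32, 256)    # embeddings from an MEG encoder (placeholder)
aud_batch = torch.randn(32, 256)    # embeddings from an audio encoder (placeholder)
loss = clip_contrastive_loss(meg_batch, aud_batch)
```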
null | https://papers.miccai.org/miccai-2024/paper/1856_paper.pdf | @InProceedings{ Wei_Enhanced_MICCAI2024,
author = { Wei, Ruofeng and Li, Bin and Chen, Kai and Ma, Yiyao and Liu, Yunhui and Dou, Qi },
title = { { Enhanced Scale-aware Depth Estimation for Monocular Endoscopic Scenes with Geometric Modeling } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | Scale-aware monocular depth estimation poses a significant challenge in computer-aided endoscopic navigation. However, existing depth estimation methods that do not consider the geometric priors struggle to learn the absolute scale from training with monocular endoscopic sequences. Additionally, conventional methods face difficulties in accurately estimating details on tissue and instruments boundaries. In this paper, we tackle these problems by proposing a novel enhanced scale-aware framework that only uses monocular images with geometric modeling for depth estimation. Specifically, we first propose a multi-resolution depth fusion strategy to enhance the quality of monocular depth estimation. To recover the precise scale between relative depth and real-world values, we further calculate the 3D poses of instruments in the endoscopic scenes by algebraic geometry based on the image-only geometric primitives (i.e., boundaries and tip of instruments). Afterwards, the 3D poses of surgical instruments enable the scale recovery of relative depth maps. By coupling scale factors and relative depth estimation, the scale aware depth of the monocular endoscopic scenes can be estimated. We evaluate the pipeline on in-house endoscopic surgery videos and simulated data. The results demonstrate that our method can learn the absolute scale with geometric modeling and accurately estimate scale-aware depth for monocular scenes. Code is available at: https://github.com/med-air/MonoEndoDepth | Enhanced Scale-aware Depth Estimation for Monocular Endoscopic Scenes with Geometric Modeling | [
"Wei, Ruofeng",
"Li, Bin",
"Chen, Kai",
"Ma, Yiyao",
"Liu, Yunhui",
"Dou, Qi"
] | Conference | 2408.07266 | [
"https://github.com/med-air/MonoEndoDepth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 93 |
|
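Once metric depths are known at a few reference points (here, points on the instrument whose 3D pose has been estimated), the scale between relative and metric depth can be recovered in closed form by least squares. The numbers below are synthetic placeholders, and the paper's geometric pose estimation itself is not shown.

```python
# Closed-form least-squares scale recovery: min_s || s * d_rel - d_metric ||^2.
import numpy as np

def recover_scale(relative_depth, metric_depth):
    d = np.asarray(relative_depth, dtype=float)
    m = np.asarray(metric_depth, dtype=float)
    return float((d * m).sum() / (d * d).sum())

rel = np.array([0.21, 0.34, 0.55])      # relative depths at instrument keypoints (placeholder)
met = np.array([42.0, 68.0, 110.0])     # metric depths in mm from instrument pose (placeholder)
s = recover_scale(rel, met)
metric_depth_map = s * rel              # the same factor scales the full relative depth map
```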
null | https://papers.miccai.org/miccai-2024/paper/1442_paper.pdf | @InProceedings{ Hua_Optimizing_MICCAI2024,
author = { Huang, Yifei and Shen, Chuyun and Li, Wenhao and Wang, Xiangfeng and Jin, Bo and Cai, Haibin },
title = { { Optimizing Efficiency and Effectiveness in Sequential Prompt Strategy for SAM using Reinforcement Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15008 },
month = {October},
pages = { pending },
} | In the rapidly advancing field of medical image analysis, Interactive Medical Image Segmentation (IMIS) plays a crucial role in augmenting diagnostic precision.
Within the realm of IMIS, the Segment Anything Model (SAM), trained on natural images, demonstrates zero-shot capabilities when applied to medical images as the foundation model.
Nevertheless, SAM has been observed to display considerable sensitivity to variations in interaction forms within interactive sequences, introducing substantial uncertainty into the interaction segmentation process.
Consequently, the identification of optimal temporal prompt forms is essential for guiding clinicians in their utilization of SAM.
Furthermore, determining the appropriate moment to terminate an interaction represents a delicate balance between efficiency and effectiveness.
To provide sequentially optimal prompt forms and the best stopping time, we introduce an Adaptive Interaction and Early Stopping mechanism, named AIES.
This mechanism models the IMIS process as a Markov Decision Process (MDP) and employs a Deep Q-network (DQN) with an adaptive penalty mechanism to optimize interaction forms and ascertain the optimal cessation point when implementing SAM.
Upon evaluation using three public datasets, AIES identified an efficient and effective prompt strategy that significantly reduced interaction costs while achieving better segmentation accuracy than the rule-based method. | Optimizing Efficiency and Effectiveness in Sequential Prompt Strategy for SAM using Reinforcement Learning | [
"Huang, Yifei",
"Shen, Chuyun",
"Li, Wenhao",
"Wang, Xiangfeng",
"Jin, Bo",
"Cai, Haibin"
] | Conference | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 94 |
||
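A minimal sketch of the DQN ingredients for choosing the next prompt form (or stopping): a small Q-network over a state summary plus epsilon-greedy action selection. The state encoding, action set, and the paper's adaptive penalty and replay buffer are simplified assumptions, not the exact MDP described above.

```python
# Minimal Q-network and epsilon-greedy action selection for prompt-form choice.
import random
import torch
import torch.nn as nn

ACTIONS = ["positive_point", "negative_point", "box", "stop"]   # assumed action set

class QNetwork(nn.Module):
    def __init__(self, state_dim=64, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, state):
        return self.net(state)            # Q-value for every action

def select_action(qnet, state, epsilon=0.1):
    if random.random() < epsilon:          # exploration
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax(dim=-1).item())

qnet = QNetwork()
state = torch.randn(1, 64)                 # e.g., features summarizing the current mask
action = ACTIONS[select_action(qnet, state)]
```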
null | https://papers.miccai.org/miccai-2024/paper/0973_paper.pdf | @InProceedings{ Xie_Multidisease_MICCAI2024,
author = { Xie, Jianyang and Chen, Xiuju and Zhao, Yitian and Meng, Yanda and Zhao, He and Nguyen, Anh and Li, Xiaoxin and Zheng, Yalin },
title = { { Multi-disease Detection in Retinal Images Guided by Disease Causal Estimation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15001 },
month = {October},
pages = { pending },
} | There have been significant advancements in analyzing retinal images for the diagnosis of eye diseases and other systemic conditions. However, a key challenge is multi-disease detection, particularly in addressing the demands of real-world applications where a patient may have more than one condition. To address this challenge, this study introduces a novel end-to-end approach to multi-disease detection using retinal images guided by disease causal estimation. This model leverages disease-specific features, integrating disease causal relationships and interactions between image features and disease conditions. Specifically, 1) the interactions between disease and image features are captured by cross-attention in a transformer decoder. 2) The causal relationships among diseases are automatically estimated as the directed acyclic graph (DAG) based on the dataset itself and are utilized to regularize disease-specific feature learning with disease causal interaction. 3) A novel retinal multi-disease dataset of 500 patients, including six lesion labels, was generated for evaluation purposes. Compared with other methods, the proposed approach not only achieves multi-disease diagnosis with high performance but also provides a method to estimate the causal relationships among diseases. We evaluated our method on two retinal datasets: a public color fundus photography and an in-house fundus fluorescein angiography (FFA). The results show that the proposed method outperforms other state-of-the-art multi-label models. Our FFA database and code have been released at https://github.com/davelailai/multi-disease-detection-guided-by-causal-estimation.git. | Multi-disease Detection in Retinal Images Guided by Disease Causal Estimation | [
"Xie, Jianyang",
"Chen, Xiuju",
"Zhao, Yitian",
"Meng, Yanda",
"Zhao, He",
"Nguyen, Anh",
"Li, Xiaoxin",
"Zheng, Yalin"
] | Conference | [
"https://github.com/davelailai/multi-disease-detection-guided-by-causal-estimation.git"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 95 |
||
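The "interactions between disease and image features captured by cross-attention in a transformer decoder" can be sketched with learnable disease queries attending over image tokens, one logit per disease for multi-label prediction. The single attention layer and all sizes are simplifications of the method above.

```python
# Learnable disease queries cross-attend over image feature tokens.
import torch
import torch.nn as nn

class DiseaseQueryHead(nn.Module):
    def __init__(self, n_diseases=6, dim=256, n_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_diseases, dim))
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, image_tokens):                      # (B, N_tokens, dim)
        q = self.queries.unsqueeze(0).expand(image_tokens.size(0), -1, -1)
        attended, _ = self.cross_attn(q, image_tokens, image_tokens)
        return self.classifier(attended).squeeze(-1)      # (B, n_diseases) multi-label logits

head = DiseaseQueryHead()
tokens = torch.randn(2, 196, 256)                          # e.g., 14x14 patch features
logits = head(tokens)                                      # train with BCEWithLogitsLoss
```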
null | https://papers.miccai.org/miccai-2024/paper/1165_paper.pdf | @InProceedings{ Ou_AGraphEmbedded_MICCAI2024,
author = { Ou, Zaixin and Jiang, Caiwen and Liu, Yuxiao and Zhang, Yuanwang and Cui, Zhiming and Shen, Dinggang },
title = { { A Graph-Embedded Latent Space Learning and Clustering Framework for Incomplete Multimodal Multiclass Alzheimer’s Disease Diagnosis } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15007 },
month = {October},
pages = { pending },
} | Alzheimer’s disease (AD) is an irreversible neurodegenerative disease, where early diagnosis is crucial for improving prognosis and delaying the progression of the disease. Leveraging multimodal PET images, which can reflect various biomarkers like Aβ and tau protein, is a promising method for AD diagnosis. However, due to the high cost and practical issues of PET imaging, it often faces challenges with incomplete multimodal data. To address this dilemma, in this paper, we propose a Graph-embedded latent Space Learning and Clustering framework, named Graph-SLC, for multiclass AD diagnosis under incomplete multimodal data scenarios. The key concept is leveraging all available subjects, including those with incomplete modality data, to train a network for projecting subjects into their latent representations. These latent representations not only exploit the complementarity of different modalities but also showcase separability among different classes. Specifically, our Graph-SLC consists of three modules, i.e., a multimodal reconstruction module, a subject-similarity graph embedding module, and an AD-oriented latent clustering module. Among them, the multimodal reconstruction module generates subject-specific latent representations that can comprehensively incorporate information from different modalities with guidance from all available modalities. The subject-similarity graph embedding module then enhances the discriminability of different latent representations by ensuring the neighborhood relationships between subjects are preserved in subject-specific latent representations. The AD-oriented latent clustering module facilitates the separability of multiple classes by constraining subject-specific latent representations within the same class to be in the same cluster. Experiments on the ADNI show that our method achieves state-of-the-art performance in multiclass AD diagnosis. Our code is available at https://github.com/Ouzaixin/Graph-SLC. | A Graph-Embedded Latent Space Learning and Clustering Framework for Incomplete Multimodal Multiclass Alzheimer’s Disease Diagnosis | [
"Ou, Zaixin",
"Jiang, Caiwen",
"Liu, Yuxiao",
"Zhang, Yuanwang",
"Cui, Zhiming",
"Shen, Dinggang"
] | Conference | [
"https://github.com/Ouzaixin/Graph-SLC"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 96 |
||
null | https://papers.miccai.org/miccai-2024/paper/3962_paper.pdf | @InProceedings{ Sha_APatientSpecific_MICCAI2024,
author = { Sharma, Susheela and Go, Sarah and Yakay, Zeynep and Kulkarni, Yash and Kapuria, Siddhartha and Amadio, Jordan P. and Rajebi, Reza and Khadem, Mohsen and Navab, Nassir and Alambeigi, Farshid },
title = { { A Patient-Specific Framework for Autonomous Spinal Fixation via a Steerable Drilling Robot } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15006 },
month = {October},
pages = { pending },
} | In this paper, with the goal of enhancing the minimally invasive spinal fixation procedure in osteoporotic patients, we propose a first-of-its-kind image-guided robotic framework for performing and autonomous and patient-specific procedure using a unique concentric tube steerable drilling robot (CT-SDR). Particularly, leveraging CT-SDR, we introduce the concept of J-shape drilling based on a pre-operative trajectory planned in CT scan of a patient followed by appropriate calibration, registration, and navigation steps to safely execute this trajectory in real-time using our unique robotic setup. To thoroughly evaluate the performance of our framework, we performed several experiments on two different vertebral phantoms designed based on CT scan of real patients. | A Patient-Specific Framework for Autonomous Spinal Fixation via a Steerable Drilling Robot | [
"Sharma, Susheela",
"Go, Sarah",
"Yakay, Zeynep",
"Kulkarni, Yash",
"Kapuria, Siddhartha",
"Amadio, Jordan P.",
"Rajebi, Reza",
"Khadem, Mohsen",
"Navab, Nassir",
"Alambeigi, Farshid"
] | Conference | 2405.17606 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Oral | 97 |
|
null | https://papers.miccai.org/miccai-2024/paper/0150_paper.pdf | @InProceedings{ Wan_Toward_MICCAI2024,
author = { Wang, Bomin and Luo, Xinzhe and Zhuang, Xiahai },
title = { { Toward Universal Medical Image Registration via Sharpness-Aware Meta-Continual Learning } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15002 },
month = {October},
pages = { pending },
} | Current deep learning approaches in medical image registration usually face the challenges of distribution shift and data collection, hindering real-world deployment. In contrast, universal medical image registration aims to perform registration on a wide range of clinically relevant tasks simultaneously, thus having tremendous potential for clinical applications. In this paper, we present the first attempt to achieve the goal of universal 3D medical image registration in sequential learning scenarios by proposing a continual learning method. Specifically, we utilize meta-learning with experience replay to mitigating the problem of catastrophic forgetting. To promote the generalizability of meta-continual learning, we further propose sharpness-aware meta-continual learning (SAMCL). We validate the effectiveness of our method on four datasets in a continual learning setup, including brain MR, abdomen CT, lung CT, and abdomen MR-CT image pairs. Results have shown the potential of SAMCL in realizing universal image registration, which performs better than or on par with vanilla sequential or centralized multi-task training strategies. The source code will be available from https://github.com/xzluo97/Continual-Reg. | Toward Universal Medical Image Registration via Sharpness-Aware Meta-Continual Learning | [
"Wang, Bomin",
"Luo, Xinzhe",
"Zhuang, Xiahai"
] | Conference | 2406.17575 | [
"https://github.com/xzluo97/Continual-Reg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 98 |
|
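The sharpness-aware part of SAMCL can be illustrated with a single generic sharpness-aware minimization step: perturb the weights along the gradient, re-evaluate the loss, restore, then take the real update. The meta-learning and experience-replay components are omitted, and this is not the authors' implementation.

```python
# One generic sharpness-aware minimization (SAM) step in PyTorch.
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    x, y = batch
    # 1) gradient at the current weights
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None])) + 1e-12
        eps = []
        for p in model.parameters():          # 2) ascend to a nearby "worst-case" point
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    model.zero_grad()
    # 3) gradient at the perturbed weights, then restore and take the real step
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()

# toy usage
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = (torch.randn(8, 10), torch.randint(0, 2, (8,)))
sam_step(model, torch.nn.functional.cross_entropy, batch, opt)
```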
null | https://papers.miccai.org/miccai-2024/paper/1923_paper.pdf | @InProceedings{ Cha_EMNet_MICCAI2024,
author = { Chang, Ao and Zeng, Jiajun and Huang, Ruobing and Ni, Dong },
title = { { EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D Medical Image Segmentation } },
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
year = {2024},
publisher = {Springer Nature Switzerland},
volume = { LNCS 15009 },
month = {October},
pages = { pending },
} | Convolutional neural networks have primarily led 3D medical image segmentation but may be limited by small receptive fields.
Transformer models excel in capturing global relationships through self-attention but are challenged by high computational costs at high resolutions. Recently, Mamba, a state space model, has emerged as an effective approach for sequential modeling. Inspired by its success, we introduce a novel Mamba-based 3D medical image segmentation model called EM-Net. It not only efficiently captures attentive interaction between regions by integrating and selecting channels, but also effectively utilizes frequency domain to harmonize the learning of features across varying scales, while accelerating training speed. Comprehensive experiments on two challenging multi-organ datasets with other state-of-the-art (SOTA) algorithms show that our method exhibits better segmentation accuracy while requiring nearly half the parameter size of SOTA models and 2x faster training speed. Our code is publicly available at https://github.com/zang0902/EM-Net. | EM-Net: Efficient Channel and Frequency Learning with Mamba for 3D Medical Image Segmentation | [
"Chang, Ao",
"Zeng, Jiajun",
"Huang, Ruobing",
"Ni, Dong"
] | Conference | 2409.17675 | [
"https://github.com/zang0902/EM-Net"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 | Poster | 99 |