Datasets:

Column schema (name: dtype, value range):
- bibtex_url: null
- proceedings: stringlengths, 58 to 58
- bibtext: stringlengths, 511 to 974
- abstract: stringlengths, 92 to 2k
- title: stringlengths, 30 to 207
- authors: sequencelengths, 1 to 22
- id: stringclasses, 1 value
- arxiv_id: stringlengths, 0 to 10
- GitHub: sequencelengths, 1 to 1
- paper_page: stringclasses, 14 values
- n_linked_authors: int64, -1 to 1
- upvotes: int64, -1 to 1
- num_comments: int64, -1 to 0
- n_authors: int64, -1 to 10
- Models: sequencelengths, 0 to 4
- Datasets: sequencelengths, 0 to 1
- Spaces: sequencelengths, 0 to 0
- old_Models: sequencelengths, 0 to 4
- old_Datasets: sequencelengths, 0 to 1
- old_Spaces: sequencelengths, 0 to 0
- paper_page_exists_pre_conf: int64, 0 to 1
- type: stringclasses, 2 values
- unique_id: int64, 0 to 855
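A minimal sketch of how one might load and query records with this schema using the Hugging Face datasets library; the repository name "example-org/miccai-2024-papers" and the "train" split are hypothetical placeholders, since the actual dataset identifier is not given here.

```python
# Minimal sketch, assuming these records are published as a Hugging Face dataset.
# "example-org/miccai-2024-papers" and the "train" split are hypothetical placeholders.
from datasets import load_dataset

ds = load_dataset("example-org/miccai-2024-papers", split="train")

# The features should mirror the column schema listed above.
print(ds.features)

# Example query: papers that provide a non-empty GitHub link and an arXiv ID.
with_code = ds.filter(
    lambda row: any(url.strip() for url in row["GitHub"]) and row["arxiv_id"]
)
for row in with_code.select(range(min(3, len(with_code)))):
    print(row["unique_id"], row["title"], row["arxiv_id"], row["GitHub"][0])
```

In the rows below, -1 in n_linked_authors, upvotes, num_comments, and n_authors appears to mark records without a linked Hugging Face paper page, and empty strings in arxiv_id or GitHub appear to mark missing links.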
null
https://papers.miccai.org/miccai-2024/paper/2205_paper.pdf
@InProceedings{ Che_SpatialDivision_MICCAI2024, author = { Chen, Jixiang and Lin, Yiqun and Sun, Haoran and Li, Xiaomeng }, title = { { Spatial-Division Augmented Occupancy Field for Bone Shape Reconstruction from Biplanar X-Rays } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Retrieving 3D bone anatomy from biplanar X-ray images is crucial since it can significantly reduce radiation exposure compared to traditional CT-based methods. Although various deep learning models have been proposed to address this complex task, they suffer from two limitations: 1) They employ voxel representation for bone shape and exploit 3D convolutional layers to capture anatomy prior, which are memory-intensive and limit the reconstruction resolution. 2) They overlook the prevalent occlusion effect within X-ray images and directly extract features using a simple loss, which struggles to fully exploit complex X-ray information. To tackle these concerns, we present Spatial-division Augmented Occupancy Field (SdAOF). SdAOF adopts the continuous occupancy field for shape representation, reformulating the reconstruction problem as a per-point occupancy value prediction task. Its implicit and continuous nature enables memory-efficient training and fine-scale surface reconstruction at different resolutions during the inference. Moreover, we propose a novel spatial-division augmented distillation strategy to provide feature-level guidance for capturing the occlusion relationship. Extensive experiments on the pelvis reconstruction dataset show that SdAOF outperforms state-of-the-art methods and reconstructs fine-scale bone surfaces. Our code will be made available.
Spatial-Division Augmented Occupancy Field for Bone Shape Reconstruction from Biplanar X-Rays
[ "Chen, Jixiang", "Lin, Yiqun", "Sun, Haoran", "Li, Xiaomeng" ]
Conference
2407.15433
[ "https://github.com/xmed-lab/SdAOF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
100
null
https://papers.miccai.org/miccai-2024/paper/2795_paper.pdf
@InProceedings{ Liu_Causal_MICCAI2024, author = { Liu, Hengxin and Li, Qiang and Nie, Weizhi and Xu, Zibo and Liu, Anan }, title = { { Causal Intervention for Brain tumor Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Due to blurred boundaries between the background and the foreground, along with the overlapping of different tumor lesions, accurate segmentation of brain tumors presents significant challenges. To tackle these issues, we propose a causal intervention model designed for brain tumor segmentation. This model effectively eliminates the influence of irrelevant content on tumor region feature extraction, thereby enhancing segmentation precision. Notably, we adopt a front-door adjustment strategy to mitigate the confounding effects of MRI images on our segmentation outcomes. Our approach specifically targets the removal of background effects and interference in overlapping areas across tumor categories. Comprehensive experiments on the BraTS2020 and BraTS2021 datasets confirm the superior performance of our proposed method, demonstrating its effectiveness in improving accuracy in challenging segmentation scenarios.
Causal Intervention for Brain tumor Segmentation
[ "Liu, Hengxin", "Li, Qiang", "Nie, Weizhi", "Xu, Zibo", "Liu, Anan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
101
null
https://papers.miccai.org/miccai-2024/paper/2136_paper.pdf
@InProceedings{ Zhe_Deep_MICCAI2024, author = { Zheng, Yuanhang and Qiu, Yiqiao and Che, Haoxuan and Chen, Hao and Zheng, Wei-Shi and Wang, Ruixuan }, title = { { Deep Model Reference: Simple yet Effective Confidence Estimation for Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Effective confidence estimation is desired for image classification tasks like clinical diagnosis based on medical imaging. However, it is well known that modern neural networks often show over-confidence in their predictions. Deep Ensemble (DE) is one of the state-of-the-art methods to estimate reliable confidence. In this work, we observed that DE sometimes harms the confidence estimation due to relatively lower confidence output for correctly classified samples. Motivated by the observation that a doctor often refers to other doctors’ opinions to adjust the confidence in his or her own decision, we propose a simple but effective post-hoc confidence estimation method called Deep Model Reference (DMR). Specifically, DMR employs one individual model to make the decision while a group of individual models helps estimate the confidence for that decision. Rigorous proof and extensive empirical evaluations show that DMR achieves superior performance in confidence estimation compared to DE and other state-of-the-art methods, making trustworthy image classification more practical. Source code is available at https://openi.pcl.ac.cn/OpenMedIA/MICCAI2024_DMR.
Deep Model Reference: Simple yet Effective Confidence Estimation for Image Classification
[ "Zheng, Yuanhang", "Qiu, Yiqiao", "Che, Haoxuan", "Chen, Hao", "Zheng, Wei-Shi", "Wang, Ruixuan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
102
null
https://papers.miccai.org/miccai-2024/paper/2191_paper.pdf
@InProceedings{ Che_TinyUNet_MICCAI2024, author = { Chen, Junren and Chen, Rui and Wang, Wei and Cheng, Junlong and Zhang, Lei and Chen, Liangyin }, title = { { TinyU-Net: Lighter yet Better U-Net with Cascaded Multi-Receptive Fields } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
The lightweight models for automatic medical image segmentation have the potential to advance health equity, particularly in limited-resource settings. Nevertheless, their reduced parameters and computational complexity compared to state-of-the-art methods often result in inadequate feature representation, leading to suboptimal segmentation performance. To this end, we propose a Cascade Multi-Receptive Fields (CMRF) module and develop a lighter yet better U-Net based on CMRF, named TinyU-Net, comprising only 0.48M parameters. Specifically, the CMRF module leverages redundant information across multiple channels in the feature map to explore diverse receptive fields by a cost-friendly cascading strategy, improving feature representation while keeping the model lightweight, thus enhancing performance. Testing CMRF-based TinyU-Net on cost-effective medical image segmentation datasets demonstrates superior performance with significantly fewer parameters and lower computational complexity compared to state-of-the-art methods. For instance, in lesion segmentation on the ISIC2018 dataset, TinyU-Net has 52x, 3x, and 194x fewer parameters while achieving +3.90%, +3.65%, and +1.05% higher IoU scores than baseline U-Net, lightweight UNeXt, and high-performance TransUNet, respectively. Notably, the CMRF module exhibits adaptability, easily integrating into other networks. Experimental results suggest that TinyU-Net, with its outstanding performance, holds the potential to be implemented in limited-resource settings, thereby contributing to health equity. The code is available at https://github.com/ChenJunren-Lab/TinyU-Net.
TinyU-Net: Lighter yet Better U-Net with Cascaded Multi-Receptive Fields
[ "Chen, Junren", "Chen, Rui", "Wang, Wei", "Cheng, Junlong", "Zhang, Lei", "Chen, Liangyin" ]
Conference
[ "https://github.com/ChenJunren-Lab/TinyU-Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
103
null
https://papers.miccai.org/miccai-2024/paper/0757_paper.pdf
@InProceedings{ Wu_TeleOR_MICCAI2024, author = { Wu, Yixuan and Hu, Kaiyuan and Shao, Qian and Chen, Jintai and Chen, Danny Z. and Wu, Jian }, title = { { TeleOR: Real-time Telemedicine System for Full-Scene Operating Room } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
The advent of telemedicine represents a transformative development in leveraging technology to extend the reach of specialized medical expertise to remote surgeries, a field where the immediacy of expert guidance is paramount. However, the intricate dynamics of Operating Room (OR) scenes pose unique challenges for telemedicine, particularly in achieving high-fidelity, real-time scene reconstruction and transmission amidst obstructions and bandwidth limitations. This paper introduces TeleOR, a pioneering system designed to address these challenges through real-time OR scene reconstruction for Tele-intervention. TeleOR distinguishes itself with three innovative approaches: dynamic self-calibration, which leverages inherent scene features for calibration without the need for preset markers, allowing for obstacle avoidance and real-time camera adjustment; selective OR reconstruction, focusing on dynamically changing scene segments to reduce reconstruction complexity; and viewport-adaptive transmission, optimizing data transmission based on real-time client feedback to efficiently deliver high-quality 3D reconstructions within bandwidth constraints. Comprehensive experiments on the 4D-OR surgical scene dataset demonstrate the superiority and applicability of TeleOR, illuminating the potential to revolutionize tele-interventions by overcoming the spatial and technical barriers inherent in remote surgical guidance.
TeleOR: Real-time Telemedicine System for Full-Scene Operating Room
[ "Wu, Yixuan", "Hu, Kaiyuan", "Shao, Qian", "Chen, Jintai", "Chen, Danny Z.", "Wu, Jian" ]
Conference
2407.19763
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
104
null
https://papers.miccai.org/miccai-2024/paper/3047_paper.pdf
@InProceedings{ Zho_SBCAL_MICCAI2024, author = { Zhou, Taimin and Yang, Jin and Cui, Lingguo and Zhang, Nan and Chai, Senchun }, title = { { SBC-AL: Structure and Boundary Consistency-based Active Learning for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Deep learning-based (DL) models have shown superior representation capabilities in medical image segmentation tasks. However, these representation powers require DL models to be trained on extensive annotated data, but the high annotation costs hinder this, thus limiting their performance. Active learning (AL) is a feasible solution for efficiently training models to demonstrate representation powers under low annotation budgets. It is achieved by querying unlabeled data for new annotations to continuously train models. Thus, the performance of AL methods largely depends on the query strategy. However, designing an efficient query strategy remains challenging due to limited information from unlabeled data for querying. Another challenge is that few methods exploit information in segmentation results for querying. To address them, first, we propose Structure-aware Feature Prediction (SFP) and Attentional Segmentation Refinement (ASR) modules to enable models to generate segmentation results with sufficient information for querying. The incorporation of these modules enhances the models' ability to capture information related to anatomical structures and boundaries. Additionally, we propose an uncertainty-based querying strategy to leverage information in segmentation results. Specifically, uncertainty is evaluated by assessing the consistency of anatomical structure and boundary information within segmentation results by calculating Structure Consistency Score (SCS) and Boundary Consistency Score (BCS). Subsequently, data is queried for annotations based on uncertainty. The incorporation of SFP and ASR-enhanced segmentation models and this uncertainty-based querying strategy into a standard AL strategy leads to a novel method, termed Structure and Boundary Consistency-based Active Learning (SBC-AL).
SBC-AL: Structure and Boundary Consistency-based Active Learning for Medical Image Segmentation
[ "Zhou, Taimin", "Yang, Jin", "Cui, Lingguo", "Zhang, Nan", "Chai, Senchun" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
105
null
https://papers.miccai.org/miccai-2024/paper/0783_paper.pdf
@InProceedings{ Zha_WIALD2ND_MICCAI2024, author = { Zhao, Haoyu and Gu, Yuliang and Zhao, Zhou and Du, Bo and Xu, Yongchao and Yu, Rui }, title = { { WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
In clinical examinations and diagnoses, low-dose computed tomography (LDCT) is crucial for minimizing health risks compared with normal-dose computed tomography (NDCT). However, reducing the radiation dose compromises the signal-to-noise ratio, leading to degraded quality of CT images. To address this, we analyze the LDCT denoising task from the frequency perspective based on experimental results, and then introduce a novel self-supervised CT image denoising method called WIA-LD2ND, using only NDCT data. The proposed WIA-LD2ND comprises two modules: Wavelet-based Image Alignment (WIA) and Frequency-Aware Multi-scale Loss (FAM). First, WIA is introduced to align NDCT with LDCT by mainly adding noise to the high-frequency components, which is the main difference between LDCT and NDCT. Second, to better capture high-frequency components and detailed information, Frequency-Aware Multi-scale Loss (FAM) is proposed by effectively utilizing multi-scale feature space. Extensive experiments on two public LDCT denoising datasets demonstrate that our WIA-LD2ND, which uses only NDCT, outperforms several existing state-of-the-art weakly-supervised and self-supervised methods.
WIA-LD2ND: Wavelet-based Image Alignment for Self-supervised Low-Dose CT Denoising
[ "Zhao, Haoyu", "Gu, Yuliang", "Zhao, Zhou", "Du, Bo", "Xu, Yongchao", "Yu, Rui" ]
Conference
2403.11672
[ "https://github.com/zhaohaoyu376/WI-LD2ND" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
106
null
https://papers.miccai.org/miccai-2024/paper/1764_paper.pdf
@InProceedings{ Zha_MemWarp_MICCAI2024, author = { Zhang, Hang and Chen, Xiang and Hu, Renjiu and Liu, Dongdong and Li, Gaolei and Wang, Rongguang }, title = { { MemWarp: Discontinuity-Preserving Cardiac Registration with Memorized Anatomical Filters } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Many existing learning-based deformable image registration methods impose constraints on deformation fields to ensure they are globally smooth and continuous. However, this assumption does not hold in cardiac image registration, where different anatomical regions exhibit asymmetric motions during respiration and movements due to sliding organs within the chest. Consequently, such global constraints fail to accommodate local discontinuities across organ boundaries, potentially resulting in erroneous and unrealistic displacement fields. In this paper, we address this issue with MemWarp, a learning framework that leverages a memory network to store prototypical information tailored to different anatomical regions. MemWarp is different from earlier approaches in two main aspects: firstly, by decoupling feature extraction from similarity matching in moving and fixed images, it facilitates more effective utilization of feature maps; secondly, despite its capability to preserve discontinuities, it eliminates the need for segmentation masks during model inference. In experiments on a publicly available cardiac dataset, our method achieves considerable improvements in registration accuracy and produces realistic deformations, outperforming state-of-the-art methods with a remarkable 7.1% Dice score improvement over the runner-up semi-supervised method. Source code will be available at https://github.com/tinymilky/Mem-Warp.
MemWarp: Discontinuity-Preserving Cardiac Registration with Memorized Anatomical Filters
[ "Zhang, Hang", "Chen, Xiang", "Hu, Renjiu", "Liu, Dongdong", "Li, Gaolei", "Wang, Rongguang" ]
Conference
2407.08093
[ "https://github.com/tinymilky/Mem-Warp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
107
null
https://papers.miccai.org/miccai-2024/paper/1911_paper.pdf
@InProceedings{ Liu_Controllable_MICCAI2024, author = { Liu, Shiyu and Wang, Fan and Ren, Zehua and Lian, Chunfeng and Ma, Jianhua }, title = { { Controllable Counterfactual Generation for Interpretable Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Counterfactual generation is used to address the lack of interpretability and insufficient data in deep diagnostic models. By synthesizing counterfactual images based on an image-to-image generation model trained with unpaired data, we can interpret the output of a classification model according to a hypothetical class and enhance the training dataset. Recent counterfactual generation approaches based on autoencoders or generative adversarial models are difficult to train or struggle to produce realistic images due to the trade-off between image similarity and class difference. In this paper, we propose a new counterfactual generation method based on diffusion models. Our method combines the class-condition control from classifier-free guidance and the reference-image control with attention injection to transform the input images with unknown labels into a hypothesis class. Our method can flexibly adjust the generation trade-off in the inference stage instead of the training stage, providing controllable visual explanations consistent with medical knowledge for clinicians. We demonstrate the effectiveness of our method on the ADNI structural MRI dataset for Alzheimer’s disease diagnosis and conditional 3D image-to-image generation tasks. Our codes can be found at https://github.com/ladderlab-xjtu/ControlCG.
Controllable Counterfactual Generation for Interpretable Medical Image Classification
[ "Liu, Shiyu", "Wang, Fan", "Ren, Zehua", "Lian, Chunfeng", "Ma, Jianhua" ]
Conference
[ "https://github.com/ladderlab-xjtu/ControlCG" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
108
null
https://papers.miccai.org/miccai-2024/paper/2031_paper.pdf
@InProceedings{ Che_RoCoSDF_MICCAI2024, author = { Chen, Hongbo and Gao, Yuchong and Zhang, Shuhang and Wu, Jiangjie and Ma, Yuexin and Zheng, Rui }, title = { { RoCoSDF: Row-Column Scanned Neural Signed Distance Fields for Freehand 3D Ultrasound Imaging Shape Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
The reconstruction of high-quality shape geometry is crucial for developing freehand 3D ultrasound imaging. However, the shape reconstruction of multi-view ultrasound data remains challenging due to the elevation distortion caused by thick transducer probes. In this paper, we present a novel learning-based framework RoCoSDF, which can effectively generate an implicit surface through continuous shape representations derived from row-column scanned datasets. In RoCoSDF, we encode the datasets from different views into the corresponding neural signed distance function (SDF) and then operate all SDFs in a normalized 3D space to restore the actual surface contour. Without requiring pre-training on large-scale ground truth shapes, our approach can synthesize a smooth and continuous signed distance field from multi-view SDFs to implicitly represent the actual geometry. Furthermore, two regularizers are introduced to facilitate shape refinement by constraining the SDF near the surface. The experiments on twelve shape datasets acquired by two ultrasound transducer probes validate that RoCoSDF can effectively reconstruct accurate geometric shapes from multi-view ultrasound data, which outperforms current reconstruction methods. Code is available at https://github.com/chenhbo/RoCoSDF.
RoCoSDF: Row-Column Scanned Neural Signed Distance Fields for Freehand 3D Ultrasound Imaging Shape Reconstruction
[ "Chen, Hongbo", "Gao, Yuchong", "Zhang, Shuhang", "Wu, Jiangjie", "Ma, Yuexin", "Zheng, Rui" ]
Conference
2408.07325
[ "https://github.com/chenhbo/RoCoSDF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
109
null
https://papers.miccai.org/miccai-2024/paper/0215_paper.pdf
@InProceedings{ Yin_HistoSyn_MICCAI2024, author = { Yin, Chong and Liu, Siqi and Wong, Vincent Wai-Sun and Yuen, Pong C. }, title = { { HistoSyn: Histomorphology-Focused Pathology Image Synthesis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Examining pathology images through visual microscopy is widely considered the most reliable method for diagnosing different medical conditions. Although deep learning-based methods show great potential for aiding pathology image analysis, they are hindered by the lack of accessible large-scale annotated data. Large text-to-image models have significantly advanced the synthesis of diverse contexts within natural image analysis, thereby expanding existing datasets. However, the variety of histomorphological features in pathology images, which differ from that of natural images, has been less explored. In this paper, we propose a histomorphology-focused pathology image synthesis (HistoSyn) method. Specifically, HistoSyn constructs instructive textural prompts from spatial and morphological attributes of pathology images. It involves analyzing the intricate patterns and structures found within pathological images and translating these visual details into descriptive prompts. Furthermore, HistoSyn presents new criteria for image quality evaluation focusing on spatial and morphological characteristics. Experiments have demonstrated that our method can achieve a diverse range of high-quality pathology images, with a focus on histomorphological attributes.
HistoSyn: Histomorphology-Focused Pathology Image Synthesis
[ "Yin, Chong", "Liu, Siqi", "Wong, Vincent Wai-Sun", "Yuen, Pong C." ]
Conference
[ "https://github.com/7LFB/HistoSyn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
110
null
https://papers.miccai.org/miccai-2024/paper/3261_paper.pdf
@InProceedings{ Kim_Enhancing_MICCAI2024, author = { Kim, Yunsoo and Wu, Jinge and Abdulle, Yusuf and Gao, Yue and Wu, Honghan }, title = { { Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Recent advancements in Computer Assisted Diagnosis have shown promising performance in medical imaging tasks, particularly in chest X-ray analysis. However, the interaction between these models and radiologists has been primarily limited to input images. This work proposes a novel approach to enhance human-computer interaction in chest X-ray analysis using Vision-Language Models (VLMs) enhanced with radiologists’ attention by incorporating eye gaze data alongside textual prompts. Our approach leverages heatmaps generated from eye gaze data, overlaying them onto medical images to highlight areas of intense radiologist focus during chest X-ray evaluation. We evaluate this methodology in tasks such as visual question answering, chest X-ray report automation, error detection, and differential diagnosis. Our results demonstrate that the inclusion of eye gaze information significantly enhances the accuracy of chest X-ray analysis. Also, the impact of eye gaze on fine-tuning was confirmed as it outperformed other medical VLMs in all tasks except visual question answering. This work marks the potential of leveraging both the VLM’s capabilities and the radiologist’s domain knowledge to improve the capabilities of AI models in medical imaging, paving a novel way for Computer Assisted Diagnosis with a human-centred AI.
Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns
[ "Kim, Yunsoo", "Wu, Jinge", "Abdulle, Yusuf", "Gao, Yue", "Wu, Honghan" ]
Conference
2404.02370
[ "https://github.com/knowlab/CXR_VLM_EyeGaze" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
111
null
https://papers.miccai.org/miccai-2024/paper/0139_paper.pdf
@InProceedings{ Lyu_SuperpixelGuided_MICCAI2024, author = { Lyu, Fei and Xu, Jingwen and Zhu, Ye and Wong, Grace Lai-Hung and Yuen, Pong C. }, title = { { Superpixel-Guided Segment Anything Model for Liver Tumor Segmentation with Couinaud Segment Prompt } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
The Segment Anything Model (SAM) is a powerful foundation model which has shown impressive performance for generic image segmentation. However, directly applying SAM to liver tumor segmentation presents challenges due to the domain gap between nature images and medical images, and the requirement of labor-intensive manual prompt generation. To address these challenges, we first investigate text promptable liver tumor segmentation by Couinaud segment, where Couinaud segment prompt can be automatically extracted from radiology reports to reduce massive manual efforts. Moreover, we propose a novel CouinaudSAM to adapt SAM for liver tumor segmentation. Specifically, we achieve this by: 1) a superpixel-guided prompt generation approach to effectively transform Couinaud segment prompt into SAM-acceptable point prompt; and 2) a difficulty-aware prompt sampling strategy to make model training more effective and efficient. Experimental results on the public liver tumor segmentation dataset demonstrate that our method outperforms the other state-of-the-art methods.
Superpixel-Guided Segment Anything Model for Liver Tumor Segmentation with Couinaud Segment Prompt
[ "Lyu, Fei", "Xu, Jingwen", "Zhu, Ye", "Wong, Grace Lai-Hung", "Yuen, Pong C." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
112
null
https://papers.miccai.org/miccai-2024/paper/1356_paper.pdf
@InProceedings{ Gu_Revisiting_MICCAI2024, author = { Gu, Yi and Lin, Yi and Cheng, Kwang-Ting and Chen, Hao }, title = { { Revisiting Deep Ensemble Uncertainty for Enhanced Medical Anomaly Detection } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Medical anomaly detection (AD) is crucial in pathological identification and localization. Current methods typically rely on uncertainty estimation in deep ensembles to detect anomalies, assuming that ensemble learners should agree on normal samples while exhibiting disagreement on unseen anomalies in the output space. However, these methods may suffer from inadequate disagreement on anomalies or diminished agreement on normal samples. To tackle these issues, we propose D2UE, a Diversified Dual-space Uncertainty Estimation framework for medical anomaly detection. To effectively balance agreement and disagreement for anomaly detection, we propose Redundancy-Aware Repulsion (RAR), which uses a similarity kernel that remains invariant to both isotropic scaling and orthogonal transformations, explicitly promoting diversity in learners’ feature space. Moreover, to accentuate anomalous regions, we develop Dual-Space Uncertainty (DSU), which utilizes the ensemble’s uncertainty in input and output spaces. In input space, we first calculate gradients of reconstruction error with respect to input images. The gradients are then integrated with reconstruction outputs to estimate uncertainty for inputs, enabling effective anomaly discrimination even when output space disagreement is minimal. We conduct a comprehensive evaluation on five medical benchmarks with different backbones. Experimental results demonstrate the superiority of our method over state-of-the-art methods and the effectiveness of each component in our framework.
Revisiting Deep Ensemble Uncertainty for Enhanced Medical Anomaly Detection
[ "Gu, Yi", "Lin, Yi", "Cheng, Kwang-Ting", "Chen, Hao" ]
Conference
2409.17485
[ "https://github.com/Rubiscol/D2UE" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
113
null
https://papers.miccai.org/miccai-2024/paper/2261_paper.pdf
@InProceedings{ Dan_SiNGR_MICCAI2024, author = { Dang, Trung and Nguyen, Huy Hoang and Tiulpin, Aleksei }, title = { { SiNGR: Brain Tumor Segmentation via Signed Normalized Geodesic Transform Regression } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
One of the primary challenges in brain tumor segmentation arises from the uncertainty of voxels close to tumor boundaries. However, the conventional process of generating ground truth segmentation masks fails to treat such uncertainties properly. Those "hard labels" with 0s and 1s conceptually influenced the majority of prior studies on brain image segmentation. As a result, tumor segmentation is often solved through voxel classification. In this work, we instead view this problem as a voxel-level regression, where the ground truth represents a certainty mapping from any pixel to the border of the tumor. We propose a novel ground truth label transformation, which is based on a signed geodesic transform, to capture the uncertainty in brain tumors’ vicinity. We combine this idea with a Focal-like regression L1-loss that enables effective regression learning in high-dimensional output space by appropriately weighting voxels according to their difficulty. We thoroughly conduct an experimental evaluation to validate the components of our proposed method, compare it to a diverse array of state-of-the-art segmentation models, and show that it is architecture-agnostic. The code of our method is made publicly available (https://github.com/Oulu-IMEDS/SiNGR/).
SiNGR: Brain Tumor Segmentation via Signed Normalized Geodesic Transform Regression
[ "Dang, Trung", "Nguyen, Huy Hoang", "Tiulpin, Aleksei" ]
Conference
2405.16813
[ "https://github.com/Oulu-IMEDS/SiNGR" ]
https://huggingface.co./papers/2405.16813
1
0
0
3
[]
[]
[]
[]
[]
[]
1
Poster
114
null
https://papers.miccai.org/miccai-2024/paper/2040_paper.pdf
@InProceedings{ Agg_Acrosssubject_MICCAI2024, author = { Aggarwal, Himanshu and Al-Shikhley, Liza and Thirion, Bertrand }, title = { { Across-subject ensemble-learning alleviates the need for large samples for fMRI decoding } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Decoding cognitive states from functional magnetic resonance imaging is central to understanding the functional organization of the brain. Within-subject decoding avoids between-subject correspondence problems but requires large sample sizes to make accurate predictions; obtaining such large sample sizes is both challenging and expensive. Here, we investigate an ensemble approach to decoding that combines the classifiers trained on data from other subjects to decode cognitive states in a new subject. We compare it with the conventional decoding approach on five different datasets and cognitive tasks. We find that it outperforms the conventional approach by up to 20% in accuracy, especially for datasets with limited per-subject data. The ensemble approach is particularly advantageous when the classifier is trained in voxel space. Furthermore, a Multi-layer Perceptron turns out to be a good default choice as an ensemble method. These results show that the pre-training strategy reduces the need for large per-subject data.
Across-subject ensemble-learning alleviates the need for large samples for fMRI decoding
[ "Aggarwal, Himanshu", "Al-Shikhley, Liza", "Thirion, Bertrand" ]
Conference
2407.12056
[ "https://github.com/man-shu/ensemble-fmri" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
115
null
https://papers.miccai.org/miccai-2024/paper/1314_paper.pdf
@InProceedings{ Zul_CardioSpectrum_MICCAI2024, author = { Zuler, Shahar and Tejman-Yarden, Shai and Raviv, Dan }, title = { { CardioSpectrum: Comprehensive Myocardium Motion Analysis with 3D Deep Learning and Geometric Insights } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
The ability to map left ventricle (LV) myocardial motion using computed tomography angiography (CTA) is essential to diagnosing cardiovascular conditions and guiding interventional procedures. Due to their inherent locality, conventional neural networks typically have difficulty predicting subtle tangential movements, which considerably lessens the level of precision at which myocardium three-dimensional (3D) mapping can be performed. Using 3D optical flow techniques and Functional Maps (FMs), we present a comprehensive approach to address this problem. FMs are known for their capacity to capture global geometric features, thus providing a fuller understanding of 3D geometry. As an alternative to traditional segmentation-based priors, we employ surface-based two-dimensional (2D) constraints derived from spectral correspondence methods. Our 3D deep learning architecture, based on the ARFlow model, is optimized to handle complex 3D motion analysis tasks. By incorporating FMs, we can capture the subtle tangential movements of the myocardium surface precisely, hence significantly improving the accuracy of 3D mapping of the myocardium. The experimental results confirm the effectiveness of this method in enhancing myocardium motion analysis. This approach can contribute to improving cardiovascular diagnosis and treatment. Our code and additional resources are available at: https://shaharzuler.github.io/CardioSpectrumPage
CardioSpectrum: Comprehensive Myocardium Motion Analysis with 3D Deep Learning and Geometric Insights
[ "Zuler, Shahar", "Tejman-Yarden, Shai", "Raviv, Dan" ]
Conference
2407.03794
[ "https://github.com/shaharzuler/CardioSpectrum" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
116
null
https://papers.miccai.org/miccai-2024/paper/1064_paper.pdf
@InProceedings{ Jin_Location_MICCAI2024, author = { Jin, Qiangguo and Huang, Jiapeng and Sun, Changming and Cui, Hui and Xuan, Ping and Su, Ran and Wei, Leyi and Wu, Yu-Jie and Wu, Chia-An and Duh, Henry B. L. and Lu, Yueh-Hsun }, title = { { Location embedding based pairwise distance learning for fine-grained diagnosis of urinary stones } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
The precise diagnosis of urinary stones is crucial for devising effective treatment strategies. The diagnostic process, however, is often complicated by the low contrast between stones and surrounding tissues, as well as the variability in stone locations across different patients. To address this issue, we propose a novel location embedding based pairwise distance learning network (LEPD-Net) that leverages low-dose abdominal X-ray imaging combined with location information for the fine-grained diagnosis of urinary stones. LEPD-Net enhances the representation of stone-related features through context-aware region enhancement, incorporates critical location knowledge via stone location embedding, and achieves recognition of fine-grained objects with our innovative fine-grained pairwise distance learning. Additionally, we have established an in-house dataset on urinary tract stones to demonstrate the effectiveness of our proposed approach. Comprehensive experiments conducted on this dataset reveal that our framework significantly surpasses existing state-of-the-art methods.
Location embedding based pairwise distance learning for fine-grained diagnosis of urinary stones
[ "Jin, Qiangguo", "Huang, Jiapeng", "Sun, Changming", "Cui, Hui", "Xuan, Ping", "Su, Ran", "Wei, Leyi", "Wu, Yu-Jie", "Wu, Chia-An", "Duh, Henry B. L.", "Lu, Yueh-Hsun" ]
Conference
2407.00431
[ "https://github.com/BioMedIA-repo/LEPD-Net.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
117
null
https://papers.miccai.org/miccai-2024/paper/0730_paper.pdf
@InProceedings{ Bao_Realworld_MICCAI2024, author = { Bao, Mingkun and Wang, Yan and Wei, Xinlong and Jia, Bosen and Fan, Xiaolin and Lu, Dong and Gu, Yifan and Cheng, Jian and Zhang, Yingying and Wang, Chuanyu and Zhu, Haogang }, title = { { Real-world Visual Navigation for Cardiac Ultrasound View Planning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Echocardiography (ECHO) is commonly used to assist in the diagnosis of cardiovascular diseases (CVDs). However, manually conducting standardized ECHO view acquisitions by manipulating the probe demands significant experience and training for sonographers. In this work, we propose a visual navigation system for cardiac ultrasound view planning, designed to assist novice sonographers in accurately obtaining the required views for CVDs diagnosis. The system introduces a view-agnostic feature extractor to explore the spatial relationships between source frame views, learning the relative rotations among different frames for network regression, thereby facilitating transfer learning to improve the accuracy and robustness of identifying specific target planes. Additionally, we present a target consistency loss to ensure that frames within the same scan regress to the same target plane. The experimental results demonstrate that the average error in the apical four-chamber view (A4C) can be reduced to 7.055 degrees. Moreover, results from practical clinical validation indicate that, with the guidance of the visual navigation system, the average time for acquiring A4C view can be reduced by at least 3.86 times, which is instructive for the clinical practice of novice sonographers.
Real-world Visual Navigation for Cardiac Ultrasound View Planning
[ "Bao, Mingkun", "Wang, Yan", "Wei, Xinlong", "Jia, Bosen", "Fan, Xiaolin", "Lu, Dong", "Gu, Yifan", "Cheng, Jian", "Zhang, Yingying", "Wang, Chuanyu", "Zhu, Haogang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
118
null
https://papers.miccai.org/miccai-2024/paper/2265_paper.pdf
@InProceedings{ Zha_ModelMix_MICCAI2024, author = { Zhang, Ke and Patel, Vishal M. }, title = { { ModelMix: A New Model-Mixup Strategy to Minimize Vicinal Risk across Tasks for Few-scribble based Cardiac Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Pixel-level dense labeling is both resource-intensive and time-consuming, whereas weak labels such as scribbles present a more feasible alternative to full annotations. However, training segmentation networks with weak supervision from scribbles remains challenging. Inspired by the fact that different segmentation tasks can be correlated with each other, we introduce a new approach to few-scribble supervised segmentation based on model parameter interpolation, termed ModelMix. Leveraging the prior knowledge that linearly interpolating convolution kernels and bias terms should result in linear interpolations of the corresponding feature vectors, ModelMix constructs virtual models using convex combinations of convolutional parameters from separate encoders. We then regularize the model set to minimize vicinal risk across tasks in both unsupervised and scribble-supervised ways. Validated on three open datasets, i.e., ACDC, MSCMRseg, and MyoPS, our few-scribble guided ModelMix significantly surpasses the performance of state-of-the-art scribble supervised methods. Our code is available at https://github.com/BWGZK/ModelMix.
ModelMix: A New Model-Mixup Strategy to Minimize Vicinal Risk across Tasks for Few-scribble based Cardiac Segmentation
[ "Zhang, Ke", "Patel, Vishal M." ]
Conference
2406.13237
[ "https://github.com/BWGZK/ModelMix/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
119
null
https://papers.miccai.org/miccai-2024/paper/1749_paper.pdf
@InProceedings{ Yan_Adversarial_MICCAI2024, author = { Yang, Yiguang and Ning, Guochen and Zhong, Changhao and Liao, Hongen }, title = { { Adversarial Diffusion Model for Domain-Adaptive Depth Estimation in Bronchoscopic Navigation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
In bronchoscopic navigation, depth estimation has emerged as a promising method with higher robustness for localizing camera and obtaining scene geometry. While many supervised approaches have shown success for natural images, the scarcity of depth annotations limits their deployment in bronchoscopic scenarios. To address the issue of lacking depth labels, a common approach for unsupervised domain adaptation (UDA) includes one-shot mapping through generative adversarial networks. However, conventional adversarial models that directly recover the image distribution can suffer from reduced sample fidelity and learning biases. In this study, we propose a novel adversarial diffusion model for domain-adaptive depth estimation on bronchoscopic images. Our two-stage approach sequentially trains a supervised network on labeled virtual images, and an unsupervised adversarial network that aligns domain-invariant representations for cross-domain adaptation. This model reformulates depth estimation at each stage as an iterative diffusion-denoising process within the latent space for mitigating mapping biases and enhancing model performance. The experiments on clinical sequences show the superiority of our method on depth estimation as well as geometry reconstruction for bronchoscopic navigation.
Adversarial Diffusion Model for Domain-Adaptive Depth Estimation in Bronchoscopic Navigation
[ "Yang, Yiguang", "Ning, Guochen", "Zhong, Changhao", "Liao, Hongen" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
120
null
https://papers.miccai.org/miccai-2024/paper/0765_paper.pdf
@InProceedings{ Hou_EnergyBased_MICCAI2024, author = { Hou, Zeyi and Yan, Ruixin and Yan, Ziye and Lang, Ning and Zhou, Xiuzhuang }, title = { { Energy-Based Controllable Radiology Report Generation with Medical Knowledge } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Automated generation of radiology reports from chest X-rays has the potential to substantially reduce the workload of radiologists. Recent advances in report generation using deep learning algorithms have achieved significant results, benefiting from the incorporation of medical knowledge. However, incorporating additional knowledge or constraints into existing models often requires either altering network structures or task-specific fine-tuning. In this paper, we propose an energy-based controllable report generation method, named ECRG. Specifically, our method directly utilizes diverse off-the-shelf medical expert models or knowledge to design energy functions, which are integrated into pre-trained report generation models during the inference stage, without any alterations to the network structure or fine-tuning. We also propose an acceleration algorithm to improve the efficiency of sampling the complex multi-modal distribution of report generation. ECRG is model-agnostic and can be readily used for other pre-trained report generation models. Two cases are presented on the design of energy functions tailored to medical expert systems and knowledge. The experiments on widely used datasets Chest ImaGenome v1.0.0 and MIMIC-CXR demonstrate the effectiveness of our proposed approach.
Energy-Based Controllable Radiology Report Generation with Medical Knowledge
[ "Hou, Zeyi", "Yan, Ruixin", "Yan, Ziye", "Lang, Ning", "Zhou, Xiuzhuang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
121
null
https://papers.miccai.org/miccai-2024/paper/0245_paper.pdf
@InProceedings{ He_FRCNet_MICCAI2024, author = { He, Along and Li, Tao and Wu, Yanlin and Zou, Ke and Fu, Huazhu }, title = { { FRCNet: Frequency and Region Consistency for Semi-supervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Limited labeled data hinder the application of deep learning in the medical domain. In clinical practice, there are sufficient unlabeled data that are not effectively used, and semi-supervised learning (SSL) is a promising way for leveraging these unlabeled data. However, existing SSL methods ignore frequency-domain and region-level information, which is important for lesion regions located at low frequencies and exhibiting significant scale changes. In this paper, we introduce two consistency regularization strategies for semi-supervised medical image segmentation, including frequency domain consistency (FDC) to assist the feature learning in frequency domain and multi-granularity region similarity consistency (MRSC) to perform multi-scale region-level local context information feature learning. With the help of the proposed FDC and MRSC, we can leverage their powerful feature representation capabilities in an effective and efficient way. Extensive experiments on two medical image segmentation datasets show that our approach achieves large performance gains and exceeds other state-of-the-art methods. Code will be available.
FRCNet: Frequency and Region Consistency for Semi-supervised Medical Image Segmentation
[ "He, Along", "Li, Tao", "Wu, Yanlin", "Zou, Ke", "Fu, Huazhu" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
122
null
https://papers.miccai.org/miccai-2024/paper/2311_paper.pdf
@InProceedings{ Kol_MedCLIPSAM_MICCAI2024, author = { Koleilat, Taha and Asgariandehkordi, Hojat and Rivaz, Hassan and Xiao, Yiming }, title = { { MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Medical image segmentation of anatomical structures and pathology is crucial in modern clinical diagnosis, disease study, and treatment planning. To date, great progress has been made in deep learning-based segmentation techniques, but most methods still lack data efficiency, generalizability, and interactability. Consequently, the development of new, precise segmentation methods that demand fewer labeled datasets is of utmost importance in medical image analysis. Recently, the emergence of foundation models, such as CLIP and Segment-Anything-Model (SAM), with comprehensive cross-domain representation opened the door for interactive and universal image segmentation. However, exploration of these models for data-efficient medical image segmentation is still limited but is highly necessary. In this paper, we propose a novel framework, called MedCLIP-SAM that combines CLIP and SAM models to generate segmentation of clinical scans using text prompts in both zero-shot and weakly supervised settings. To achieve this, we employed a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss to fine-tune the BiomedCLIP model and the recent gScoreCAM to generate prompts to obtain segmentation masks from SAM in a zero-shot setting. Additionally, we explored the use of zero-shot segmentation labels in a weakly supervised paradigm to improve the segmentation quality further. By extensively testing three diverse segmentation tasks and medical image modalities (breast tumor ultrasound, brain tumor MRI, and lung X-ray), our proposed framework has demonstrated excellent accuracy. Code is available at https://github.com/HealthX-Lab/MedCLIP-SAM.
MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation
[ "Koleilat, Taha", "Asgariandehkordi, Hojat", "Rivaz, Hassan", "Xiao, Yiming" ]
Conference
2403.20253
[ "https://github.com/HealthX-Lab/MedCLIP-SAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
123
null
https://papers.miccai.org/miccai-2024/paper/1340_paper.pdf
@InProceedings{ Cui_MCAD_MICCAI2024, author = { Cui, Jiaqi and Zeng, Xinyi and Zeng, Pinxian and Liu, Bo and Wu, Xi and Zhou, Jiliu and Wang, Yan }, title = { { MCAD: Multi-modal Conditioned Adversarial Diffusion Model for High-Quality PET Image Reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Radiation hazards associated with standard-dose positron emission tomography (SPET) images remain a concern, whereas the quality of low-dose PET (LPET) images fails to meet clinical requirements. Therefore, there is great interest in reconstructing SPET images from LPET images. However, prior studies focus solely on image data, neglecting vital complementary information from other modalities, e.g., patients’ clinical tabular data, resulting in compromised reconstruction with limited diagnostic utility. Moreover, they often overlook the semantic consistency between real SPET and reconstructed images, leading to distorted semantic contexts. To tackle these problems, we propose a novel Multi-modal Conditioned Adversarial Diffusion model (MCAD) to reconstruct SPET images from multi-modal inputs, including LPET images and clinical tabular data. Specifically, our MCAD incorporates a Multi-modal conditional Encoder (Mc-Encoder) to extract multi-modal features, followed by a conditional diffusion process to blend noise with multi-modal features and gradually map blended features to the target SPET images. To balance multi-modal inputs, the Mc-Encoder embeds Optimal Multi-modal Transport co-Attention (OMTA) to narrow the heterogeneity gap between image and tabular data while capturing their interactions, providing sufficient guidance for reconstruction. In addition, to mitigate semantic distortions, we introduce the Multi-Modal Masked Text Reconstruction (M3TRec), which leverages semantic knowledge extracted from denoised PET images to restore the masked clinical tabular data, thereby compelling the network to maintain accurate semantics during reconstruction. To expedite the diffusion process, we further introduce an adversarial diffusive network with a reduced number of diffusion steps. Experiments show that our method achieves state-of-the-art performance both qualitatively and quantitatively.
MCAD: Multi-modal Conditioned Adversarial Diffusion Model for High-Quality PET Image Reconstruction
[ "Cui, Jiaqi", "Zeng, Xinyi", "Zeng, Pinxian", "Liu, Bo", "Wu, Xi", "Zhou, Jiliu", "Wang, Yan" ]
Conference
2406.13150
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
124
null
https://papers.miccai.org/miccai-2024/paper/3386_paper.pdf
@InProceedings{ Ber_Simulation_MICCAI2024, author = { Bergere, Bastien and Dautremer, Thomas and Comtat, Claude }, title = { { Simulation Based Inference for PET iterative reconstruction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
The analytical projector (system matrix) used in most PET reconstructions does not incorporate Compton scattering and other important physical effects that affect the process generating the PET data, which can lead to biases. In our work, we define the projector from the generative model of a Monte-Carlo simulator, which already encompasses many of these effects. Based on the simulator’s implicit distribution, we propose to learn a continuous analytic surrogate for the projector by using a neural density estimator. This avoids the discretization bottleneck associated with direct Monte-Carlo estimation of the PET system matrix, which leads to very high simulation cost. We compare our method with reconstructions using the classical projector, in which corrective terms are factored into a geometrically derived system matrix. Our experiments were carried out in the 2D setting, which enables smaller-scale testing.
Simulation Based Inference for PET iterative reconstruction
[ "Bergere, Bastien", "Dautremer, Thomas", "Comtat, Claude" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
125
null
https://papers.miccai.org/miccai-2024/paper/1441_paper.pdf
@InProceedings{ Zha_XASim2Real_MICCAI2024, author = { Zhang, Baochang and Zhang, Zichen and Liu, Shuting and Faghihroohi, Shahrooz and Schunkert, Heribert and Navab, Nassir }, title = { { XA-Sim2Real: Adaptive Representation Learning for Vessel Segmentation in X-ray Angiography } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Accurate vessel segmentation from X-ray Angiography (XA) is essential for various medical applications, including diagnosis, treatment planning, and image-guided interventions. However, learning-based methods face challenges such as inaccurate or insufficient manual annotations, anatomical variability, and data heterogeneity across different medical institutions. In this paper, we propose XA-Sim2Real, a novel adaptive framework for vessel segmentation in XA images. Our approach leverages Digitally Reconstructed Vascular Radiographs (DRVRs) and a two-stage adaptation process to achieve promising segmentation performance on XA images without the need for manual annotations. The first stage involves an XA simulation module for generating realistic simulated XA images from patients’ CT angiography data, providing more accurate vascular shapes and backgrounds than existing curvilinear-structure simulation methods. In the second stage, a novel adaptive representation alignment module addresses data heterogeneity by performing intra-domain adaptation for the complex and diverse nature of XA data in different settings. This module utilizes self-supervised and contrastive learning mechanisms to learn adaptive representations for unlabeled XA images. We extensively evaluate our method on both public and in-house datasets, demonstrating superior performance compared to state-of-the-art self-supervised methods and competitive performance compared to the supervised method.
XA-Sim2Real: Adaptive Representation Learning for Vessel Segmentation in X-ray Angiography
[ "Zhang, Baochang", "Zhang, Zichen", "Liu, Shuting", "Faghihroohi, Shahrooz", "Schunkert, Heribert", "Navab, Nassir" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
126
null
https://papers.miccai.org/miccai-2024/paper/2279_paper.pdf
@InProceedings{ Zeh_Rethinking_MICCAI2024, author = { Zehra, Talat and Marino, Joseph and Wang, Wendy and Frantsuzov, Grigoriy and Nadeem, Saad }, title = { { Rethinking Histology Slide Digitization Workflows for Low-Resource Settings } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Histology slide digitization is becoming essential for telepathology (remote consultation), knowledge sharing (education), and using state-of-the-art artificial intelligence algorithms (augmented/automated end-to-end clinical workflows). However, the cumulative costs of digital multi-slide high-speed brightfield scanners, cloud/on-premises storage, and personnel (IT and technicians) make the current slide digitization workflows out-of-reach for limited-resource settings, further widening the health equity gap; even single-slide manual scanning commercial solutions are costly due to hardware requirements (high-resolution cameras, high-spec PC/workstation, and support for only high-end microscopes). In this work, we present a new cloud slide digitization workflow for creating scanner-quality whole-slide images (WSIs) from uploaded low-quality videos, acquired from inexpensive microscopes with built-in cameras. Specifically, we present a pipeline to create stitched WSIs while automatically deblurring out-of-focus regions, upsampling input 10X images to 40X resolution, and reducing brightness/contrast and light-source illumination variations. We demonstrate the WSI creation efficacy of our workflow on the World Health Organization-declared neglected tropical disease Cutaneous Leishmaniasis (prevalent only in the poorest regions of the world and diagnosed only by sub-specialist dermatopathologists, who are rare in poor countries), as well as other common pathologies on core biopsies of breast, liver, duodenum, stomach and lymph node. Upon acceptance, we will release our code, datasets, pretrained models, and cloud platform for uploading microscope videos and downloading/viewing WSIs with shareable links (no sign-in required) for telepathology and knowledge sharing.
Rethinking Histology Slide Digitization Workflows for Low-Resource Settings
[ "Zehra, Talat", "Marino, Joseph", "Wang, Wendy", "Frantsuzov, Grigoriy", "Nadeem, Saad" ]
Conference
2405.08169
[ "https://github.com/nadeemlab/DeepLIIF" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
127
null
https://papers.miccai.org/miccai-2024/paper/0721_paper.pdf
@InProceedings{ Wu_Towards_MICCAI2024, author = { Wu, Hong and Fu, Juan and Ye, Hongsheng and Zhong, Yuming and Zou, Xuebin and Zhou, Jianhua and Wang, Yi }, title = { { Towards Multi-modality Fusion and Prototype-based Feature Refinement for Clinically Significant Prostate Cancer Classification in Transrectal Ultrasound } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Prostate cancer is a highly prevalent cancer and ranks as the second leading cause of cancer-related deaths in men globally. Recently, the utilization of multi-modality transrectal ultrasound (TRUS) has gained significant traction as a valuable technique for guiding prostate biopsies. In this study, we present a novel learning framework for clinically significant prostate cancer (csPCa) classification using multi-modality TRUS. The proposed framework employs two separate 3D ResNet-50 networks to extract distinctive features from B-mode and shear wave elastography (SWE). Additionally, an attention module is incorporated to effectively refine B-mode features and aggregate the extracted features from both modalities. Furthermore, we utilize a few-shot segmentation task to enhance the capacity of the classification encoder. Due to the limited availability of csPCa masks, a prototype correction module is employed to extract representative prototypes of csPCa. The performance of the framework is assessed on a large-scale dataset consisting of 512 TRUS videos with biopsy-proven prostate cancer. The results demonstrate a strong capability in accurately identifying csPCa, achieving an area under the curve (AUC) of 0.86. Moreover, the framework generates visual class activation mapping (CAM), which can serve as valuable assistance for localizing csPCa. These CAM images may offer valuable guidance during TRUS-guided targeted biopsies, enhancing the efficacy of the biopsy procedure. The code is available at https://github.com/2313595986/SmileCode.
Towards Multi-modality Fusion and Prototype-based Feature Refinement for Clinically Significant Prostate Cancer Classification in Transrectal Ultrasound
[ "Wu, Hong", "Fu, Juan", "Ye, Hongsheng", "Zhong, Yuming", "Zou, Xuebin", "Zhou, Jianhua", "Wang, Yi" ]
Conference
2406.14069
[ "https://github.com/2313595986/SmileCode" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
128
null
https://papers.miccai.org/miccai-2024/paper/0262_paper.pdf
@InProceedings{ Fan_AttentionEnhanced_MICCAI2024, author = { Fang, Yuqi and Wang, Wei and Wang, Qianqian and Li, Hong-Jun and Liu, Mingxia }, title = { { Attention-Enhanced Fusion of Structural and Functional MRI for Analyzing HIV-Associated Asymptomatic Neurocognitive Impairment } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Asymptomatic neurocognitive impairment (ANI) is a predominant form of cognitive impairment among individuals infected with human immunodeficiency virus (HIV). The current diagnostic criteria for ANI primarily rely on subjective clinical assessments, possibly leading to different interpretations among clinicians. Some recent studies leverage structural or functional MRI containing objective biomarkers for ANI analysis, offering clinicians companion diagnostic tools. However, they mainly utilize a single imaging modality, neglecting complementary information provided by structural and functional MRI. To this end, we propose an attention-enhanced structural and functional MRI fusion (ASFF) framework for HIV-associated ANI analysis. Specifically, the ASFF first extracts data-driven and human-engineered features from structural MRI, and also captures functional MRI features via a graph isomorphism network and Transformer. A mutual cross-attention fusion module is then designed to model the underlying relationship between structural and functional MRI. Additionally, a semantic inter-modality constraint is introduced to encourage consistency of multimodal features, facilitating effective feature fusion. Experimental results on 137 subjects from an HIV-associated ANI dataset with T1-weighted MRI and resting-state functional MRI show the effectiveness of our ASFF in ANI identification. Furthermore, our method can identify both modality-shared and modality-specific brain regions, which may advance our understanding of the structural and functional pathology underlying ANI.
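As an illustration of the mutual cross-attention idea described above, the sketch below fuses structural and functional feature tokens with PyTorch's built-in multi-head attention; the layer sizes, mean pooling, and final projection are illustrative assumptions rather than the authors' exact ASFF design.

```python
import torch
import torch.nn as nn

class MutualCrossAttentionFusion(nn.Module):
    """Sketch of a mutual cross-attention fusion block between structural (sMRI)
    and functional (fMRI) feature tokens. Dimensions are illustrative assumptions."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.s2f = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.f2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat_s, feat_f):
        # each modality queries the other, so information flows both ways
        s_att, _ = self.s2f(query=feat_s, key=feat_f, value=feat_f)
        f_att, _ = self.f2s(query=feat_f, key=feat_s, value=feat_s)
        fused = torch.cat([s_att.mean(dim=1), f_att.mean(dim=1)], dim=-1)
        return self.proj(fused)              # joint representation for classification
```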
Attention-Enhanced Fusion of Structural and Functional MRI for Analyzing HIV-Associated Asymptomatic Neurocognitive Impairment
[ "Fang, Yuqi", "Wang, Wei", "Wang, Qianqian", "Li, Hong-Jun", "Liu, Mingxia" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
129
null
https://papers.miccai.org/miccai-2024/paper/0901_paper.pdf
@InProceedings{ Li_Iterative_MICCAI2024, author = { Li, Shuhan and Lin, Yi and Chen, Hao and Cheng, Kwang-Ting }, title = { { Iterative Online Image Synthesis via Diffusion Model for Imbalanced Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Accurate and robust classification of diseases is important for proper diagnosis and treatment. However, medical datasets often face challenges related to limited sample sizes and inherent imbalanced distributions, due to difficulties in data collection and variations in disease prevalence across different types. In this paper, we introduce an Iterative Online Image Synthesis (IOIS) framework to address the class imbalance problem in medical image classification. Our framework incorporates two key modules, namely Online Image Synthesis (OIS) and Accuracy Adaptive Sampling (AAS), which collectively target the imbalance classification issue at both the instance level and the class level. The OIS module alleviates the data insufficiency problem by generating representative samples tailored for online training of the classifier. On the other hand, the AAS module dynamically balances the synthesized samples among various classes, targeting those with low training accuracy. To evaluate the effectiveness of our proposed method in addressing imbalanced classification, we conduct experiments on the HAM10000 and APTOS datasets. The results obtained demonstrate the superiority of our approach over state-of-the-art methods as well as the effectiveness of each component. The source code is available at https://github.com/ustlsh/IOIS_imbalance.
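A plausible reading of the accuracy-adaptive idea is to allocate the per-class synthesis budget inversely to the current training accuracy, as in this small sketch; the function name and exact allocation rule are assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_synthesis_budget(per_class_accuracy, total_budget):
    """Allocate a per-class synthesis budget inversely to current training
    accuracy, so poorly learned classes receive more generated samples."""
    acc = np.asarray(per_class_accuracy, dtype=float)
    need = 1.0 - acc + 1e-6                  # low accuracy -> larger need
    weights = need / need.sum()
    return np.round(weights * total_budget).astype(int)

# e.g. adaptive_synthesis_budget([0.95, 0.60, 0.80], 300) -> roughly [23, 185, 92]
```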
Iterative Online Image Synthesis via Diffusion Model for Imbalanced Classification
[ "Li, Shuhan", "Lin, Yi", "Chen, Hao", "Cheng, Kwang-Ting" ]
Conference
2403.08407
[ "https://github.com/ustlsh/IOIS_imbalance" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
130
null
https://papers.miccai.org/miccai-2024/paper/1623_paper.pdf
@InProceedings{ Che_Modeling_MICCAI2024, author = { Chen, Aobo and Li, Yangyi and Qian, Wei and Morse, Kathryn and Miao, Chenglin and Huai, Mengdi }, title = { { Modeling and Understanding Uncertainty in Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Medical image classification is an important task in many different medical applications. The past years have witnessed the success of Deep Neural Networks (DNNs) in medical image classification. However, traditional softmax outputs produced by DNNs fail to estimate uncertainty in medical image predictions. Contrasting with conventional uncertainty estimation approaches, conformal prediction (CP) stands out as a model-agnostic and distribution-free methodology that constructs statistically rigorous uncertainty sets for model predictions. However, existing exact full conformal methods involve retraining the underlying DNN model for each test instance with each possible label, demanding substantial computational resources. Additionally, existing works fail to uncover the root causes of medical prediction uncertainty, making it difficult for doctors to interpret the estimated uncertainties associated with medical diagnoses. To address these challenges, in this paper, we first propose an efficient approximate full CP method, which involves tracking the gradient updates contributed by individual training samples during training. Subsequently, we design an interpretation method that uses these updates to identify the top-k most influential training samples that significantly impact the model’s uncertainties. Extensive experiments on real-world medical image datasets are conducted to verify the effectiveness of the proposed methods.
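For reference, the sketch below shows standard split conformal prediction for classification, the baseline idea that the paper's approximate full CP method builds on; the gradient-tracking approximation itself is not reproduced here, and the nonconformity score is the common 1 − p(true class) choice.

```python
import numpy as np

def split_conformal_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Standard split conformal prediction for classification.
    probs_cal: (n, K) calibration softmax outputs; labels_cal: (n,) true labels."""
    n = len(labels_cal)
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]   # nonconformity scores
    # finite-sample corrected quantile (assumes n is large enough that the level <= 1)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, level, method="higher")
    # prediction set: every class whose score would not exceed the threshold
    return [np.where(1.0 - p <= q)[0] for p in probs_test]
```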
Modeling and Understanding Uncertainty in Medical Image Classification
[ "Chen, Aobo", "Li, Yangyi", "Qian, Wei", "Morse, Kathryn", "Miao, Chenglin", "Huai, Mengdi" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
131
null
https://papers.miccai.org/miccai-2024/paper/4076_paper.pdf
@InProceedings{ Li_Prediction_MICCAI2024, author = { Li, Ganping and Otake, Yoshito and Soufi, Mazen and Masuda, Masachika and Uemura, Keisuke and Takao, Masaki and Sugano, Nobuhiko and Sato, Yoshinobu }, title = { { Prediction of Disease-Related Femur Shape Changes Using Geometric Encoding and Clinical Context on a Hip Disease CT Database } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
The accurate prediction of femur shape changes due to hip diseases is potentially useful for early diagnosis, treatment planning, and the assessment of disease progression. This study proposes a novel pipeline that leverages geometry encoding and context-awareness mechanisms to predict disease-related femur shape changes. Our method exploits the inherent geometric properties of femurs in CT scans to model and predict alterations in bone structure associated with various hip diseases, such as osteoarthritis (OA). We constructed a database of 367 CT scans from patients with hip OA, annotated using a previously developed bone segmentation model and an automated OA grading system. By combining geometry encoding and clinical context, our model achieves femur surface deformation prediction through implicit geometric and clinical insights, allowing for the detailed modeling of bone geometry variations due to disease progression. Our model demonstrated moderate accuracy in a cross-validation study, with a point-to-face distance (P2F) of 1.545mm on the femoral head, aligning with other advanced predictive methods. This work marks a significant step toward personalized hip disease treatment, offering a valuable tool for clinicians and researchers and aiming to enhance patient care outcomes.
Prediction of Disease-Related Femur Shape Changes Using Geometric Encoding and Clinical Context on a Hip Disease CT Database
[ "Li, Ganping", "Otake, Yoshito", "Soufi, Mazen", "Masuda, Masachika", "Uemura, Keisuke", "Takao, Masaki", "Sugano, Nobuhiko", "Sato, Yoshinobu" ]
Conference
[ "https://github.com/RIO98/FemurSurfacePrediction" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
132
null
https://papers.miccai.org/miccai-2024/paper/2652_paper.pdf
@InProceedings{ Don_UncertaintyAware_MICCAI2024, author = { Dong, Zhicheng and Yue, Xiaodong and Chen, Yufei and Zhou, Xujing and Liang, Jiye }, title = { { Uncertainty-Aware Multi-View Learning for Prostate Cancer Grading with DWI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Grading of prostate cancer plays an important role in the planning of surgery and prognosis. Multi-parametric magnetic resonance imaging (mp-MRI) of the prostate can facilitate the detection, localization, and grading of prostate cancer. In mp-MRI, Diffusion-Weighted Imaging (DWI) can distinguish a malignant neoplasm from benign prostate tissue due to a significant difference in the apparent diffusion sensitivity coefficient (b-value). DWI using a high b-value is preferred for prostate cancer grading, providing high accuracy despite a decreased signal-to-noise ratio and increased image distortion. On the other hand, a low b-value can avoid confounding pseudo-perfusion effects, but the normal prostate parenchyma then shows a very high signal intensity, making it difficult to distinguish from prostate cancer foci. To fully capitalize on the advantages and information of DWIs with different b-values, we formulate prostate cancer grading as a multi-view classification problem, treating DWIs with different b-values as distinct views. Multi-view classification aims to integrate views into a unified and comprehensive representation. However, existing multi-view methods cannot quantify the uncertainty of views and lack an interpretable and reliable fusion rule. To tackle this problem, we propose uncertainty-aware multi-view classification with uncertainty-aware belief integration. We measure the uncertainty of DWI based on Evidential Deep Learning and propose a novel strategy of uncertainty-aware belief integration to fuse multiple DWIs based on uncertainty measurements. Results demonstrate that our method outperforms current multi-view learning methods, showcasing its superior performance.
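One common Evidential Deep Learning formulation (Sensoy et al., 2018) derives per-view belief masses and an uncertainty mass from a Dirichlet parameterization, as sketched below; the paper's uncertainty-aware belief integration rule for fusing the views is not shown.

```python
import torch

def edl_uncertainty(logits):
    """Dirichlet-based uncertainty: evidence e = softplus(logits), alpha = e + 1,
    belief b_k = e_k / S, uncertainty u = K / S with S = sum(alpha)."""
    evidence = torch.nn.functional.softplus(logits)
    alpha = evidence + 1.0
    strength = alpha.sum(dim=-1, keepdim=True)     # Dirichlet strength S
    belief = evidence / strength                   # per-class belief masses
    u = logits.shape[-1] / strength                # total uncertainty mass in (0, 1]
    return belief, u
```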
Uncertainty-Aware Multi-View Learning for Prostate Cancer Grading with DWI
[ "Dong, Zhicheng", "Yue, Xiaodong", "Chen, Yufei", "Zhou, Xujing", "Liang, Jiye" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
133
null
https://papers.miccai.org/miccai-2024/paper/0480_paper.pdf
@InProceedings{ Djo_This_MICCAI2024, author = { Djoumessi, Kerol and Bah, Bubacarr and Kühlewein, Laura and Berens, Philipp and Koch, Lisa }, title = { { This actually looks like that: Proto-BagNets for local and global interpretability-by-design } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Interpretability is a key requirement for the use of machine learning models in high-stakes applications, including medical diagnosis. Explaining black-box models mostly relies on post-hoc methods that do not faithfully reflect the model’s behavior. As a remedy, prototype-based networks have been proposed, but their interpretability is limited as they have been shown to provide coarse, unreliable, and imprecise explanations. In this work, we introduce Proto-BagNets, an interpretable-by-design prototype-based model that combines the advantages of bag-of-local feature models and prototype learning to provide meaningful, coherent, and relevant prototypical parts needed for accurate and interpretable image classification tasks. We evaluated the Proto-BagNet for drusen detection on publicly available retinal OCT data. The Proto-BagNet performed comparably to the state-of-the-art interpretable and non-interpretable models while providing faithful, accurate, and clinically meaningful local and global explanations.
This actually looks like that: Proto-BagNets for local and global interpretability-by-design
[ "Djoumessi, Kerol", "Bah, Bubacarr", "Kühlewein, Laura", "Berens, Philipp", "Koch, Lisa" ]
Conference
2406.15168
[ "https://github.com/kdjoumessi/Proto-BagNets" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
134
null
https://papers.miccai.org/miccai-2024/paper/1263_paper.pdf
@InProceedings{ Han_InterIntra_MICCAI2024, author = { Han, Xiangmin and Xue, Rundong and Du, Shaoyi and Gao, Yue }, title = { { Inter-Intra High-Order Brain Network for ASD Diagnosis via Functional MRIs } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Currently in the field of computer-aided diagnosis, graph or hypergraph-based methods are widely used in the diagnosis of neurological diseases. However, existing graph-based work primarily focuses on pairwise correlations, neglecting high-order correlations. Additionally, existing hypergraph methods can only explore the commonality of high-order representations at a single scale, resulting in the lack of a framework that can integrate multi-scale high-order correlations. To address the above issues, we propose an Inter-Intra High-order Brain Network (I2HBN) framework for ASD-assisted diagnosis, which is divided into two parts: intra-hypergraph computation and inter-hypergraph computation. Specifically, the intra-hypergraph computation employs the hypergraph to represent high-order correlations among different brain regions based on fMRI signal, generating intra-embeddings and intra-results. Subsequently, inter-hypergraph computation utilizes these intra-embeddings as features of inter-vertices to model inter-hypergraph that captures the inter-correlations among individuals at the population level. Finally, the intra-results and the inter-results are weighted to perform brain disease diagnosis. We demonstrate the potential of this method on two ABIDE datasets (NYU and UCLA), the results show that the proposed method for ASD diagnosis has superior performance, compared with existing state-of-the-art methods.
Inter-Intra High-Order Brain Network for ASD Diagnosis via Functional MRIs
[ "Han, Xiangmin", "Xue, Rundong", "Du, Shaoyi", "Gao, Yue" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
135
null
https://papers.miccai.org/miccai-2024/paper/2444_paper.pdf
@InProceedings{ Yim_DermaVQA_MICCAI2024, author = { Yim, Wen-wai and Fu, Yujuan and Sun, Zhaoyi and Ben Abacha, Asma and Yetisgen, Meliha and Xia, Fei }, title = { { DermaVQA: A Multilingual Visual Question Answering Dataset for Dermatology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Remote medical care has become commonplace with the establishment of patient portals, the maturation of web technologies, and the proliferation of personal devices. However, though on-demand care provides convenience and expands patient access, this same phenomenon may lead to increased workload for healthcare providers. Drafting candidate responses may help speed up physician workflows for answering electronic messages. One specialty that may benefit from the latest multi-modal vision-language foundational models is dermatology. However, there is no existing dataset that incorporates dermatological health queries along with user-generated images. In this work, we contribute a new dataset, DermaVQA (https://osf.io/72rp3/), for the task of dermatology question answering, and we benchmark the performance of state-of-the-art multi-modal models on multilingual response generation using relevant multi-reference metrics. The dataset and corresponding code are available on our project’s GitHub repository (https://github.com/velvinnn/DermaVQA).
DermaVQA: A Multilingual Visual Question Answering Dataset for Dermatology
[ "Yim, Wen-wai", "Fu, Yujuan", "Sun, Zhaoyi", "Ben Abacha, Asma", "Yetisgen, Meliha", "Xia, Fei" ]
Conference
[ "https://github.com/velvinnn/DermaVQA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
136
null
https://papers.miccai.org/miccai-2024/paper/2521_paper.pdf
@InProceedings{ Hua_DESSAM_MICCAI2024, author = { Huang, Lina and Liang, Yixiong and Liu, Jianfeng }, title = { { DES-SAM: Distillation-Enhanced Semantic SAM for Cervical Nuclear Segmentation with Box Annotation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Nuclei segmentation in cervical cell images is a crucial technique for the automatic diagnosis of cervical cell pathology. The current state-of-the-art (SOTA) nuclei segmentation methods often require significant time and resources to provide pixel-level annotations for training. To reduce the labor-intensive annotation costs, we propose DES-SAM, a box-supervised cervical nucleus segmentation network with strong generalization ability based on self-distillation prompting. We utilize Segment Anything Model (SAM) to generate high-quality pseudo-labels by integrating a lightweight detector. The main challenges lie in the poor generalization ability brought by small-scale training datasets and the large-scale training parameters of traditional knowledge distillation frameworks. To address these challenges, we propose leveraging the strong feature extraction ability of SAM and a self-distillation prompting strategy to maximize the performance of the downstream nuclear semantic segmentation task without compromising SAM’s generalization. Additionally, we propose an Edge-aware Enhanced Loss to improve the segmentation capability of DES-SAM. Various comparative and generalization experiments on public cervical cell nuclei datasets demonstrate the effectiveness of the proposed method.
DES-SAM: Distillation-Enhanced Semantic SAM for Cervical Nuclear Segmentation with Box Annotation
[ "Huang, Lina", "Liang, Yixiong", "Liu, Jianfeng" ]
Conference
[ "https://github.com/CVIU-CSU/DES-SAM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
137
null
https://papers.miccai.org/miccai-2024/paper/0040_paper.pdf
@InProceedings{ Xie_MHpFLGB_MICCAI2024, author = { Xie, Luyuan and Lin, Manqing and Xu, ChenMing and Luan, Tianyu and Zeng, Zhipeng and Qian, Wenjun and Li, Cong and Fang, Yuejian and Shen, Qingni and Wu, Zhonghai }, title = { { MH-pFLGB: Model Heterogeneous personalized Federated Learning via Global Bypass for Medical Image Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
In the evolving application of medical artificial intelligence, federated learning stands out for its capacity to protect the privacy of training data, enabling collaborative model development without sharing local data from healthcare entities. However, the heterogeneity of data and systems across institutions presents significant challenges, undermining the efficiency of federated learning and the exchange of information between clients. To address these issues, we introduce a novel approach, MH-pFLGB, which employs a global bypass strategy to mitigate the reliance on public datasets and navigate the complexities of non-IID data distributions. Our method enhances traditional federated learning by integrating a global bypass model, which not only shares information among clients but also serves as part of the network to enhance the performance on each client. Additionally, MH-pFLGB provides a feature fusion module to better combine the local and global features. We validate MH-pFLGB’s effectiveness and adaptability through extensive testing on different medical tasks, demonstrating superior performance compared to existing state-of-the-art methods.
MH-pFLGB: Model Heterogeneous personalized Federated Learning via Global Bypass for Medical Image Analysis
[ "Xie, Luyuan", "Lin, Manqing", "Xu, ChenMing", "Luan, Tianyu", "Zeng, Zhipeng", "Qian, Wenjun", "Li, Cong", "Fang, Yuejian", "Shen, Qingni", "Wu, Zhonghai" ]
Conference
2407.00474
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
138
null
https://papers.miccai.org/miccai-2024/paper/0286_paper.pdf
@InProceedings{ Wan_LKMUNet_MICCAI2024, author = { Wang, Jinhong and Chen, Jintai and Chen, Danny Z. and Wu, Jian }, title = { { LKM-UNet: Large Kernel Vision Mamba UNet for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
In clinical practice, medical image segmentation provides useful information on the contours and dimensions of target organs or tissues, facilitating improved diagnosis, analysis, and treatment. In the past few years, convolutional neural networks (CNNs) and Transformers have dominated this area, but they still suffer from either limited receptive fields or costly long-range modeling. Mamba, a State Space Sequence Model (SSM), recently emerged as a promising paradigm for long-range dependency modeling with linear complexity. In this paper, we introduce a Large Kernel vision Mamba U-shape Network, or LKM-UNet, for medical image segmentation. A distinguishing feature of our LKM-UNet is its utilization of large Mamba kernels, excelling in locally spatial modeling compared to small kernel-based CNNs and Transformers, while maintaining superior efficiency in global modeling compared to self-attention with quadratic complexity. Additionally, we design a novel hierarchical and bidirectional Mamba block to further enhance Mamba’s global and neighborhood spatial modeling capability for vision inputs. Comprehensive experiments demonstrate the feasibility and the effectiveness of using large-size Mamba kernels to achieve large receptive fields. Codes are available at https://github.com/wjh892521292/LKM-UNet.
LKM-UNet: Large Kernel Vision Mamba UNet for Medical Image Segmentation
[ "Wang, Jinhong", "Chen, Jintai", "Chen, Danny Z.", "Wu, Jian" ]
Conference
2403.07332
[ "https://github.com/wjh892521292/LKM-UNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
139
null
https://papers.miccai.org/miccai-2024/paper/1552_paper.pdf
@InProceedings{ Gao_Improving_MICCAI2024, author = { Gao, Yuan and Zhou, Hong-Yu and Wang, Xin and Zhang, Tianyu and Han, Luyi and Lu, Chunyao and Liang, Xinglong and Teuwen, Jonas and Beets-Tan, Regina and Tan, Tao and Mann, Ritse }, title = { { Improving Neoadjuvant Therapy Response Prediction by Integrating Longitudinal Mammogram Generation with Cross-Modal Radiological Reports: A Vision-Language Alignment-guided Model } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Longitudinal imaging examinations are vital for predicting pathological complete response (pCR) to neoadjuvant therapy (NAT) by assessing changes in tumor size and density. However, the imaging modalities available at different time points during NAT often differ across patients, hindering comprehensive treatment response estimation when utilizing multi-modal information. This may result in underestimation or overestimation of disease status. Also, existing longitudinal image generation models mainly rely on raw-pixel inputs while rarely exploring integration with practical longitudinal radiology reports, which can convey valuable temporal content on disease remission or progression. Further, extracting text-aligned dynamic information from longitudinal images poses a challenge. To address these issues, we propose a longitudinal image-report alignment-guided model for longitudinal mammogram generation using cross-modality radiology reports. We utilize generated mammograms to compensate for absent mammograms in our pCR prediction pipeline. Our experimental results achieve performance comparable to the theoretical upper bound, thereby providing a potential 3-month window for therapeutic replacement. The code will be accessible to the public.
Improving Neoadjuvant Therapy Response Prediction by Integrating Longitudinal Mammogram Generation with Cross-Modal Radiological Reports: A Vision-Language Alignment-guided Model
[ "Gao, Yuan", "Zhou, Hong-Yu", "Wang, Xin", "Zhang, Tianyu", "Han, Luyi", "Lu, Chunyao", "Liang, Xinglong", "Teuwen, Jonas", "Beets-Tan, Regina", "Tan, Tao", "Mann, Ritse" ]
Conference
[ "https://github.com/yawwG/LIMRA/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
140
null
https://papers.miccai.org/miccai-2024/paper/3797_paper.pdf
@InProceedings{ Kas_IHRRBDINO_MICCAI2024, author = { Kasem, Mahmoud SalahEldin and Abdallah, Abdelrahman and Abdelhalim, Ibrahim and Alghamdi, Norah Saleh and Contractor, Sohail and El-Baz, Ayman }, title = { { IHRRB-DINO: Identifying High-Risk Regions of Breast Masses in Mammogram Images Using Data-Driven Instance Noise (DINO) } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
In this paper, we introduce IHRRB-DINO, an advanced model designed to assist radiologists in effectively detecting breast masses in mammogram images. This tool is specifically engineered to highlight high-risk regions, enhancing the capability of radiologists in identifying breast masses for more accurate and efficient assessments. Our approach incorporates a novel technique that employs Data-Driven Instance Noise (DINO) for Object Localization, which significantly improves breast mass localization. This method is augmented by data augmentation using instance-level noise during the training phase, focusing on refining the model’s proficiency in precisely localizing breast masses in mammographic images. Rigorous testing and validation conducted on the BI-RADS dataset using our model, especially with the Swin-L backbone, have demonstrated promising results. We achieved an Average Precision (AP) of 46.96, indicating a substantial improvement in the accuracy and consistency of breast cancer (BC) detection and localization. These results underscore the potential of IHRRB-DINO in contributing to the advancements in computer-aided diagnosis systems for breast cancer, marking a significant stride in the field of medical imaging technology.
IHRRB-DINO: Identifying High-Risk Regions of Breast Masses in Mammogram Images Using Data-Driven Instance Noise (DINO)
[ "Kasem, Mahmoud SalahEldin", "Abdallah, Abdelrahman", "Abdelhalim, Ibrahim", "Alghamdi, Norah Saleh", "Contractor, Sohail", "El-Baz, Ayman" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
141
null
https://papers.miccai.org/miccai-2024/paper/3513_paper.pdf
@InProceedings{ Tiv_Hallucination_MICCAI2024, author = { Tivnan, Matthew and Yoon, Siyeop and Chen, Zhennong and Li, Xiang and Wu, Dufan and Li, Quanzheng }, title = { { Hallucination Index: An Image Quality Metric for Generative Reconstruction Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Generative image reconstruction algorithms such as measurement-conditioned diffusion models are increasingly popular in the field of medical imaging. These powerful models can transform low signal-to-noise ratio (SNR) inputs into outputs with the appearance of high SNR. However, the outputs can have a new type of error called hallucinations. In medical imaging, these hallucinations may not be obvious to a radiologist but could cause diagnostic errors. Generally, hallucination refers to error in the estimation of object structure caused by a machine learning model, but there is no widely accepted method to evaluate hallucination magnitude. In this work, we propose a new image quality metric called the hallucination index. Our approach is to compute the Hellinger distance from the distribution of reconstructed images to a zero-hallucination reference distribution. To evaluate our approach, we conducted a numerical experiment with electron microscopy images, simulated noisy measurements, and applied diffusion-based reconstructions. We sampled the measurements and the generative reconstructions repeatedly to compute the sample mean and covariance. For the zero-hallucination reference, we used the forward diffusion process applied to the ground truth. Our results show that higher measurement SNR leads to a lower hallucination index for the same apparent image quality. We also evaluated the impact of early stopping in the reverse diffusion process and found that more modest denoising strengths can reduce hallucination. We believe this metric could be useful for the evaluation of generative image reconstructions or as a warning label to inform radiologists about the degree of hallucinations in medical images.
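Since the abstract summarizes repeated reconstructions by their sample mean and covariance, one natural way to evaluate such a distance is the closed-form Hellinger distance between two Gaussian approximations, sketched below; the Gaussian assumption is ours, made only for illustration, and the full hallucination-index pipeline is not reproduced.

```python
import numpy as np

def hellinger_gaussian(mu1, cov1, mu2, cov2):
    """Hellinger distance between two multivariate Gaussians summarised by
    sample means and covariances (e.g. reconstructions vs. a reference)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    _, logdet = np.linalg.slogdet(cov)
    # log Bhattacharyya coefficient for Gaussians
    log_bc = (0.25 * (logdet1 + logdet2) - 0.5 * logdet
              - 0.125 * diff @ np.linalg.solve(cov, diff))
    return np.sqrt(max(0.0, 1.0 - np.exp(log_bc)))
```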
Hallucination Index: An Image Quality Metric for Generative Reconstruction Models
[ "Tivnan, Matthew", "Yoon, Siyeop", "Chen, Zhennong", "Li, Xiang", "Wu, Dufan", "Li, Quanzheng" ]
Conference
2407.12780
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
142
null
https://papers.miccai.org/miccai-2024/paper/1816_paper.pdf
@InProceedings{ Zha_Spatialaware_MICCAI2024, author = { Zhang, Zerui and Sun, Zhichao and Liu, Zelong and Zhao, Zhou and Yu, Rui and Du, Bo and Xu, Yongchao }, title = { { Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Medical anomaly detection is a critical research area aimed at recognizing abnormal images to aid in diagnosis. Most existing methods adopt synthetic anomalies and image restoration on normal samples to detect anomalies. The unlabeled data, consisting of both normal and abnormal samples, is not well explored. We introduce a novel Spatial-aware Attention Generative Adversarial Network (SAGAN) for one-class semi-supervised generation of healthy images. Our core insight is the utilization of position encoding and attention to accurately focus on restoring abnormal regions and preserving normal regions. To fully utilize the unlabelled data, SAGAN relaxes the cyclic consistency requirement of existing unpaired image-to-image conversion methods, and generates high-quality healthy images corresponding to unlabeled data, guided by the reconstruction of normal images and restoration of pseudo-anomaly images. Subsequently, the discrepancy between the generated healthy image and the original image is utilized as an anomaly score. Extensive experiments on three medical datasets demonstrate that the proposed SAGAN outperforms the state-of-the-art methods. Code is available at https://github.com/zzr728/SAGAN.
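The final scoring step reduces to comparing each image with its generated healthy counterpart; a minimal sketch using an L1 residual is shown below (the exact discrepancy measure used by the authors may differ).

```python
import numpy as np

def anomaly_score(image, generated_healthy):
    """Score an image by its per-pixel discrepancy from the generated healthy
    counterpart: larger residuals suggest anomalous regions."""
    residual = np.abs(image.astype(float) - generated_healthy.astype(float))
    return residual.mean(), residual          # global score and anomaly map
```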
Spatial-aware Attention Generative Adversarial Network for Semi-supervised Anomaly Detection in Medical Image
[ "Zhang, Zerui", "Sun, Zhichao", "Liu, Zelong", "Zhao, Zhou", "Yu, Rui", "Du, Bo", "Xu, Yongchao" ]
Conference
2405.12872
[ "https://github.com/zzr728/SAGAN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
143
null
https://papers.miccai.org/miccai-2024/paper/0562_paper.pdf
@InProceedings{ Che_LUCIDA_MICCAI2024, author = { Chen, Yixin and Meng, Xiangxi and Wang, Yan and Zeng, Shuang and Liu, Xi and Xie, Zhaoheng }, title = { { LUCIDA: Low-dose Universal-tissue CT Image Domain Adaptation For Medical Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Accurate segmentation in low-dose CT scans remains a challenge in medical imaging, primarily due to the high annotation costs. This study introduces LUCIDA, a Low-dose Universal-tissue CT Image Domain Adaptation model operating under an unsupervised protocol without requiring LDCT annotations. It uniquely incorporates the Weighted Segmentation Reconstruction (WSR) module to establish a linear relationship between prediction maps and reconstructed images. By enhancing the quality of reconstructed images, LUCIDA improves the accuracy of prediction maps, facilitating a new domain adaptation framework. Extensive evaluation experiments demonstrate LUCIDA’s effectiveness in accurately recognizing a wide range of tissues, significantly outperforming traditional methods. We also introduce the LUCIDA-Ensemble model, demonstrating comparable performance to supervised learning models in organ segmentation and recognizing 112 tissue types.
LUCIDA: Low-dose Universal-tissue CT Image Domain Adaptation For Medical Segmentation
[ "Chen, Yixin", "Meng, Xiangxi", "Wang, Yan", "Zeng, Shuang", "Liu, Xi", "Xie, Zhaoheng" ]
Conference
[ "https://github.com/YixinChen-AI/LUCIDA" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
144
null
https://papers.miccai.org/miccai-2024/paper/0293_paper.pdf
@InProceedings{ Li_PRISM_MICCAI2024, author = { Li, Hao and Liu, Han and Hu, Dewei and Wang, Jiacheng and Oguz, Ipek }, title = { { PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
In this paper, we present PRISM, a Promptable and Robust Interactive Segmentation Model, aiming for precise segmentation of 3D medical images. PRISM accepts various visual inputs, including points, boxes, and scribbles as sparse prompts, as well as masks as dense prompts. Specifically, PRISM is designed with four principles to achieve robustness: (1) Iterative learning. The model produces segmentations by using visual prompts from previous iterations to achieve progressive improvement. (2) Confidence learning. PRISM employs multiple segmentation heads per input image, each generating a candidate mask and a confidence score to optimize predictions. (3) Corrective learning. Following each segmentation iteration, PRISM employs a shallow corrective refinement network to reassign mislabeled voxels. (4) Hybrid design. PRISM integrates hybrid encoders to better capture both the local and global information. Comprehensive validation of PRISM is conducted using four public datasets for tumor segmentation in the colon, pancreas, liver, and kidney, highlighting challenges caused by anatomical variations and ambiguous boundaries in accurate tumor identification. Compared to state-of-the-art methods, both with and without prompt engineering, PRISM significantly improves performance, achieving results that are close to human levels.
PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts
[ "Li, Hao", "Liu, Han", "Hu, Dewei", "Wang, Jiacheng", "Oguz, Ipek" ]
Conference
2404.15028
[ "https://github.com/MedICL-VU/PRISM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
145
null
https://papers.miccai.org/miccai-2024/paper/0014_paper.pdf
@InProceedings{ Li_TPDRSeg_MICCAI2024, author = { Li, Wenxue and Xiong, Xinyu and Xia, Peng and Ju, Lie and Ge, Zongyuan }, title = { { TP-DRSeg: Improving Diabetic Retinopathy Lesion Segmentation with Explicit Text-Prompts Assisted SAM } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Recent advances in large foundation models, such as the Segment Anything Model (SAM), have demonstrated considerable promise across various tasks. Despite their progress, these models still encounter challenges in specialized medical image analysis, especially in recognizing subtle inter-class differences in Diabetic Retinopathy (DR) lesion segmentation. In this paper, we propose a novel framework that customizes SAM for text-prompted DR lesion segmentation, termed TP-DRSeg. Our core idea involves exploiting language cues to inject medical prior knowledge into the vision-only segmentation network, thereby combining the advantages of different foundation models and enhancing the credibility of segmentation. Specifically, to unleash the potential of vision-language models in the recognition of medical concepts, we propose an explicit prior encoder that transfers implicit medical concepts into explicit prior knowledge, providing explainable clues to excavate low-level features associated with lesions. Furthermore, we design a prior-aligned injector to inject explicit priors into the segmentation process, which can facilitate knowledge sharing across multi-modality features and allow our framework to be trained in a parameter-efficient fashion. Experimental results demonstrate the superiority of our framework over other traditional models and foundation model variants. The code implementations are accessible at https://github.com/wxliii/TP-DRSeg.
TP-DRSeg: Improving Diabetic Retinopathy Lesion Segmentation with Explicit Text-Prompts Assisted SAM
[ "Li, Wenxue", "Xiong, Xinyu", "Xia, Peng", "Ju, Lie", "Ge, Zongyuan" ]
Conference
2406.15764
[ "https://github.com/wxliii/TP-DRSeg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
146
null
https://papers.miccai.org/miccai-2024/paper/2633_paper.pdf
@InProceedings{ She_GCAN_MICCAI2024, author = { Shen, Xiongri and Song, Zhenxi and Zhang, Zhiguo }, title = { { GCAN: Generative Counterfactual Attention-guided Network for Explainable Cognitive Decline Diagnostics based on fMRI Functional Connectivity } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Diagnosis of mild cognitive impairment (MCI) and subjective cognitive decline (SCD) from fMRI functional connectivity (FC) has gained popularity, but most FC-based diagnostic models are black boxes lacking causal reasoning, so they contribute little to the knowledge about FC-based neural biomarkers of cognitive decline. To enhance the explainability of diagnostic models, we propose a generative counterfactual attention-guided network (GCAN), which introduces counterfactual reasoning to recognize cognitive decline-related brain regions and then uses these regions as attention maps to boost the prediction performance of diagnostic models. Furthermore, to tackle the difficulty in the generation of highly structured and brain-atlas-constrained FC, which is essential in counterfactual reasoning, an Atlas-Aware Bidirectional Transformer (AABT) method is developed. AABT employs a bidirectional strategy to encode and decode the tokens from each network of the brain atlas, thereby enhancing the generation of high-quality target-label FC. In the experiments on in-house and public datasets, the generated attention maps closely resemble FC changes in the literature on neurodegenerative diseases. The diagnostic performance is also superior to baseline and SOTA models. The code is available at https://anonymous.4open.science/status/GCAN-665C.
GCAN: Generative Counterfactual Attention-guided Network for Explainable Cognitive Decline Diagnostics based on fMRI Functional Connectivity
[ "Shen, Xiongri", "Song, Zhenxi", "Zhang, Zhiguo" ]
Conference
2403.01758
[ "https://github.com/SXR3015/GCAN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
147
null
https://papers.miccai.org/miccai-2024/paper/1516_paper.pdf
@InProceedings{ Zin_Towards_MICCAI2024, author = { Zinsou, Kpêtchéhoué Merveille Santi and Diop, Cheikh Talibouya and Diop, Idy and Tsirikoglou, Apostolia and Siddig, Emmanuel Edwar and Sow, Doudou and Ndiaye, Maodo }, title = { { Towards Rapid Mycetoma Species Diagnosis: A Deep Learning Approach for Stain-Invariant Classification on H&E Images from Senegal } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
Mycetoma, categorized as a Neglected Tropical Disease (NTD), poses significant health, social, and economic challenges due to its causative agents, which include both bacterial and fungal pathogens. Accurate identification of the mycetoma type and species is crucial for initiating appropriate medical interventions, as treatment strategies vary widely. Although several diagnostic tools have been developed over time, histopathology remains one of the most widely used methods due to its speed, cost-effectiveness, and simplicity. However, it relies on expert pathologists to perform the diagnostic procedure and accurately interpret the results, which is particularly challenging in resource-limited settings. Additionally, pathologists face the challenge of stain variability during histopathological analysis of slides. In response to this need, this study pioneers an automated approach to mycetoma species identification using histopathological images from black-skin patients in Senegal. Integrating various stain normalization techniques such as Macenko, Vahadane, and Reinhard to mitigate color variations, we combine these methods with the MONAI framework alongside the DenseNet121 architecture. Our system achieves average accuracies of 99.34%, 94.06%, and 94.45% on the Macenko-, Reinhard-, and Vahadane-normalized datasets, respectively. The system is trained using an original dataset comprising histopathological images stained with Hematoxylin and Eosin (H&E), meticulously collected, annotated, and labeled from various hospitals across Senegal. This study represents a significant advancement in the field of mycetoma diagnosis, offering a reliable and efficient solution that can facilitate timely and accurate species identification, particularly in endemic regions like Senegal.
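Of the stain normalization techniques mentioned, Reinhard-style normalization is the simplest to sketch: match per-channel mean and standard deviation in a perceptual color space. The snippet below uses CIELAB via scikit-image as a common simplification of the original formulation; it is an illustrative sketch, not the study's exact preprocessing code.

```python
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, ref_rgb):
    """Reinhard-style colour normalisation: match per-channel mean/std of the
    source image to a reference image in CIELAB. Inputs are float RGB in [0, 1]."""
    src, ref = color.rgb2lab(src_rgb), color.rgb2lab(ref_rgb)
    out = np.empty_like(src)
    for c in range(3):
        mu_s, sd_s = src[..., c].mean(), src[..., c].std() + 1e-8
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - mu_s) / sd_s * sd_r + mu_r
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```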
Towards Rapid Mycetoma Species Diagnosis: A Deep Learning Approach for Stain-Invariant Classification on H&E Images from Senegal
[ "Zinsou, Kpêtchéhoué Merveille Santi", "Diop, Cheikh Talibouya", "Diop, Idy", "Tsirikoglou, Apostolia", "Siddig, Emmanuel Edwar", "Sow, Doudou", "Ndiaye, Maodo" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
148
null
https://papers.miccai.org/miccai-2024/paper/0663_paper.pdf
@InProceedings{ Xin_SegMamba_MICCAI2024, author = { Xing, Zhaohu and Ye, Tian and Yang, Yijun and Liu, Guang and Zhu, Lei }, title = { { SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, as a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data, and has excelled in the field of natural language processing with its remarkable memory efficiency and computational speed. Inspired by this, we devise SegMamba, a novel 3D medical image Segmentation Mamba model, to effectively capture long-range dependencies within whole-volume features at every scale. Our SegMamba outperforms Transformer-based methods in whole-volume feature modeling, maintaining high efficiency even at a resolution of 64×64×64, where the sequential length is approximately 260k. Moreover, we collect and annotate a novel large-scale dataset (named CRC-500) to facilitate benchmarking evaluation in 3D colorectal cancer (CRC) segmentation. Experimental results on our CRC-500 and two public benchmark datasets further demonstrate the effectiveness and universality of our method.
SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation
[ "Xing, Zhaohu", "Ye, Tian", "Yang, Yijun", "Liu, Guang", "Zhu, Lei" ]
Conference
2401.13560
[ "https://github.com/ge-xing/segmamba" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
149
null
https://papers.miccai.org/miccai-2024/paper/0253_paper.pdf
@InProceedings{ Liu_PEPSI_MICCAI2024, author = { Liu, Peirong and Puonti, Oula and Sorby-Adams, Annabel and Kimberly, W. Taylor and Iglesias, Juan E. }, title = { { PEPSI: Pathology-Enhanced Pulse-Sequence-Invariant Representations for Brain MRI } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Remarkable progress has been made by data-driven machine-learning methods in the analysis of MRI scans. However, most existing MRI analysis approaches are crafted for specific MR pulse sequences (MR contrasts) and usually require nearly isotropic acquisitions. This limits their applicability to the diverse, real-world clinical data, where scans commonly exhibit variations in appearances due to being obtained with varying sequence parameters, resolutions, and orientations – especially in the presence of pathology. In this paper, we propose PEPSI, the first pathology-enhanced, and pulse-sequence-invariant feature representation learning model for brain MRI. PEPSI is trained entirely on synthetic images with a novel pathology encoding strategy, and enables co-training across datasets with diverse pathologies and missing modalities. Despite variations in pathology appearances across different MR pulse sequences or the quality of acquired images (e.g., resolution, orientation, artifacts, etc), PEPSI produces a high-resolution image of reference contrast (MP-RAGE) that captures anatomy, along with an image specifically highlighting the pathology. Our experiments demonstrate PEPSI’s remarkable capability for image synthesis compared with the state-of-the-art, contrast-agnostic synthesis models, as it accurately reconstructs anatomical structures while differentiating between pathology and normal tissue. We further illustrate the efficiency and effectiveness of PEPSI features for downstream pathology segmentation on five public datasets covering white matter hyperintensities and stroke lesions.
PEPSI: Pathology-Enhanced Pulse-Sequence-Invariant Representations for Brain MRI
[ "Liu, Peirong", "Puonti, Oula", "Sorby-Adams, Annabel", "Kimberly, W. Taylor", "Iglesias, Juan E." ]
Conference
2403.06227
[ "https://github.com/peirong26/PEPSI" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
150
null
https://papers.miccai.org/miccai-2024/paper/2373_paper.pdf
@InProceedings{ Zha_See_MICCAI2024, author = { Zhao, Ziyuan and Fang, Fen and Yang, Xulei and Xu, Qianli and Guan, Cuntai and Zhou, S. Kevin }, title = { { See, Predict, Plan: Diffusion for Procedure Planning in Robotic Surgical Videos } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Automatic surgical video analysis is pivotal in enhancing the effectiveness and safety of robot-assisted minimally invasive surgery. This study introduces a novel procedure planning task aimed at predicting target-conditioned actions in surgical videos to achieve desired visual goals, thereby addressing the question of “What to do to achieve a desired visual goal?”. Leveraging recent advancements in deep learning, particularly diffusion models, our work proposes the Multi-Scale Phase-Condition Diffusion (MS-PCD) framework. This innovative approach incorporates multi-scale visual features into the diffusion process, conditioned by phase class, to generate goal-conditioned plans. By cascading multiple diffusion models with inputs at different scales, MS-PCD adaptively extracts fine-grained visual features, significantly enhancing procedure planning performance in unstructured robotic surgical videos. We establish a new benchmark for procedure planning in robotic surgical videos using the publicly available PSI-AVA dataset, demonstrating that our method notably outperforms existing baselines on several metrics. Our research not only presents an innovative approach to surgical video analysis but also opens new avenues for automation in surgical procedures, contributing to both patient safety and surgical training.
See, Predict, Plan: Diffusion for Procedure Planning in Robotic Surgical Videos
[ "Zhao, Ziyuan", "Fang, Fen", "Yang, Xulei", "Xu, Qianli", "Guan, Cuntai", "Zhou, S. Kevin" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
151
null
https://papers.miccai.org/miccai-2024/paper/3315_paper.pdf
@InProceedings{ Car_Characterizing_MICCAI2024, author = { Carrera-Pinzón, Andrés Felipe and Toro-Quitian, Leonard and Torres, Juan Camilo and Cerón, Alexander and Sarmiento, Wilsón and Mendez-Toro, Arnold and Cruz-Roa, Angel and Gutiérrez-Carvajal, R. E. and Órtiz-Davila, Carlos and González, Fabio and Romero, Eduardo and Iregui Guerrero, Marcela }, title = { { Characterizing the left ventricular ultrasound dynamics in the frequency domain to estimate the cardiac function } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Assessment of cardiac function typically relies on the Left Ventricular Ejection Fraction (LVEF), i.e., the difference between end-diastolic and end-systolic volumes relative to the end-diastolic volume. However, inconsistent LVEF values have been reported in many clinical situations. This study introduces a novel approach to quantify cardiac function by analyzing the frequency patterns of the segmented Left Ventricle (LV) along the entire cardiac cycle in the four-chamber view of echocardiography videos. After automatic segmentation of the left ventricle, its area is computed over a complete cycle and the resulting signal is transformed to the frequency domain. A soft clustering of the spectrum magnitude was performed with 7,835 cases from the EchoNet-Dynamic open database by applying spectral clustering with Euclidean distance and the eigengap heuristic to obtain four dense groups. Once the groups were set, the medoid of each was used as its representative, and for a set of 99 test cases from a local collection with different underlying pathologies, the magnitude distance to the medoid was replaced by the norm of the sum of two constant-magnitude vectors representing the medoid and the particular case, whose angle was estimated from the dot product between the temporal signals obtained from the inverse Fourier transform of the spectrum phase of each. Results show that the four clusters characterize different types of patterns, and while LVEF values were usually spread within clusters and mixed up the clinical conditions, the new indicator showed a narrow progression consistent with the degree of the particular pathology.
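The core signal-processing step, turning per-frame LV areas into a magnitude spectrum, can be sketched as below; `lv_spectrum` is a hypothetical helper, and the subsequent spectral clustering and phase-based indicator are not reproduced here.

```python
import numpy as np

def lv_spectrum(masks, fps):
    """Turn per-frame LV segmentation masks (T, H, W) into a frequency-domain
    descriptor: area signal over one cardiac cycle -> magnitude spectrum."""
    area = masks.reshape(masks.shape[0], -1).sum(axis=1).astype(float)  # pixels/frame
    area = area - area.mean()                         # remove the DC component
    mag = np.abs(np.fft.rfft(area))                   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(area), d=1.0 / fps)   # frequencies in Hz
    return freqs, mag
```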
Characterizing the left ventricular ultrasound dynamics in the frequency domain to estimate the cardiac function
[ "Carrera-Pinzón, Andrés Felipe", "Toro-Quitian, Leonard", "Torres, Juan Camilo", "Cerón, Alexander", "Sarmiento, Wilsón", "Mendez-Toro, Arnold", "Cruz-Roa, Angel", "Gutiérrez-Carvajal, R. E.", "Órtiz-Davila, Carlos", "González, Fabio", "Romero, Eduardo", "Iregui Guerrero, Marcela" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
152
null
https://papers.miccai.org/miccai-2024/paper/1997_paper.pdf
@InProceedings{ Gua_Labelguided_MICCAI2024, author = { Guan, Jiale and Zou, Xiaoyang and Tao, Rong and Zheng, Guoyan }, title = { { Label-guided Teacher for Surgical Phase Recognition via Knowledge Distillation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Automatic surgical phase recognition plays an essential role in developing advanced, context-aware, computer-assisted intervention systems. Knowledge distillation is an effective framework to transfer knowledge from a teacher network to a student network, which has been used to solve the challenging surgical phase recognition task. A key to a successful knowledge distillation is to learn a better teacher network. To this end, we propose a novel label-guided teacher network for knowledge distillation. Specifically, our teacher network takes both video frames and ground-truth labels as input. Instead of only using labels to supervise the final predictions, we additionally introduce two types of label guidance to learn a better teacher: 1) we propose label embedding-frame feature cross-attention transformer blocks for feature enhancement; and 2) we propose to use label information to sample positive (from same phase) and negative features (from different phases) in a supervised contrastive learning framework to learn better feature embeddings. Then, by minimizing feature similarity, the knowledge learnt by our teacher network is effectively distilled into a student network. At inference stage, the distilled student network can perform accurate surgical phase recognition taking only video frames as input. Comprehensive experiments are conducted on two laparoscopic cholecystectomy video datasets to validate the proposed method, offering an accuracy of 93.3±5.8 on the Cholec80 dataset and an accuracy of 91.6±9.1 on the M2cai16 dataset.
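A minimal sketch of the feature-level distillation step described here (the teacher sees frames plus labels, the student sees frames only, and their features are pulled together); shapes, module interfaces and loss weights are assumptions, not the released implementation.

```python
# Hedged sketch of feature distillation from a label-guided teacher to a frames-only student.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feats, teacher_feats):
    """1 - cosine similarity, averaged over frames; the teacher is not updated."""
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats.detach(), dim=-1)
    return (1.0 - (s * t).sum(dim=-1)).mean()

def training_step(student, teacher, frames, labels, ce_weight=1.0, kd_weight=1.0):
    # assumed interfaces: teacher(frames, labels) -> (feats, logits); student(frames) -> (feats, logits)
    with torch.no_grad():
        t_feats, _ = teacher(frames, labels)
    s_feats, logits = student(frames)
    # assumed shapes: logits (B, T, C), labels (B, T)
    loss = ce_weight * F.cross_entropy(logits.flatten(0, 1), labels.flatten()) \
         + kd_weight * feature_distillation_loss(s_feats, t_feats)
    return loss
```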
Label-guided Teacher for Surgical Phase Recognition via Knowledge Distillation
[ "Guan, Jiale", "Zou, Xiaoyang", "Tao, Rong", "Zheng, Guoyan" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
153
null
https://papers.miccai.org/miccai-2024/paper/0405_paper.pdf
@InProceedings{ Ouy_SOM2LM_MICCAI2024, author = { Ouyang, Jiahong and Zhao, Qingyu and Adeli, Ehsan and Zaharchuk, Greg and Pohl, Kilian M. }, title = { { SOM2LM: Self-Organized Multi-Modal Longitudinal Maps } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Neuroimage modalities acquired by longitudinal studies often provide complementary information regarding disease progression. For example, amyloid PET visualizes the build-up of amyloid plaques that appear in earlier stages of Alzheimer’s disease (AD), while structural MRIs depict brain atrophy appearing in the later stages of the disease. To accurately model multi-modal longitudinal data, we propose an interpretable self-supervised model called Self-Organized Multi-Modal Longitudinal Maps (SOM2LM). SOM2LM encodes each modality as a 2D self-organizing map (SOM) so that one dimension of each modality-specific SOM corresponds to disease abnormality. The model also regularizes across modalities to depict their temporal order of capturing abnormality. When applied to longitudinal T1w MRIs and amyloid PET of the Alzheimer’s Disease Neuroimaging Initiative (ADNI, N=741), SOM2LM generates interpretable latent spaces that characterize disease abnormality. When compared to state-of-the-art models, it achieves higher accuracy for the downstream tasks of cross-modality prediction of amyloid status from T1w-MRI and joint-modality prediction of individuals with mild cognitive impairment converting to AD using both MRI and amyloid PET. The code is available at https://github.com/ouyangjiahong/longitudinal-som-multi-modality.
SOM2LM: Self-Organized Multi-Modal Longitudinal Maps
[ "Ouyang, Jiahong", "Zhao, Qingyu", "Adeli, Ehsan", "Zaharchuk, Greg", "Pohl, Kilian M." ]
Conference
[ "https://github.com/ouyangjiahong/longitudinal-som-multi-modality" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
154
null
https://papers.miccai.org/miccai-2024/paper/1606_paper.pdf
@InProceedings{ Xie_An_MICCAI2024, author = { Xie, Shiyu and Zhang, Kai and Entezari, Alireza }, title = { { An Evaluation of State-of-the-Art Projectors in the Presence of Noise and Nonlinearity in the Beer-Lambert Law } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Efficient computation of forward and back projection is key to scalability of iterative methods for low dose CT imaging at resolutions needed in clinical applications. State-of-the-art projectors provide computationally-efficient approximations to X-ray optics calculations in the forward model that strike a balance between speed and accuracy. While the computational performance of these projectors is well studied, their accuracy is often analyzed in idealistic settings. When choosing a projector, a key question is whether differences between projectors can impact image reconstruction in realistic settings where nonlinearity of the Beer-Lambert law and measurement noise may mask those differences. We present an approach for comparing the accuracy of projectors in practical settings where the effects of the Beer-Lambert law and measurement noise are captured by a sensitivity analysis of the forward model. Our experiments provide a comparative analysis of state-of-the-art projectors based on the impact of their approximations to the forward model on the reconstruction error. Our experiments suggest that the differences between projectors, measured by reconstruction errors, persist with noise in low-dose measurements and become significant in few-view imaging configurations.
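The nonlinearity referred to here is the Beer-Lambert law, I = I0·exp(-∫µ dl), with measurement noise commonly modelled as Poisson on the photon counts. A minimal numpy sketch of such a forward measurement model is below; the matrix A stands for any projector's line-integral approximation, and I0 and all names are illustrative.

```python
# Beer-Lambert measurement model used in a sensitivity analysis:
#   expected counts y_bar = I0 * exp(-A @ x),  measured counts y ~ Poisson(y_bar),
# where A @ x are the line integrals produced by a projector. Illustrative only.
import numpy as np

def simulate_measurements(A, x, I0=1e4, rng=None):
    rng = rng or np.random.default_rng(0)
    line_integrals = A @ x                    # projector approximation of ray integrals
    expected = I0 * np.exp(-line_integrals)   # Beer-Lambert attenuation
    return rng.poisson(expected).astype(float)

def log_transform(counts, I0=1e4):
    """Post-log sinogram fed to reconstruction; after the log the noise is no longer Gaussian."""
    return -np.log(np.clip(counts, 1.0, None) / I0)
```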
An Evaluation of State-of-the-Art Projectors in the Presence of Noise and Nonlinearity in the Beer-Lambert Law
[ "Xie, Shiyu", "Zhang, Kai", "Entezari, Alireza" ]
Conference
[ "https://github.com/ShiyuXie0116/Evaluation-of-Projectors-Noise-Nonlinearity" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
155
null
https://papers.miccai.org/miccai-2024/paper/1779_paper.pdf
@InProceedings{ Wan_BrainSCK_MICCAI2024, author = { Wang, Lilong and Liu, Mianxin and Zhang, Shaoting and Wang, Xiaosong }, title = { { BrainSCK: Brain Structure and Cognition Alignment via Knowledge Injection and Reactivation for Diagnosing Brain Disorders } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Emerging evidence from advanced neuroimaging studies suggests common neurological bases across different brain disorders (BD) throughout the human lifespan. Researchers thus aim to create a general neuroimaging-based diagnosis model for population-scale screening for multiple BDs. Existing models predominantly use the transfer learning paradigm for BD tasks based on either out-of-domain models pre-trained with large-scale but less-related data and tasks or in-domain models pre-trained on healthy population brain data with auxiliary tasks such as age prediction. The former approach offers little recognition of inter-individual variations and BD-related features in the population-scale brain data, while the latter relies on a weak implicit association between the proxy and BD tasks. In this work, we propose a two-stage vision-language model adaptation strategy to incorporate novel knowledge into a well pre-trained out-of-domain model (e.g., BLIP) by aligning basic cognition and brain structural features for accurate diagnosis of multiple BDs. First, using life-span Human Connectome Project data, we textualize the demographics and psychometrics records and construct knowledge-injecting textual prompts (with important cognitive science contexts). The model is expected to learn the alignment between brain structure from images and cognitive knowledge from texts. Then, we customize knowledge-reactivating instructions and further tune the model to accommodate the cognitive symptoms in each BD diagnosis task. Experimental results show that our framework outperforms other state-of-the-art methods on three BD diagnosis tasks of different age groups. It demonstrates a promising and feasible learning paradigm for adapting large foundation models to the cognitive neuroscience and neurology fields.
BrainSCK: Brain Structure and Cognition Alignment via Knowledge Injection and Reactivation for Diagnosing Brain Disorders
[ "Wang, Lilong", "Liu, Mianxin", "Zhang, Shaoting", "Wang, Xiaosong" ]
Conference
[ "https://github.com/openmedlab/BrainSCK" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
156
null
https://papers.miccai.org/miccai-2024/paper/0595_paper.pdf
@InProceedings{ Thi_Conditional_MICCAI2024, author = { Thibeault, Sylvain and Romaguera, Liset Vazquez and Kadoury, Samuel }, title = { { Conditional 4D Motion Diffusion Models with Masked Observations to Forecast Deformations } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Image-guided radiotherapy procedures in the abdominal region require accurate real-time motion management for safe dose delivery. Anticipating future 4D motion using live in-plane imaging is crucial for accurate tumor tracking, which enables sparing normal tissue and reducing recurrence probabilities. However, current real-time tracking methods often require a specific template and volumetric inputs, which is not feasible for online treatments. Generative models remain hindered by several issues, including complex loss functions and training processes. This paper presents a conditional motion diffusion model that handles high-dimensional data describing complex anatomical deformations. A discrete wavelet transform (DWT) maps inputs into a frequency domain, allowing selection of the top features for the denoising process. The end-to-end model includes a masking mechanism of deformation observations, where during training, a motion diffusion model is learned to produce deformations from random noise. For future sequences, a denoising process conditioned on input deformations and time-wise prior distributions is applied to generate smooth and continuous deformation outputs from cine 2D images. Lastly, a temporal 3D local tracking module exploiting latent representations is used to refine the local motion vectors around pre-defined tracked regions. The proposed forecasting technique reduces errors by 62% compared with a 4D conditional Transformer displacement model, with target errors of 1.29±0.95 mm and mean geometrical errors of 1.05±0.53 mm on forecasted abdominal MRI.
Conditional 4D Motion Diffusion Models with Masked Observations to Forecast Deformations
[ "Thibeault, Sylvain", "Romaguera, Liset Vazquez", "Kadoury, Samuel" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
157
null
https://papers.miccai.org/miccai-2024/paper/1173_paper.pdf
@InProceedings{ Xie_DiffDGSS_MICCAI2024, author = { Xie, Yingpeng and Qu, Junlong and Xie, Hai and Wang, Tianfu and Lei, Baiying }, title = { { DiffDGSS: Generalizable Retinal Image Segmentation with Deterministic Representation from Diffusion Models } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Acquiring a comprehensive segmentation map of the retinal image serves as the preliminary step in developing an interpretable diagnostic tool for retinopathy. However, the inherent complexity of retinal anatomical structures and lesions, along with data heterogeneity and annotation scarcity, poses challenges to the development of accurate and generalizable models. Denoising diffusion probabilistic models (DDPM) have recently shown promise in various medical image applications. In this paper, driven by the motivation to leverage strong pre-trained DDPMs, we introduce a novel framework, named DiffDGSS, to exploit the latent representations from the diffusion models for Domain Generalizable Semantic Segmentation (DGSS). In particular, we demonstrate that the deterministic inversion of diffusion models yields robust representations that allow for strong out-of-domain generalization. Subsequently, we develop an adaptive semantic feature interpreter for projecting these representations into an accurate segmentation map. Extensive experiments across various tasks (retinal lesion and vessel segmentation) and settings (cross-domain and cross-modality) demonstrate the superiority of our DiffDGSS over state-of-the-art methods.
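The deterministic inversion mentioned here is commonly realised with DDIM inversion; a minimal sketch under that assumption is below, where `eps_model` and `alpha_bar` stand for any standard epsilon-prediction network and its cumulative noise schedule (not the authors' code).

```python
# Hedged sketch of deterministic DDIM inversion (image -> latent representation).
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alpha_bar, num_steps):
    """Run the DDIM update in reverse (t -> t+1) to obtain a deterministic latent."""
    x = x0
    ts = torch.linspace(0, len(alpha_bar) - 1, num_steps).long()
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        a_cur, a_next = alpha_bar[t_cur], alpha_bar[t_next]
        t_batch = torch.full((x.shape[0],), int(t_cur), device=x.device, dtype=torch.long)
        eps = eps_model(x, t_batch)                              # predicted noise at t_cur
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()  # implied clean estimate
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps  # deterministic step forward in t
    return x   # latent used as a (robust) representation for downstream segmentation
```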
DiffDGSS: Generalizable Retinal Image Segmentation with Deterministic Representation from Diffusion Models
[ "Xie, Yingpeng", "Qu, Junlong", "Xie, Hai", "Wang, Tianfu", "Lei, Baiying" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
158
null
https://papers.miccai.org/miccai-2024/paper/1419_paper.pdf
@InProceedings{ Gam_Automatic_MICCAI2024, author = { Gamal, Mahmoud and Baraka, Marwa and Torki, Marwan }, title = { { Automatic Mandibular Semantic Segmentation of Teeth Pulp Cavity and Root Canals, and Inferior Alveolar Nerve on Pulpy3D Dataset } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Accurate segmentation of the pulp cavity, root canals, and inferior alveolar nerve (IAN) in dental imaging is essential for effective orthodontic interventions. Despite the availability of numerous Cone Beam Computed Tomography (CBCT) scans annotated for individual dental-anatomical structures, there is a lack of a comprehensive dataset covering all necessary parts. As a result, existing deep learning models have encountered challenges due to the scarcity of comprehensive datasets encompassing all relevant anatomical structures. We present our novel Pulpy3D dataset, specifically curated to address the segmentation and identification needs of dental-anatomical structures. Additionally, we noticed that many current deep learning methods in dental imaging prefer 2D segmentation, missing out on the benefits of 3D segmentation. Our study suggests a UNet-based approach capable of segmenting dental structures using 3D volume segmentation, providing a better understanding of spatial relationships and a more precise dental anatomy representation. Pulpy3D contributed to creating the seeding model from 150 scans, which helped complete the remainder of the dataset. Other modifications in the architecture, such as using separate networks, one semantic network, and a multi-task network, were highlighted in the model description to show how versatile the Pulpy3D dataset is and how different models, architectures, and tasks can run on the dataset. Additionally, we stress the lack of attention to pulp segmentation tasks in existing studies, underlining the need for specialized methods in this area. The code and Pulpy3D links can be found at https://github.com/mahmoudgamal0/Pulpy3D
Automatic Mandibular Semantic Segmentation of Teeth Pulp Cavity and Root Canals, and Inferior Alveolar Nerve on Pulpy3D Dataset
[ "Gamal, Mahmoud", "Baraka, Marwa", "Torki, Marwan" ]
Conference
[ "https://github.com/mahmoudgamal0/Pulpy3D" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
159
null
https://papers.miccai.org/miccai-2024/paper/1614_paper.pdf
@InProceedings{ Tan_Follow_MICCAI2024, author = { Tang, Xin and Cao, Zhi and Zhang, Weijing and Zhao, Di and Liao, Hongen and Zhang, Daoqiang and Chen, Fang }, title = { { Follow Sonographers’ Visual Scan-path: Adjusting CNN Model for Diagnosing Gout from Musculoskeletal Ultrasound } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
The current models for automatic gout diagnosis train convolutional neural networks (CNNs) using musculoskeletal ultrasound (MSKUS) images paired with classification labels, which are annotated by skilled sonographers. However, this prevalent diagnostic model overlooks valuable supplementary information derived from sonographers’ annotations, such as the visual scan-path followed by sonographers. We notice that this annotation procedure offers valuable insight into human attention, aiding the CNN model in focusing on crucial features in gouty MSKUS scans, including the double contour sign, tophus, and snowstorm, which play a crucial role in sonographers’ diagnostic decisions. To verify this, we create a gout MSKUS dataset that is enriched with sonographers’ annotation-byproduct visual scan-paths. Furthermore, we introduce a scan-path-based fine-tuning training mechanism (SFT) for gout diagnosis models, leveraging the annotation-byproduct scan-paths for enhanced learning. The experimental results demonstrate the superiority of our SFT method over several SOTA CNNs.
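One plausible way to exploit the scan-path byproduct described here is to supervise the CNN's spatial attention with a heat-map rendered from the sonographer's gaze; the sketch below is such an assumed formulation, not the paper's SFT mechanism verbatim.

```python
# Hedged sketch: pull the CNN's spatial attention towards a scan-path heat-map.
import torch
import torch.nn.functional as F

def attention_map(features):
    """Channel-pooled, softmax-normalised spatial attention, flattened to (B, H*W)."""
    a = features.pow(2).mean(dim=1).flatten(1)
    return torch.softmax(a, dim=1)

def scanpath_loss(features, gaze_heatmap):
    """KL divergence between the gaze distribution and the model attention.
    Assumes gaze_heatmap is already resized to the feature resolution."""
    attn = attention_map(features)
    gaze = gaze_heatmap.flatten(1)
    gaze = gaze / (gaze.sum(dim=1, keepdim=True) + 1e-8)
    return F.kl_div(attn.clamp_min(1e-8).log(), gaze, reduction="batchmean")

def total_loss(logits, labels, features, gaze_heatmap, lam=0.5):
    return F.cross_entropy(logits, labels) + lam * scanpath_loss(features, gaze_heatmap)
```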
Follow Sonographers’ Visual Scan-path: Adjusting CNN Model for Diagnosing Gout from Musculoskeletal Ultrasound
[ "Tang, Xin", "Cao, Zhi", "Zhang, Weijing", "Zhao, Di", "Liao, Hongen", "Zhang, Daoqiang", "Chen, Fang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
160
null
https://papers.miccai.org/miccai-2024/paper/1193_paper.pdf
@InProceedings{ Zhu_DiffuseReg_MICCAI2024, author = { Zhuo, Yongtai and Shen, Yiqing }, title = { { DiffuseReg: Denoising Diffusion Model for Obtaining Deformation Fields in Unsupervised Deformable Image Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Deformable image registration aims to precisely align medical images from different modalities or times. Traditional deep learning methods, while effective, often lack interpretability, real-time observability and adjustment capacity during registration inference. Denoising diffusion models present an alternative by reformulating registration as iterative image denoising. However, existing diffusion registration approaches do not fully harness these capabilities, neglecting the critical sampling phase that enables continuous observability during inference. Hence, we introduce DiffuseReg, an innovative diffusion-based method that denoises deformation fields instead of images for improved transparency. We also propose a novel denoising network built upon the Swin Transformer, which better integrates moving and fixed images with the diffusion time step throughout the denoising process. Furthermore, we enhance control over the denoising registration process with a novel similarity consistency regularization. Experiments on the ACDC dataset demonstrate DiffuseReg outperforms existing diffusion registration methods by 1.32% in Dice score. The sampling process in DiffuseReg enables real-time output observability and adjustment unmatched by previous deep models. The code is available at https://github.com/KUJOYUTA/DiffuseReg
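The core idea, denoising deformation fields rather than images, conditioned on the moving/fixed pair, can be sketched as a standard DDPM-style training step; the sketch assumes a clean target field is available and omits the paper's Swin-based denoiser and its unsupervised similarity consistency regularization.

```python
# Hedged sketch: diffusion training step over deformation fields, conditioned on images.
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, phi0, moving, fixed, alpha_bar):
    """phi0: clean deformation field (B, 2 or 3, H, W[, D]); denoiser is any conditioned network."""
    b = phi0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,), device=phi0.device)
    a = alpha_bar[t].view(b, *([1] * (phi0.dim() - 1)))
    noise = torch.randn_like(phi0)
    phi_t = a.sqrt() * phi0 + (1 - a).sqrt() * noise      # forward noising of the field
    pred = denoiser(phi_t, moving, fixed, t)              # predict the added noise
    return F.mse_loss(pred, noise)
```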
DiffuseReg: Denoising Diffusion Model for Obtaining Deformation Fields in Unsupervised Deformable Image Registration
[ "Zhuo, Yongtai", "Shen, Yiqing" ]
Conference
2410.05234
[ "https://github.com/KUJOYUTA/DiffuseReg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
161
null
https://papers.miccai.org/miccai-2024/paper/2182_paper.pdf
@InProceedings{ Kum_Continual_MICCAI2024, author = { Kumari, Pratibha and Reisenbüchler, Daniel and Luttner, Lucas and Schaadt, Nadine S. and Feuerhake, Friedrich and Merhof, Dorit }, title = { { Continual Domain Incremental Learning for Privacy-aware Digital Pathology } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
In recent years, there has been remarkable progress in the field of digital pathology, driven by the ability to model complex tissue patterns using advanced deep-learning algorithms. However, the robustness of these models is often severely compromised in the presence of data shifts (e.g., different stains, organs, centers, etc.). Alternatively, continual learning (CL) techniques aim to reduce the forgetting of past data when learning new data with distributional shift conditions. Specifically, rehearsal-based CL techniques, which store some past data in a buffer and then replay it with new data, have proven effective in medical image analysis tasks. However, privacy concerns arise as these approaches store past data, prompting the development of our novel Generative Latent Replay-based CL (GLRCL) approach. GLRCL captures the previous distribution through Gaussian Mixture Models instead of storing past samples, which are then utilized to generate features and perform latent replay with new data. We systematically evaluate our proposed framework under different shift conditions in histopathology data, including stain and organ shift. Our approach significantly outperforms popular buffer-free CL approaches and performs similarly to rehearsal-based CL approaches that require large buffers causing serious privacy violations.
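A minimal sketch of the generative latent replay idea, storing one Gaussian Mixture Model per past domain instead of raw samples and sampling synthetic features for rehearsal; component counts and class/method names are illustrative.

```python
# Hedged sketch of GMM-based generative latent replay (no past images are stored).
import numpy as np
from sklearn.mixture import GaussianMixture

class LatentReplayMemory:
    def __init__(self, n_components=16):
        self.n_components = n_components
        self.gmms = []

    def store_domain(self, features):
        """Fit a GMM on the features of the domain just finished."""
        gmm = GaussianMixture(n_components=self.n_components, covariance_type="diag",
                              random_state=0).fit(features)
        self.gmms.append(gmm)

    def replay(self, n_per_domain):
        """Draw synthetic features from every past domain for rehearsal with new data."""
        if not self.gmms:
            return np.empty((0,))
        samples = [g.sample(n_per_domain)[0] for g in self.gmms]
        return np.concatenate(samples, axis=0)
```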
Continual Domain Incremental Learning for Privacy-aware Digital Pathology
[ "Kumari, Pratibha", "Reisenbüchler, Daniel", "Luttner, Lucas", "Schaadt, Nadine S.", "Feuerhake, Friedrich", "Merhof, Dorit" ]
Conference
2409.06455
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
162
null
https://papers.miccai.org/miccai-2024/paper/0423_paper.pdf
@InProceedings{ Lin_Shortcut_MICCAI2024, author = { Lin, Manxi and Weng, Nina and Mikolaj, Kamil and Bashir, Zahra and Svendsen, Morten B. S. and Tolsgaard, Martin G. and Christensen, Anders Nymark and Feragen, Aasa }, title = { { Shortcut Learning in Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Shortcut learning is a phenomenon where machine learning models prioritize learning simple, potentially misleading cues from data that do not generalize well beyond the training set. While existing research primarily investigates this in the realm of image classification, this study extends the exploration of shortcut learning into medical image segmentation. We demonstrate that clinical annotations such as calipers, as well as the combination of zero-padded convolutions and center-cropped training sets, can inadvertently serve as shortcuts, impacting segmentation accuracy. We identify and evaluate shortcut learning on two different but common medical image segmentation tasks. In addition, we suggest strategies to mitigate the influence of shortcut learning and improve the generalizability of the segmentation models. By uncovering the presence and implications of shortcuts in medical image segmentation, we provide insights and methodologies for evaluating and overcoming this pervasive challenge and call the community’s attention to shortcuts in segmentation. Our code is public at https://github.com/nina-weng/shortcut_skinseg .
Shortcut Learning in Medical Image Segmentation
[ "Lin, Manxi", "Weng, Nina", "Mikolaj, Kamil", "Bashir, Zahra", "Svendsen, Morten B. S.", "Tolsgaard, Martin G.", "Christensen, Anders Nymark", "Feragen, Aasa" ]
Conference
2403.06748
[ "https://github.com/nina-weng/shortcut_skinseg" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
163
null
https://papers.miccai.org/miccai-2024/paper/1181_paper.pdf
@InProceedings{ Hao_EMFformer_MICCAI2024, author = { Hao, Zhaoquan and Quan, Hongyan and Lu, Yinbin }, title = { { EMF-former: An Efficient and Memory-Friendly Transformer for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Medical image segmentation is of significant importance for computer-aided diagnosis. In this task, methods based on Convolutional Neural Networks (CNNs) have shown good performance in extracting local features. However, they cannot capture global dependencies, which is crucial for medical images. On the other hand, Transformer-based methods can establish global dependencies through self-attention, providing a supplement to local convolution. However, the expensive matrix multiplication in the self-attention of a vanilla transformer and the memory usage are still bottlenecks. In this work, we propose a segmentation model named EMF-former. By combining DWConv, channel shuffle and PWConv, we design a Depthwise Separable Shuffled Convolution Module (DSPConv) to reduce the parameter count of convolutions. Additionally, we employ an efficient Vector Aggregation Attention (VAA) that substitutes key-value interactions with element-wise multiplication after broadcasting two vectors to reduce computational complexity. Moreover, we substitute the parallel multi-head attention module with the Serial Multi-Head Attention Module (S-MHA) to reduce feature redundancy and memory usage in multi-head attention. Combining the above modules, EMF-former performs medical image segmentation efficiently with fewer parameters, lower computational complexity and lower memory usage while preserving segmentation accuracy. We conduct experimental evaluations on the ACDC and Hippocampus datasets, achieving mIOU values of 80.5% and 78.8%, respectively.
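The two building blocks named here (DSPConv: depthwise convolution, channel shuffle, pointwise convolution; and VAA: key-value interaction replaced by broadcast element-wise products) can be sketched as small PyTorch modules; the exact wiring and hyper-parameters are assumptions, not the authors' implementation.

```python
# Hedged sketch of DSPConv and Vector Aggregation Attention.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class DSPConv(nn.Module):
    """Depthwise conv -> channel shuffle -> pointwise conv."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.dw = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pw = nn.Conv2d(in_ch, out_ch, 1)
        self.groups = groups

    def forward(self, x):
        return self.pw(channel_shuffle(self.dw(x), self.groups))

class VectorAggregationAttention(nn.Module):
    """Replaces key-value matrix products with a broadcast element-wise interaction."""
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, x):                      # x: (B, N, C) token sequence
        q, k, v = self.q(x), self.k(x), self.v(x)
        context = torch.softmax(k, dim=1).mul(v).sum(dim=1, keepdim=True)  # (B, 1, C)
        return q * context                     # broadcast element-wise multiplication
```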
EMF-former: An Efficient and Memory-Friendly Transformer for Medical Image Segmentation
[ "Hao, Zhaoquan", "Quan, Hongyan", "Lu, Yinbin" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
164
null
https://papers.miccai.org/miccai-2024/paper/2202_paper.pdf
@InProceedings{ Mil_AutoSkull_MICCAI2024, author = { Milojevic, Aleksandar and Peter, Daniel and Huber, Niko B. and Azevedo, Luis and Latyshev, Andrei and Sailer, Irena and Gross, Markus and Thomaszewski, Bernhard and Solenthaler, Barbara and Gözcü, Baran }, title = { { AutoSkull: Learning-based Skull Estimation for Automated Pipelines } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
In medical imaging, accurately representing facial features is crucial for applications such as radiation-free medical visualizations and treatment simulations. We aim to predict skull shapes from 3D facial scans with high accuracy, prioritizing simplicity for seamless integration into automated pipelines. Our method trains an MLP network on PCA coefficients using data from registered skin- and skull-mesh pairs obtained from CBCT scans, which is then used to infer the skull shape for a given skin surface. By incorporating teeth positions as additional prior information extracted from intraoral scans, we further improve the accuracy of the model, outperforming previous work. We showcase a clinical application of our work, where the inferred skull information is used in an FEM model to compute the outcome of an orthodontic treatment.
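A minimal sketch of the described pipeline, regressing skull PCA coefficients from skin PCA coefficients with an MLP; mesh preprocessing, mode counts and network sizes are illustrative assumptions.

```python
# Hedged sketch: PCA coefficients of registered skin meshes -> MLP -> skull PCA coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def fit_skull_predictor(skin_verts, skull_verts, n_modes=40):
    """skin_verts / skull_verts: (n_subjects, n_points*3) flattened, registered mesh pairs."""
    skin_pca, skull_pca = PCA(n_modes).fit(skin_verts), PCA(n_modes).fit(skull_verts)
    X = skin_pca.transform(skin_verts)
    Y = skull_pca.transform(skull_verts)
    mlp = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=2000,
                       random_state=0).fit(X, Y)
    return skin_pca, skull_pca, mlp

def predict_skull(skin_pca, skull_pca, mlp, new_skin):
    """new_skin: flattened vertices of a new face scan registered to the template."""
    coeffs = mlp.predict(skin_pca.transform(new_skin.reshape(1, -1)))
    return skull_pca.inverse_transform(coeffs).reshape(-1, 3)   # predicted skull vertices
```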
AutoSkull: Learning-based Skull Estimation for Automated Pipelines
[ "Milojevic, Aleksandar", "Peter, Daniel", "Huber, Niko B.", "Azevedo, Luis", "Latyshev, Andrei", "Sailer, Irena", "Gross, Markus", "Thomaszewski, Bernhard", "Solenthaler, Barbara", "Gözcü, Baran" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
165
null
https://papers.miccai.org/miccai-2024/paper/2262_paper.pdf
@InProceedings{ Zha_Biophysics_MICCAI2024, author = { Zhang, Lipei and Cheng, Yanqi and Liu, Lihao and Schönlieb, Carola-Bibiane and Aviles-Rivero, Angelica I }, title = { { Biophysics Informed Pathological Regularisation for Brain Tumour Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Recent advancements in deep learning have significantly improved brain tumour segmentation techniques; however, the results still lack confidence and robustness as they solely consider image data without biophysical priors or pathological information. Integrating biophysics-informed regularisation is one effective way to change this situation, as it provides a prior regularisation for automated end-to-end learning. In this paper, we propose a novel approach that designs brain tumour growth Partial Differential Equation (PDE) models as a regularisation for deep learning, operational with any network model. Our method introduces tumour growth PDE models directly into the segmentation process, improving accuracy and robustness, especially in data-scarce scenarios. This system estimates tumour cell density using a periodic activation function. By effectively integrating this estimation with biophysical models, we achieve a better capture of tumour characteristics. This approach not only aligns the segmentation closer to actual biological behaviour but also strengthens the model’s performance under limited data conditions. We demonstrate the effectiveness of our framework through extensive experiments on the BraTS 2023 dataset, showcasing significant improvements in both precision and reliability of tumour segmentation.
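Two ingredients mentioned here, a periodic (sine) activation for tumour cell density estimation and a tumour-growth PDE used as a regulariser, can be sketched as follows; the Fisher-KPP-style residual and all coefficients are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch: sine-activated density head plus a reaction-diffusion residual penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, omega=30.0):
        super().__init__()
        self.linear, self.omega = nn.Linear(in_f, out_f), omega
    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class DensityHead(nn.Module):
    """Maps per-pixel features to a tumour cell density in [0, 1]."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(SineLayer(feat_dim, hidden),
                                 SineLayer(hidden, hidden),
                                 nn.Linear(hidden, 1), nn.Sigmoid())
    def forward(self, feats):                    # feats: (B, H, W, feat_dim)
        return self.net(feats).squeeze(-1)       # density u: (B, H, W)

def pde_residual_loss(u, D=0.1, rho=1.0):
    """Penalise || D * laplacian(u) + rho * u * (1 - u) ||^2 on the predicted density map."""
    lap_kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                              device=u.device).view(1, 1, 3, 3)
    lap = F.conv2d(u.unsqueeze(1), lap_kernel, padding=1)
    residual = D * lap + rho * u.unsqueeze(1) * (1 - u.unsqueeze(1))
    return residual.pow(2).mean()
```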
Biophysics Informed Pathological Regularisation for Brain Tumour Segmentation
[ "Zhang, Lipei", "Cheng, Yanqi", "Liu, Lihao", "Schönlieb, Carola-Bibiane", "Aviles-Rivero, Angelica I" ]
Conference
2403.09136
[ "https://github.com/uceclz0/biophy_brats" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
166
null
https://papers.miccai.org/miccai-2024/paper/1108_paper.pdf
@InProceedings{ Hu_AScanning_MICCAI2024, author = { Hu, Yichen and Wang, Chao and Song, Weitao and Tiulpin, Aleksei and Liu, Qing }, title = { { A Scanning Laser Ophthalmoscopy Image Database and Trustworthy Retinal Disease Detection Method } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Scanning laser ophthalmoscopy (SLO) images provide ophthalmologists with a non-invasive way to examine the retina for diagnostic and treatment purposes. Manually reading SLO images is a tedious task for ophthalmologists. Thus, developing trustworthy disease detection algorithms becomes urgent. However, up to now, there are no large-scale SLO image databases. In this paper, we collect and release a new SLO image dataset, named Retina-SLO, containing 7943 images of 4102 eyes from 2440 subjects with labels of three diseases, i.e., macular edema (ME), diabetic retinopathy (DR), and glaucoma. To our knowledge, Retina-SLO is the largest publicly available SLO image dataset for multiple retinal disease detection. While numerous deep learning-based methods for disease detection with medical images have been proposed, they ignore model trust. Particularly, from a user’s perspective, the detection model is highly untrustworthy if it makes inconsistent predictions on different SLO images of the same eye captured within relatively short time intervals. To solve this issue, we propose TrustDetector, a novel disease detection method, leveraging eye-wise consistency learning and rank-based contrastive learning to ensure consistent predictions and ordered representations aligned with disease severity levels on SLO images. Experimental results show that our TrustDetector achieves better detection performance and higher consistency than state-of-the-art methods. Dataset and code are available at https://drive.google.com/drive/TrustDetector/Retina-SLO.
A Scanning Laser Ophthalmoscopy Image Database and Trustworthy Retinal Disease Detection Method
[ "Hu, Yichen", "Wang, Chao", "Song, Weitao", "Tiulpin, Aleksei", "Liu, Qing" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
167
null
https://papers.miccai.org/miccai-2024/paper/1926_paper.pdf
@InProceedings{ Wan_LSSNet_MICCAI2024, author = { Wang, Wei and Sun, Huiying and Wang, Xin }, title = { { LSSNet: A Method for Colon Polyp Segmentation Based on Local Feature Supplementation and Shallow Feature Supplementation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Accurate polyp segmentation methods are essential for colon polyp screening and colorectal cancer diagnosis. However, polyp segmentation faces the following challenges: (1) Small-sized polyps are easily lost during the identification process. (2) The boundaries separating the polyp from its surroundings are fuzzy. (3) Additional distracting information is introduced during the colonoscopy procedure, resulting in noise in the colonoscopy image and influencing the segmentation outcomes. To cope with these three challenges, a method for colon polyp segmentation based on local feature supplementation and shallow feature supplementation (LSSNet) is proposed by incorporating feature supplementation structures in the encoder-decoder structure. The multiscale feature extraction (MFE) module is designed to extract local features, the interlayer attention fusion (IAF) module is designed to fuse supplementary features with the current layer features, and the semantic gap reduction (SGR) module is designed to reduce the semantic gaps between the layers, which together form the local feature supplementation structure. The shallow feature supplementation (SFS) module is designed to supplement the features in the fuzzy areas. Based on these four modules LSSNet is proposed. LSSNet is evaluated on five datasets: ClinicDB, KvasirSEG, ETIS, ColonDB, and EndoScene. The results show that mDice scores are improved by 1.33%, 0.74%, 2.65%, 1.08%, and 0.62% respectively over the compared state-of-the-art methods. The codes are available at https://github.com/heyeying/LSSNet.
LSSNet: A Method for Colon Polyp Segmentation Based on Local Feature Supplementation and Shallow Feature Supplementation
[ "Wang, Wei", "Sun, Huiying", "Wang, Xin" ]
Conference
[ "https://github.com/heyeying/LSSNet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
168
null
https://papers.miccai.org/miccai-2024/paper/0366_paper.pdf
@InProceedings{ Liu_FedFMS_MICCAI2024, author = { Liu, Yuxi and Luo, Guibo and Zhu, Yuesheng }, title = { { FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Medical image segmentation is crucial for clinical diagnosis. The Segmentation Anything Model (SAM) serves as a powerful foundation model for visual segmentation and can be adapted for medical image segmentation. However, medical imaging data typically contain privacy-sensitive information, making it challenging to train foundation models with centralized storage and sharing. To date, there are few foundation models tailored for medical image deployment within the federated learning framework, and the segmentation performance, as well as the efficiency of communication and training, remain unexplored. In response to these issues, we developed Federated Foundation models for Medical image Segmentation (FedFMS), which includes the Federated SAM (FedSAM) and a communication and training-efficient Federated SAM with Medical SAM Adapter (FedMSA). Comprehensive experiments on diverse datasets are conducted to investigate the performance disparities between centralized training and federated learning across various configurations of FedFMS. The experiments revealed that FedFMS could achieve performance comparable to models trained via centralized training methods while maintaining privacy. Furthermore, FedMSA demonstrated the potential to enhance communication and training efficiency. Our model implementation codes are available at https://github.com/LIU-YUXI/FedFMS.
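The federated side of FedSAM/FedMSA-style training reduces to clients fine-tuning locally (for example, only adapter weights) and a server averaging them; a minimal weighted-averaging sketch is below, not the released FedFMS code.

```python
# Hedged sketch of the server-side aggregation (FedAvg) over trainable parameters.
import copy
import torch

def federated_average(client_states, client_sizes):
    """Weighted average of per-client state dicts; weights are client dataset sizes."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key].float() * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg

# One round (sketch):
#   states = [train_locally(copy.deepcopy(global_model), data_k) for each client k]
#   global_model.load_state_dict(federated_average(states, sizes), strict=False)
```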
FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation
[ "Liu, Yuxi", "Luo, Guibo", "Zhu, Yuesheng" ]
Conference
2403.05408
[ "https://github.com/LMIAPC/FednnU-Net" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
169
null
https://papers.miccai.org/miccai-2024/paper/1131_paper.pdf
@InProceedings{ Esh_ESPA_MICCAI2024, author = { Eshaghzadeh Torbati, Mahbaneh and Minhas, Davneet S. and Tafti, Ahmad P. and DeCarli, Charles S. and Tudorascu, Dana L. and Hwang, Seong Jae }, title = { { ESPA: An Unsupervised Harmonization Framework via Enhanced Structure Preserving Augmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
The rising interest in pooling neuroimaging data from various sources presents challenges regarding scanner variability, known as scanner effects. While numerous harmonization methods aim to tackle these effects, they face issues with model robustness, brain structural modifications, and over-correction. To combat these issues, we propose a novel harmonization approach centered on simulating scanner effects through augmentation methods. This strategy enhances model robustness by providing extensive simulated matched data, comprising sets of images with similar brain but varying scanner effects. Our proposed method, ESPA, is an unsupervised harmonization framework via Enhanced Structure Preserving Augmentation. Additionally, we introduce two domain-adaptation augmentations: tissue-type contrast augmentation and GAN-based residual augmentation, both focusing on appearance-based changes to address structural modifications. While the former adapts images to the tissue-type contrast distribution of a target scanner, the latter generates residuals added to the original image for more complex scanner adaptation. These augmentations assist ESPA in mitigating over-correction through data stratification or population matching strategies during augmentation configuration. Notably, we leverage our unique in-house matched dataset as a benchmark to compare ESPA against supervised and unsupervised state-of-the-art (SOTA) harmonization methods. Our study marks the first attempt, to the best of our knowledge, to address harmonization by simulating scanner effects. Our results demonstrate the successful simulation of scanner effects, with ESPA outperforming SOTA methods using this harmonization approach.
ESPA: An Unsupervised Harmonization Framework via Enhanced Structure Preserving Augmentation
[ "Eshaghzadeh Torbati, Mahbaneh", "Minhas, Davneet S.", "Tafti, Ahmad P.", "DeCarli, Charles S.", "Tudorascu, Dana L.", "Hwang, Seong Jae" ]
Conference
[ "https://github.com/Mahbaneh/ESPA.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
170
null
https://papers.miccai.org/miccai-2024/paper/3622_paper.pdf
@InProceedings{ Hej_Conditional_MICCAI2024, author = { Hejrati, Behzad and Banerjee, Soumyanil and Glide-Hurst, Carri and Dong, Ming }, title = { { Conditional diffusion model with spatial attention and latent embedding for medical image segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15009 }, month = {October}, pages = { pending }, }
Diffusion models have been used extensively for high quality image and video generation tasks. In this paper, we propose a novel conditional diffusion model with spatial attention and latent embedding (cDAL) for medical image segmentation. In cDAL, a convolutional neural network (CNN) based discriminator is used at every time-step of the diffusion process to distinguish between the generated labels and the real ones. A spatial attention map is computed based on the features learned by the discriminator to help cDAL generate more accurate segmentation of discriminative regions in an input image. Additionally, we incorporated a random latent embedding into each layer of our model to significantly reduce the number of training and sampling time-steps, thereby making it much faster than other diffusion models for image segmentation. We applied cDAL on 3 publicly available medical image segmentation datasets (MoNuSeg, Chest X-ray and Hippocampus) and observed significant qualitative and quantitative improvements with higher Dice scores and mIoU over the state-of-the-art algorithms. The source code is publicly available at https://github.com/Hejrati/cDAL/.
Conditional diffusion model with spatial attention and latent embedding for medical image segmentation
[ "Hejrati, Behzad", "Banerjee, Soumyanil", "Glide-Hurst, Carri", "Dong, Ming" ]
Conference
[ "https://github.com/Hejrati/cDAL/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
171
null
https://papers.miccai.org/miccai-2024/paper/1464_paper.pdf
@InProceedings{ Jau_Anatomyguided_MICCAI2024, author = { Jaus, Alexander and Seibold, Constantin and Reiß, Simon and Heine, Lukas and Schily, Anton and Kim, Moon and Bahnsen, Fin Hendrik and Herrmann, Ken and Stiefelhagen, Rainer and Kleesiek, Jens }, title = { { Anatomy-guided Pathology Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Pathological structures in medical images are typically deviations from the expected anatomy of a patient. While clinicians consider this interplay between anatomy and pathology, recent deep learning algorithms specialize in recognizing either one of the two, rarely considering the patient’s body from such a joint perspective. In this paper, we develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features. Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy and interleaves them via a mixing strategy into the pathology-decoder for anatomy-informed pathology predictions. In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods. Code and models are available at github.com/alexanderjaus/APEx.
Anatomy-guided Pathology Segmentation
[ "Jaus, Alexander", "Seibold, Constantin", "Reiß, Simon", "Heine, Lukas", "Schily, Anton", "Kim, Moon", "Bahnsen, Fin Hendrik", "Herrmann, Ken", "Stiefelhagen, Rainer", "Kleesiek, Jens" ]
Conference
2407.05844
[ "https://github.com/alexanderjaus/APEx" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
172
null
https://papers.miccai.org/miccai-2024/paper/1002_paper.pdf
@InProceedings{ Xia_Mitigating_MICCAI2024, author = { Xia, Tian and Roschewitz, Mélanie and De Sousa Ribeiro, Fabio and Jones, Charles and Glocker, Ben }, title = { { Mitigating attribute amplification in counterfactual image generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Causal generative modelling is gaining interest in medical imaging due to its ability to answer interventional and counterfactual queries. Most work focuses on generating counterfactual images that look plausible, using auxiliary classifiers to enforce effectiveness of simulated interventions. We investigate pitfalls in this approach, discovering the issue of attribute amplification, where unrelated attributes are spuriously affected during interventions, leading to biases across protected characteristics and disease status. We show that attribute amplification is caused by the use of hard labels in the counterfactual training process and propose soft counterfactual fine-tuning to mitigate this issue. Our method substantially reduces the amplification effect while maintaining effectiveness of generated images, demonstrated on a large chest X-ray dataset. Our work makes an important advancement towards more faithful and unbiased causal modelling in medical imaging. Code available at https://github.com/biomedia-mira/attribute-amplification.
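The mitigation described here replaces hard one-hot targets with soft classifier outputs when supervising counterfactuals; the sketch below contrasts the two under an assumed formulation (a KL term on non-intervened attributes), with all names hypothetical.

```python
# Hedged sketch: hard-label vs soft-label supervision of counterfactual images.
import torch
import torch.nn.functional as F

def hard_label_loss(classifier, counterfactuals, target_attribute):
    """Push counterfactuals towards one-hot attribute labels (prone to amplification)."""
    return F.cross_entropy(classifier(counterfactuals), target_attribute)

def soft_label_loss(classifier, counterfactuals, factual_images, intervened):
    """Keep the classifier's soft prediction unchanged for attributes that were NOT intervened on.
    intervened: boolean tensor (B,) marking samples whose attribute was changed."""
    with torch.no_grad():
        target_probs = torch.softmax(classifier(factual_images), dim=1)
    log_probs = torch.log_softmax(classifier(counterfactuals), dim=1)
    kl = F.kl_div(log_probs, target_probs, reduction="none").sum(dim=1)
    return (kl * (~intervened).float()).mean()
```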
Mitigating attribute amplification in counterfactual image generation
[ "Xia, Tian", "Roschewitz, Mélanie", "De Sousa Ribeiro, Fabio", "Jones, Charles", "Glocker, Ben" ]
Conference
2403.09422
[ "https://github.com/biomedia-mira/attribute-amplification" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
173
null
https://papers.miccai.org/miccai-2024/paper/2587_paper.pdf
@InProceedings{ Qia_Towards_MICCAI2024, author = { Qian, Kui and Qiao, Litao and Friedman, Beth and O’Donnell, Edward and Kleinfeld, David and Freund, Yoav }, title = { { Towards Explainable Automated Neuroanatomy } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15003 }, month = {October}, pages = { pending }, }
We present a novel method for quantifying the microscopic structure of brain tissue. It is based on the automated recognition of interpretable features obtained by analyzing the shapes of cells. This contrasts with prevailing methods of brain anatomical analysis in two ways. First, contemporary methods use gray-scale values derived from smoothed versions of the anatomical images, which dissipates valuable information from the texture of the images. Second, contemporary analysis uses the output of black-box Convolutional Neural Networks, while our system makes decisions based on interpretable features obtained by analyzing the shapes of individual cells. An important benefit of this open-box approach is that the anatomist can understand and correct the decisions made by the computer. Our proposed system can accurately localize and identify existing brain structures. This can be used to align and co-register brains and will facilitate connectomic studies for reverse engineering of brain circuitry.
Towards Explainable Automated Neuroanatomy
[ "Qian, Kui", "Qiao, Litao", "Friedman, Beth", "O’Donnell, Edward", "Kleinfeld, David", "Freund, Yoav" ]
Conference
2404.05814
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
174
null
https://papers.miccai.org/miccai-2024/paper/1569_paper.pdf
@InProceedings{ Liu_CUTS_MICCAI2024, author = { Liu, Chen and Amodio, Matthew and Shen, Liangbo L. and Gao, Feng and Avesta, Arman and Aneja, Sanjay and Wang, Jay C. and Del Priore, Lucian V. and Krishnaswamy, Smita }, title = { { CUTS: A Deep Learning and Topological Framework for Multigranular Unsupervised Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15008 }, month = {October}, pages = { pending }, }
Segmenting medical images is critical to facilitating both patient diagnoses and quantitative research. A major limiting factor is the lack of labeled data, as obtaining expert annotations for each new set of imaging data and task can be labor intensive and inconsistent among annotators. We present CUTS, an unsupervised deep learning framework for medical image segmentation. CUTS operates in two stages. For each image, it produces an embedding map via intra-image contrastive learning and local patch reconstruction. Then, these embeddings are partitioned at dynamic granularity levels that correspond to the data topology. CUTS yields a series of coarse-to-fine-grained segmentations that highlight features at various granularities. We applied CUTS to retinal fundus images and two types of brain MRI images to delineate structures and patterns at different scales. When evaluated against predefined anatomical masks, CUTS improved the dice coefficient and Hausdorff distance by at least 10% compared to existing unsupervised methods. Finally, CUTS showed performance on par with Segment Anything Models (SAM, MedSAM, SAM-Med2D) pre-trained on gigantic labeled datasets.
CUTS: A Deep Learning and Topological Framework for Multigranular Unsupervised Medical Image Segmentation
[ "Liu, Chen", "Amodio, Matthew", "Shen, Liangbo L.", "Gao, Feng", "Avesta, Arman", "Aneja, Sanjay", "Wang, Jay C.", "Del Priore, Lucian V.", "Krishnaswamy, Smita" ]
Conference
2209.11359
[ "https://github.com/KrishnaswamyLab/CUTS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
175
null
https://papers.miccai.org/miccai-2024/paper/3636_paper.pdf
@InProceedings{ Aka_CheXtriev_MICCAI2024, author = { Akash R. J., Naren and Tadanki, Arihanth and Sivaswamy, Jayanthi }, title = { { CheXtriev: Anatomy-Centered Representation for Case-Based Retrieval of Chest Radiographs } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
We present CheXtriev, a graph-based, anatomy-aware framework for chest radiograph retrieval. Unlike prior methods focussed on global features, our method leverages graph transformers to extract informative features from specific anatomical regions. Furthermore, it captures spatial context and the interplay between anatomical location and findings. This contextualization, grounded in evidence-based anatomy, results in a richer anatomy-aware representation and leads to more accurate, effective and efficient retrieval, particularly for less prevalent findings. CheXtriev outperforms state-of-the-art global and local approaches by 18% to 26% in retrieval accuracy and 11% to 23% in ranking quality.
CheXtriev: Anatomy-Centered Representation for Case-Based Retrieval of Chest Radiographs
[ "Akash R. J., Naren", "Tadanki, Arihanth", "Sivaswamy, Jayanthi" ]
Conference
[ "https://github.com/cvit-mip/chextriev" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
176
null
https://papers.miccai.org/miccai-2024/paper/4152_paper.pdf
@InProceedings{ Ju_Universal_MICCAI2024, author = { Ju, Lie and Wu, Yicheng and Feng, Wei and Yu, Zhen and Wang, Lin and Zhu, Zhuoting and Ge, Zongyuan }, title = { { Universal Semi-Supervised Learning for Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Semi-supervised learning (SSL) has attracted much attention since it reduces the expensive costs of collecting adequate well-labeled training data, especially for deep learning methods. However, traditional SSL is built upon the assumption that labeled and unlabeled data are from the same distribution, e.g., the same classes and domains. In practical scenarios, unlabeled data may come from unseen classes or unseen domains, and it is still challenging for existing SSL methods to exploit them. Therefore, in this paper, we propose a unified framework to leverage these unseen unlabeled data for open-scenario semi-supervised medical image classification. We first design a novel scoring mechanism, called dual-path outliers estimation, to identify samples from unseen classes. Meanwhile, to extract unseen-domain samples, we then apply an effective variational autoencoder (VAE) pre-training. After that, we conduct domain adaptation to fully exploit the value of the detected unseen-domain samples to boost semi-supervised training. We evaluated our proposed framework on dermatology and ophthalmology tasks. Extensive experiments demonstrate our model can achieve superior classification performance in various medical SSL scenarios.
Universal Semi-Supervised Learning for Medical Image Classification
[ "Ju, Lie", "Wu, Yicheng", "Feng, Wei", "Yu, Zhen", "Wang, Lin", "Zhu, Zhuoting", "Ge, Zongyuan" ]
Conference
2304.04059
[ "https://github.com/PyJulie/USSL4MIC" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
177
null
https://papers.miccai.org/miccai-2024/paper/3350_paper.pdf
@InProceedings{ Hu_LGA_MICCAI2024, author = { Hu, Jihong and Li, Yinhao and Sun, Hao and Song, Yu and Zhang, Chujie and Lin, Lanfen and Chen, Yen-Wei }, title = { { LGA: A Language Guide Adapter for Advancing the SAM Model’s Capabilities in Medical Image Segmentation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
In addressing the unique challenges of medical image segmentation, foundation models like the Segment Anything Model (SAM), originally developed for natural images, often falter due to the distinct nature of medical images. This study introduces the Language Guided Adapter (LGA), a parameter-efficient fine-tuning approach that extends SAM’s utility to medical segmentation tasks. Through the integration of textual data from medical reports via a pretrained BERT model into embeddings, LGA combines these embeddings with the image features in SAM’s image encoder using Feature Fusion Modules (FFM). Our method significantly enhances model performance and reduces computational overhead by freezing most parameters during the fine-tuning process. Evaluated on the CT-based MosMedData+ and the X-ray dataset QaTa-COV19, LGA demonstrates its effectiveness and adaptability, achieving competitive results with a significant reduction in the number of parameters required for fine-tuning compared to SOTA medical segmentation models. This enhancement underscores the potential of foundation models, leveraging the integration of multimodal knowledge as a pivotal approach for application in specialized medical tasks, thus charting a course towards more precise and adaptable diagnostic methodologies.
LGA: A Language Guide Adapter for Advancing the SAM Model’s Capabilities in Medical Image Segmentation
[ "Hu, Jihong", "Li, Yinhao", "Sun, Hao", "Song, Yu", "Zhang, Chujie", "Lin, Lanfen", "Chen, Yen-Wei" ]
Conference
[ "https://github.com/JiHooooo/LGA/" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
178
null
https://papers.miccai.org/miccai-2024/paper/3227_paper.pdf
@InProceedings{ Pen_Advancing_MICCAI2024, author = { Peng, Qiong and Lin, Weiping and Hu, Yihuang and Bao, Ailisi and Lian, Chenyu and Wei, Weiwei and Yue, Meng and Liu, Jingxin and Yu, Lequan and Wang, Liansheng }, title = { { Advancing H&E-to-IHC Virtual Staining with Task-Specific Domain Knowledge for HER2 Scoring } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
The assessment of HER2 expression is crucial in diagnosing breast cancer. Staining pathological tissues with immunohistochemistry (IHC) is a critically pivotal step in the assessment procedure, while it is expensive and time-consuming. Recently, generative models have emerged as a novel paradigm for virtual staining from hematoxylin-eosin (H&E) to IHC. Unlike traditional image translation tasks, virtual staining in IHC for HER2 scoring requires greater attention to regions like nuclei and stained membranes, informed by task-specific domain knowledge. Unfortunately, most existing virtual staining methods overlook this point. In this paper, we propose a novel generative adversarial network (GAN) based solution that incorporates specific knowledge of HER2 scoring, i.e., nuclei distribution and membrane staining intensity. We introduce a nuclei density estimator to learn the nuclei distribution and thus facilitate the cell alignment between the real and generated images by an auxiliary regularization branch. Moreover, another branch is tailored to focus on the stained membranes, ensuring a more consistent membrane staining intensity. We collect RegH2I, a dataset comprising 2592 pairs of registered H&E-IHC images and conduct extensive experiments to evaluate our approach, including H&E-to-IHC virtual staining on internal and external datasets, nuclei distribution and membrane staining intensity analysis, as well as downstream tasks for generated images. The results demonstrate that our method achieves superior performance than existing methods. Code and dataset are released at https://github.com/balball/TDKstain.
Advancing H&E-to-IHC Virtual Staining with Task-Specific Domain Knowledge for HER2 Scoring
[ "Peng, Qiong", "Lin, Weiping", "Hu, Yihuang", "Bao, Ailisi", "Lian, Chenyu", "Wei, Weiwei", "Yue, Meng", "Liu, Jingxin", "Yu, Lequan", "Wang, Liansheng" ]
Conference
[ "https://github.com/balball/TDKstain" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
179
null
https://papers.miccai.org/miccai-2024/paper/3238_paper.pdf
@InProceedings{ Guo_Unsupervised_MICCAI2024, author = { Guo, Juncheng and Lin, Jianxin and Tan, Guanghua and Lu, Yuhuan and Gao, Zhan and Li, Shengli and Li, Kenli }, title = { { Unsupervised Ultrasound Image Quality Assessment with Score Consistency and Relativity Co-learning } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Selecting an optimal standard plane in prenatal ultrasound is crucial for improving the accuracy of AI-assisted diagnosis. Existing approaches, typically dependent on detecting the presence of anatomical structures as defined by clinical protocols, have been constrained by a lack of consideration for image perceptual quality. Although supervised training with manually labeled quality scores seems feasible, the subjective nature and unclear definition of these scores make such learning error-prone and manual labeling excessively time-consuming. In this paper, we present an unsupervised ultrasound image quality assessment method with score consistency and relativity co-learning (CRL-UIQA). Our approach generates pseudo-labels by calculating feature distribution distances between ultrasound images and high-quality standard planes, leveraging consistency and relativity for training regression networks in quality prediction. Extensive experiments on the dataset demonstrate the impressive performance of the proposed CRL-UIQA, showcasing excellent generalization across diverse plane images.
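As a rough illustration of the pseudo-labelling idea described above (feature-distribution distance to high-quality standard planes), a minimal sketch might look like the following; the normalisation and the choice of a single cluster centre are assumptions rather than the paper's exact formulation.

```python
import torch

def pseudo_quality_scores(feats, standard_feats, eps=1e-8):
    """Assumed scheme: score an image by its feature distance to the mean
    embedding of high-quality standard planes, mapped into [0, 1]."""
    center = standard_feats.mean(dim=0, keepdim=True)            # (1, D)
    dist = torch.cdist(feats, center).squeeze(1)                 # (N,)
    dist = (dist - dist.min()) / (dist.max() - dist.min() + eps)
    return 1.0 - dist   # closer to the standard-plane cluster -> higher pseudo score

feats = torch.randn(16, 512)        # embeddings of unlabeled frames
standard = torch.randn(100, 512)    # embeddings of curated standard planes
print(pseudo_quality_scores(feats, standard))
```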
Unsupervised Ultrasound Image Quality Assessment with Score Consistency and Relativity Co-learning
[ "Guo, Juncheng", "Lin, Jianxin", "Tan, Guanghua", "Lu, Yuhuan", "Gao, Zhan", "Li, Shengli", "Li, Kenli" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
180
null
https://papers.miccai.org/miccai-2024/paper/1448_paper.pdf
@InProceedings{ Zen_Realistic_MICCAI2024, author = { Zeng, Tianle and Loza Galindo, Gerardo and Hu, Junlei and Valdastri, Pietro and Jones, Dominic }, title = { { Realistic Surgical Image Dataset Generation Based On 3D Gaussian Splatting } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Computer vision technologies markedly enhance the automation capabilities of robotic-assisted minimally invasive surgery (RAMIS) through advanced tool tracking, detection, and localization. However, the limited availability of comprehensive surgical datasets for training represents a significant challenge in this field. This research introduces a novel method that employs 3D Gaussian Splatting to generate synthetic surgical datasets. We propose a method for extracting 3D Gaussian representations of surgical instruments and background operating environments, then transforming and combining them to generate high-fidelity synthetic surgical scenarios. We developed a data recording system capable of acquiring images alongside tool and camera poses in a surgical scene. Using this pose data, we synthetically replicate the scene, thereby enabling direct comparisons of the synthetic image quality (27.796±1.796 PSNR). As a further validation, we compared two YOLOv5 models trained on the synthetic and real data, respectively, and assessed their performance on an unseen real-world test dataset. Comparing the two, the synthetic-trained model outperforms the real-world-trained model by 12% when both are tested on real-world data.
Realistic Surgical Image Dataset Generation Based On 3D Gaussian Splatting
[ "Zeng, Tianle", "Loza Galindo, Gerardo", "Hu, Junlei", "Valdastri, Pietro", "Jones, Dominic" ]
Conference
2407.14846
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
181
null
https://papers.miccai.org/miccai-2024/paper/0225_paper.pdf
@InProceedings{ Cui_EndoDAC_MICCAI2024, author = { Cui, Beilei and Islam, Mobarakol and Bai, Long and Wang, An and Ren, Hongliang }, title = { { EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Depth estimation plays a crucial role in various tasks within endoscopic surgery, including navigation, surface reconstruction, and augmented reality visualization. Despite the significant achievements of foundation models in vision tasks, including depth estimation, their direct application to the medical domain often results in suboptimal performance. This highlights the need for efficient methods to adapt these models to endoscopic depth estimation. We propose Endoscopic Depth Any Camera (EndoDAC), an efficient self-supervised depth estimation framework that adapts foundation models to endoscopic scenes. Specifically, we develop the Dynamic Vector-Based Low-Rank Adaptation (DV-LoRA) and employ Convolutional Neck blocks to tailor the foundational model to the surgical domain, utilizing remarkably few trainable parameters. Given that camera information is not always accessible, we also introduce a self-supervised adaptation strategy that estimates camera intrinsics using the pose encoder. Our framework is capable of being trained solely on monocular surgical videos from any camera, ensuring minimal training costs. Experiments demonstrate that our approach obtains superior performance even with fewer training epochs and without knowledge of the ground-truth camera intrinsics. Code is available at https://github.com/BeileiCui/EndoDAC.
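DV-LoRA itself is not detailed in the abstract; the sketch below shows a generic LoRA-style adapter with a learnable per-rank scaling vector standing in for the "dynamic vector", which is an assumption about the design rather than the authors' code.

```python
import torch
import torch.nn as nn

class DVLoRALinear(nn.Module):
    """Sketch of a low-rank adapter: a frozen base weight plus a trainable
    low-rank update scaled by a learnable per-rank vector (assumed design)."""
    def __init__(self, base: nn.Linear, rank=4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                     # keep the foundation weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = nn.Parameter(torch.ones(rank))     # per-rank "dynamic" scaling vector

    def forward(self, x):
        delta = (x @ self.A.t()) * self.scale @ self.B.t()
        return self.base(x) + delta

layer = DVLoRALinear(nn.Linear(768, 768), rank=4)
y = layer(torch.randn(2, 10, 768))
print(y.shape)  # torch.Size([2, 10, 768])
```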
EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera
[ "Cui, Beilei", "Islam, Mobarakol", "Bai, Long", "Wang, An", "Ren, Hongliang" ]
Conference
2405.08672
[ "https://github.com/BeileiCui/EndoDAC" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
182
null
https://papers.miccai.org/miccai-2024/paper/0254_paper.pdf
@InProceedings{ Sun_Continually_MICCAI2024, author = { Sun, Yihua and Khor, Hee Guan and Wang, Yuanzheng and Wang, Zhuhao and Zhao, Hongliang and Zhang, Yu and Ma, Longfei and Zheng, Zhuozhao and Liao, Hongen }, title = { { Continually Tuning a Large Language Model for Multi-domain Radiology Report Generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15005 }, month = {October}, pages = { pending }, }
Large language models (LLMs) have demonstrated potential across various tasks, including vision-language applications like chest X-ray (XR) report generation (RG) in healthcare. Recent RG approaches focus on optimizing model performance for a single dataset with a single XR modality, often neglecting the critical area of computed tomography (CT) report generation. The challenge is compounded by medical datasets being isolated across different centers, making comprehensive collection difficult. Furthermore, LLMs trained on datasets sequentially can experience catastrophic forgetting. In this paper, we move beyond conventional approaches of training on a single dataset, and focus on improving the overall performance on sequentially collected multi-center datasets. We incorporate four datasets with diverse languages and image modalities for the experiments. Our approach utilizes a minimal number of task-specific learnable weights within an LLM-based RG method for each domain, maintaining the majority of weights frozen to avoid forgetting. Utilizing LLMs’ multilingual generalizability, we align models and facilitate knowledge sharing through a multi-label supervised contrastive loss within the LLM hidden space. We design a 2D-3D adapter for the image encoder to transfer from XR to CT RG tasks. A CT disease graph is established for transferring knowledge from XR to CT RG tasks, using CT’s most relevant XR disease class centers in a triplet loss. Extensive experiments validate our design.
Continually Tuning a Large Language Model for Multi-domain Radiology Report Generation
[ "Sun, Yihua", "Khor, Hee Guan", "Wang, Yuanzheng", "Wang, Zhuhao", "Zhao, Hongliang", "Zhang, Yu", "Ma, Longfei", "Zheng, Zhuozhao", "Liao, Hongen" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
183
null
https://papers.miccai.org/miccai-2024/paper/0793_paper.pdf
@InProceedings{ Li_Semisupervised_MICCAI2024, author = { Li, Haoshen and Wang, Yirui and Zhu, Jie and Guo, Dazhou and Yu, Qinji and Yan, Ke and Lu, Le and Ye, Xianghua and Zhang, Li and Wang, Qifeng and Jin, Dakai }, title = { { Semi-supervised Lymph Node Metastasis Classification with Pathology-guided Label Sharpening and Two-streamed Multi-scale Fusion } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15011 }, month = {October}, pages = { pending }, }
Diagnosis of lymph node (LN) metastasis in CT scans is an essential yet challenging task for esophageal cancer staging and treatment planning. Deep learning methods can potentially address this issue by learning from large-scale, accurately labeled data. However, even for highly experienced physicians, only a portion of LN metastases can be accurately determined in CT. Previous work conducted supervised training with a relatively small number of annotated LNs and achieved limited performance. In our work, we leverage the teacher-student semi-supervised paradigm and explore the potential of using a large amount of unlabeled LNs in performance improvement. For unlabeled LNs, pathology reports can indicate the presence of LN metastases within the lymph node station (LNS). Hence, we propose a pathology-guided label sharpening loss by combining the metastasis status of LNS from pathology reports with predictions of the teacher model. This combination assigns pseudo labels for LNs with high confidence and then the student model is updated for better performance. Besides, to improve the initial performance of the teacher model, we propose a two-stream multi-scale feature fusion deep network that effectively fuses the local and global LN characteristics to learn from labeled LNs. Extensive four-fold cross-validation is conducted on a patient cohort of 1052 esophageal cancer patients with corresponding pathology reports and 9961 LNs (3635 labeled and 6326 unlabeled). The results demonstrate that our proposed method markedly outperforms previous state-of-the-art methods by 2.95% (from 90.23% to 93.18%) in terms of the area under the receiver operating characteristic curve (AUROC) metric on this challenging task.
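A minimal sketch of how station-level pathology status could be combined with teacher confidence to sharpen pseudo labels is given below; the thresholds and the ignore-label convention are assumptions, not values from the paper.

```python
import torch

def sharpen_labels(teacher_probs, station_has_metastasis, pos_thr=0.9, neg_thr=0.1):
    """Illustrative pseudo-labeling rule (thresholds are assumptions):
    - if the pathology report says the station is metastasis-free, all its LNs are negative;
    - otherwise, keep only teacher predictions that are confident enough."""
    pseudo = torch.full_like(teacher_probs, -1.0)        # -1 marks "ignore"
    pseudo[~station_has_metastasis] = 0.0                # report rules out metastasis
    confident_pos = station_has_metastasis & (teacher_probs >= pos_thr)
    confident_neg = station_has_metastasis & (teacher_probs <= neg_thr)
    pseudo[confident_pos] = 1.0
    pseudo[confident_neg] = 0.0
    return pseudo

probs = torch.tensor([0.95, 0.40, 0.05, 0.80])
station = torch.tensor([True, True, True, False])
print(sharpen_labels(probs, station))   # tensor([ 1., -1.,  0.,  0.])
```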
Semi-supervised Lymph Node Metastasis Classification with Pathology-guided Label Sharpening and Two-streamed Multi-scale Fusion
[ "Li, Haoshen", "Wang, Yirui", "Zhu, Jie", "Guo, Dazhou", "Yu, Qinji", "Yan, Ke", "Lu, Le", "Ye, Xianghua", "Zhang, Li", "Wang, Qifeng", "Jin, Dakai" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
184
null
https://papers.miccai.org/miccai-2024/paper/2542_paper.pdf
@InProceedings{ Xia_Conditional_MICCAI2024, author = { Xiao, Qing and Yoon, Siyeop and Ren, Hui and Tivnan, Matthew and Sun, Lichao and Li, Quanzheng and Liu, Tianming and Zhang, Yu and Li, Xiang }, title = { { Conditional Score-Based Diffusion Model for Cortical Thickness Trajectory Prediction } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Alzheimer’s Disease (AD) is a neurodegenerative condition characterized by diverse progression rates among individuals, with changes in cortical thickness (CTh) closely linked to its progression. Accurately forecasting CTh trajectories can significantly enhance early diagnosis and intervention strategies, providing timely care. However, the longitudinal data essential for these studies often suffer from temporal sparsity and incompleteness, presenting substantial challenges in modeling the disease’s progression accurately. Existing methods are limited, focusing primarily on datasets without missing entries or requiring predefined assumptions about CTh progression. To overcome these obstacles, we propose a conditional score-based diffusion model specifically designed to generate CTh trajectories with the given baseline information, such as age, sex, and initial diagnosis. Our conditional diffusion model utilizes all available data during the training phase to make predictions based solely on baseline information during inference without needing prior history about CTh progression. The prediction accuracy of the proposed CTh prediction pipeline using a conditional score-based model was compared for sub-groups consisting of cognitively normal, mild cognitive impairment, and AD subjects. The Bland-Altman analysis shows our diffusion-based prediction model has a near-zero bias with a narrow 95% confidence interval compared to the ground-truth CTh in 6-36 months. In addition, our conditional diffusion model has a stochastic generative nature; therefore, we demonstrate an uncertainty analysis of patient-specific CTh prediction through multiple realizations. Our code is available at https://github.com/siyeopyoon/Diffusion-Cortical-Thickness-Trajectory.
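For readers unfamiliar with conditional denoising-diffusion training, the sketch below shows one DDPM-style training step in which baseline covariates are concatenated with the noisy trajectory; the network, dimensions, and noise schedule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# toy denoiser: predicts the noise added to a CTh trajectory, conditioned on
# baseline covariates (e.g. age, sex, diagnosis) -- purely illustrative dimensions
class Denoiser(nn.Module):
    def __init__(self, traj_dim=68, cond_dim=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, traj_dim))
    def forward(self, x_t, cond, t):
        return self.net(torch.cat([x_t, cond, t], dim=-1))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0, cond):
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise      # forward diffusion
    pred = model(x_t, cond, t.float().unsqueeze(-1) / T)
    return nn.functional.mse_loss(pred, noise)

model = Denoiser()
loss = training_step(model, torch.randn(8, 68), torch.randn(8, 3))
print(loss.item())
```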
Conditional Score-Based Diffusion Model for Cortical Thickness Trajectory Prediction
[ "Xiao, Qing", "Yoon, Siyeop", "Ren, Hui", "Tivnan, Matthew", "Sun, Lichao", "Li, Quanzheng", "Liu, Tianming", "Zhang, Yu", "Li, Xiang" ]
Conference
2403.06940
[ "https://github.com/siyeopyoon/Diffusion-Cortical-Thickness-Trajectory" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
185
null
https://papers.miccai.org/miccai-2024/paper/1531_paper.pdf
@InProceedings{ Bir_HUP3D_MICCAI2024, author = { Birlo, Manuel and Caramalau, Razvan and Edwards, Philip J. “Eddie” and Dromey, Brian and Clarkson, Matthew J. and Stoyanov, Danail }, title = { { HUP-3D: A 3D multi-view synthetic dataset for assisted-egocentric hand-ultrasound-probe pose estimation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
We present HUP-3D, a 3D multiview multimodal synthetic dataset for hand ultrasound (US) probe pose estimation in the context of obstetric ultrasound. Egocentric markerless 3D joint pose estimation has potential applications in mixed reality medical education. The ability to understand hand and probe movements opens the door to tailored guidance and mentoring applications. Our dataset consists of over 31k sets of RGB, depth, and segmentation mask frames, including pose-related reference data, with an emphasis on image diversity and complexity. Adopting a camera viewpoint-based sphere concept allows us to capture a variety of views and generate multiple hand grasp poses using a pre-trained network. Additionally, our approach includes a software-based image rendering concept, enhancing diversity with various hand and arm textures, lighting conditions, and background images. We validated our proposed dataset with state-of-the-art learning models and obtained the lowest hand-object keypoint errors. The supplementary material details the parameters for sphere-based camera view angles and the grasp generation and rendering pipeline configuration. The source code for our grasp generation and rendering pipeline, along with the dataset, is publicly available at https://manuelbirlo.github.io/HUP-3D/.
HUP-3D: A 3D multi-view synthetic dataset for assisted-egocentric hand-ultrasound-probe pose estimation
[ "Birlo, Manuel", "Caramalau, Razvan", "Edwards, Philip J. “Eddie”", "Dromey, Brian", "Clarkson, Matthew J.", "Stoyanov, Danail" ]
Conference
[ "https://github.com/manuelbirlo/US_GrabNet_grasp_generation" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
186
null
https://papers.miccai.org/miccai-2024/paper/2885_paper.pdf
@InProceedings{ Zhe_Misaligned_MICCAI2024, author = { Zheng, Jieyu and Li, Xiaojian and Mo, Hangjie and Li, Ling and Ma, Xiang }, title = { { Misaligned 3D Texture Optimization in MIS Utilizing Generative Framework } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
Three-dimensional reconstruction of the surgical area based on intraoperative laparoscopic videos can restore 2D information to 3D space, providing a solid technical foundation for many applications in computer-assisted surgery. SLAM methods often suffer from imperfect pose estimation and tissue motion, leading to the loss of original texture information. On the other hand, methods like Neural Radiance Fields and 3D Gaussian Splatting require offline processing and lack generalization capabilities. To overcome these limitations, we explore a texture optimization method that generates high-resolution, continuous textures. The method introduces a mechanism for transforming 3D point clouds into a 2D texture space and builds 2D registration and image fusion modules on a generative network architecture. Experimental results and comparisons with state-of-the-art techniques demonstrate the effectiveness of this method in preserving high-fidelity texture.
Misaligned 3D Texture Optimization in MIS Utilizing Generative Framework
[ "Zheng, Jieyu", "Li, Xiaojian", "Mo, Hangjie", "Li, Ling", "Ma, Xiang" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
187
null
https://papers.miccai.org/miccai-2024/paper/0787_paper.pdf
@InProceedings{ Yan_AllInOne_MICCAI2024, author = { Yang, Zhiwen and Chen, Haowei and Qian, Ziniu and Yi, Yang and Zhang, Hui and Zhao, Dan and Wei, Bingzheng and Xu, Yan }, title = { { All-In-One Medical Image Restoration via Task-Adaptive Routing } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
Although single-task medical image restoration (MedIR) has witnessed remarkable success, the limited generalizability of these methods poses a substantial obstacle to wider application. In this paper, we focus on the task of all-in-one medical image restoration, aiming to address multiple distinct MedIR tasks with a single universal model. Nonetheless, due to significant differences between different MedIR tasks, training a universal model often encounters task interference issues, where different tasks with shared parameters may conflict with each other in the gradient update direction. This task interference leads to deviation of the model update direction from the optimal path, thereby affecting the model’s performance. To tackle this issue, we propose a task-adaptive routing strategy, allowing conflicting tasks to select different network paths in spatial and channel dimensions, thereby mitigating task interference. Experimental results demonstrate that our proposed All-in-one Medical Image Restoration (AMIR) network achieves state-of-the-art performance in three MedIR tasks: MRI super-resolution, CT denoising, and PET synthesis, both in single-task and all-in-one settings. The code and data will be available at https://github.com/Yaziwel/AMIR (https://github.com/Yaziwel/All-In-One-Medical-Image-Restoration-via-Task-Adaptive-Routing.git).
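The routing strategy itself is not specified in the abstract; as a hedged sketch, a per-task soft gate over parallel convolutional paths captures the general idea of letting conflicting tasks favour different parameters (module names and the softmax gating are assumptions).

```python
import torch
import torch.nn as nn

class TaskAdaptiveRouter(nn.Module):
    """Illustrative routing block: each task learns its own soft gate over a set
    of parallel conv paths, so conflicting tasks can favour different parameters."""
    def __init__(self, channels=64, num_paths=3, num_tasks=3):
        super().__init__()
        self.paths = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_paths)])
        self.gates = nn.Parameter(torch.zeros(num_tasks, num_paths))  # per-task logits

    def forward(self, x, task_id):
        weights = torch.softmax(self.gates[task_id], dim=0)           # (num_paths,)
        return sum(w * path(x) for w, path in zip(weights, self.paths))

router = TaskAdaptiveRouter()
feat = torch.randn(1, 64, 32, 32)
out_sr, out_denoise = router(feat, task_id=0), router(feat, task_id=1)
print(out_sr.shape)  # torch.Size([1, 64, 32, 32])
```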
All-In-One Medical Image Restoration via Task-Adaptive Routing
[ "Yang, Zhiwen", "Chen, Haowei", "Qian, Ziniu", "Yi, Yang", "Zhang, Hui", "Zhao, Dan", "Wei, Bingzheng", "Xu, Yan" ]
Conference
2405.19769
[ "https://github.com/Yaziwel/All-In-One-Medical-Image-Restoration-via-Task-Adaptive-Routing.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
188
null
https://papers.miccai.org/miccai-2024/paper/2053_paper.pdf
@InProceedings{ Isl_ANovel_MICCAI2024, author = { Islam, Saahil and Murthy, Venkatesh N. and Neumann, Dominik and Cimen, Serkan and Sharma, Puneet and Maier, Andreas and Comaniciu, Dorin and Ghesu, Florin C. }, title = { { A Novel Tracking Framework for Devices in X-ray Leveraging Supplementary Cue-Driven Self-Supervised Features } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15006 }, month = {October}, pages = { pending }, }
To restore proper blood flow in blocked coronary arteries via an angioplasty procedure, accurate placement of devices such as catheters, balloons, and stents under live fluoroscopy or diagnostic angiography is crucial. Identified balloon markers help in enhancing stent visibility in X-ray sequences, while the catheter tip aids in precise navigation and co-registering vessel structures, reducing the need for contrast in angiography. However, accurate detection of these devices in interventional X-ray sequences faces significant challenges, particularly due to occlusions from contrasted vessels and other devices, as well as distractions from the surrounding anatomy, resulting in the failure to track such small objects. While most tracking methods rely on spatial correlation of past and current appearance, they often lack the strong motion comprehension essential for navigating through these challenging conditions, and fail to effectively detect multiple instances in the scene. To overcome these limitations, we propose a self-supervised learning approach that enhances spatio-temporal understanding by incorporating supplementary cues and learning across multiple representation spaces on a large dataset. Building on this, we introduce a generic real-time tracking framework that effectively leverages the pretrained spatio-temporal network and also takes the historical appearance and trajectory data into account. This results in enhanced localization of multiple instances of device landmarks. Our method outperforms state-of-the-art methods in interventional X-ray device tracking, especially in stability and robustness, achieving an 87% reduction in max error for balloon marker detection and a 61% reduction in max error for catheter tip detection.
A Novel Tracking Framework for Devices in X-ray Leveraging Supplementary Cue-Driven Self-Supervised Features
[ "Islam, Saahil", "Murthy, Venkatesh N.", "Neumann, Dominik", "Cimen, Serkan", "Sharma, Puneet", "Maier, Andreas", "Comaniciu, Dorin", "Ghesu, Florin C." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
189
null
https://papers.miccai.org/miccai-2024/paper/2125_paper.pdf
@InProceedings{ Dom_Diffusion_MICCAI2024, author = { Domínguez, Marina and Velikova, Yordanka and Navab, Nassir and Azampour, Mohammad Farid }, title = { { Diffusion as Sound Propagation: Physics-inspired Model for Ultrasound Image Generation } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Deep learning (DL) methods typically require large datasets to effectively learn data distributions. However, in the medical field, data is often limited in quantity, and acquiring labeled data can be costly. To mitigate this data scarcity, data augmentation techniques are commonly employed. Among these techniques, generative models play a pivotal role in expanding datasets. However, when it comes to ultrasound (US) imaging, the authenticity of generated data often diminishes due to the oversight of ultrasound physics.
Diffusion as Sound Propagation: Physics-inspired Model for Ultrasound Image Generation
[ "Domínguez, Marina", "Velikova, Yordanka", "Navab, Nassir", "Azampour, Mohammad Farid" ]
Conference
2407.05428
[ "https://github.com/marinadominguez/diffusion-for-us-images" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Oral
190
null
https://papers.miccai.org/miccai-2024/paper/2245_paper.pdf
@InProceedings{ Li_Nonrigid_MICCAI2024, author = { Li, Qi and Shen, Ziyi and Yang, Qianye and Barratt, Dean C. and Clarkson, Matthew J. and Vercauteren, Tom and Hu, Yipeng }, title = { { Nonrigid Reconstruction of Freehand Ultrasound without a Tracker } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15004 }, month = {October}, pages = { pending }, }
Reconstructing 2D freehand Ultrasound (US) frames into 3D space without using a tracker has recently seen advances with deep learning. Predicting good frame-to-frame rigid transformations is often accepted as the learning objective, especially when the ground-truth labels from spatial tracking devices are inherently rigid transformations. Motivated by a) the observed nonrigid deformation due to soft tissue motion during scanning, and b) the highly sensitive prediction of rigid transformation, this study investigates the methods and their benefits in predicting nonrigid transformations for reconstructing 3D US. We propose a novel co-optimisation algorithm for simultaneously estimating rigid transformations among US frames, supervised by ground-truth from a tracker, and a nonrigid deformation, optimised by a regularised registration network. We show that these two objectives can be either optimised using meta-learning or combined by weighting. A fast scattered data interpolation is also developed for enabling frequent reconstruction and registration of non-parallel US frames during training. With a new data set containing over 357,000 frames in 720 scans, acquired from 60 subjects, the experiments demonstrate that, due to an expanded and thus easier-to-optimise solution space, the generalisation is improved with the added deformation estimation, with respect to the rigid ground-truth. The global pixel reconstruction error (assessing accumulative prediction) is lowered from 18.48 to 16.51 mm, compared with baseline rigid-transformation-predicting methods. Using manually identified landmarks, the proposed co-optimisation also shows potential for compensating nonrigid tissue motion at inference, which is not measurable by tracker-provided ground-truth. The code and data used in this paper are made publicly available at https://github.com/QiLi111/NR-Rec-FUS.
Nonrigid Reconstruction of Freehand Ultrasound without a Tracker
[ "Li, Qi", "Shen, Ziyi", "Yang, Qianye", "Barratt, Dean C.", "Clarkson, Matthew J.", "Vercauteren, Tom", "Hu, Yipeng" ]
Conference
2407.05767
[ "https://github.com/QiLi111/NR-Rec-FUS" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
191
null
https://papers.miccai.org/miccai-2024/paper/2750_paper.pdf
@InProceedings{ Hua_Finegrained_MICCAI2024, author = { Huang, Yijin and Cheng, Pujin and Tam, Roger and Tang, Xiaoying }, title = { { Fine-grained Prompt Tuning: A Parameter and Memory Efficient Transfer Learning Method for High-resolution Medical Image Classification } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Parameter-efficient transfer learning (PETL) is proposed as a cost-effective way to transfer pre-trained models to downstream tasks, avoiding the high cost of updating entire large-scale pre-trained models (LPMs). In this work, we present Fine-grained Prompt Tuning (FPT), a novel PETL method for medical image classification. FPT significantly reduces memory consumption compared to other PETL methods, especially in high-resolution input contexts. To achieve this, we first freeze the weights of the LPM and construct a learnable lightweight side network. The frozen LPM takes high-resolution images as input to extract fine-grained features, while the side network is fed low-resolution images to reduce memory usage. To allow the side network to access pre-trained knowledge, we introduce fine-grained prompts that summarize information from the LPM through a fusion module. Important token selection and preloading techniques are employed to further reduce training cost and memory requirements. We evaluate FPT on four medical datasets with varying sizes, modalities, and complexities. Experimental results demonstrate that FPT achieves comparable performance to fine-tuning the entire LPM while using only 1.8% of the learnable parameters and 13% of the memory costs of an encoder ViT-B model with a 512 x 512 input resolution.
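A compact sketch of the overall layout described above (frozen high-resolution LPM, low-resolution side network, and learnable prompts that summarise frozen features) is shown below; all module sizes, the pooling scheme, and the classification head are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FPTSketch(nn.Module):
    """Assumed layout: the frozen large model sees the high-res tokens (no gradients),
    a small trainable side network sees a low-res copy, and learnable prompts
    query the frozen features to pass fine-grained information across."""
    def __init__(self, lpm: nn.Module, dim=768, side_dim=192, n_prompts=16, n_cls=2):
        super().__init__()
        self.lpm = lpm.eval()
        for p in self.lpm.parameters():
            p.requires_grad_(False)
        self.prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        self.fuse = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.side = nn.Sequential(nn.Linear(side_dim + dim, side_dim), nn.GELU())
        self.head = nn.Linear(side_dim, n_cls)

    def forward(self, tokens_hr, tokens_lr):
        with torch.no_grad():                      # frozen path: no activation storage
            feats = self.lpm(tokens_hr)            # (B, N_hr, dim)
        p = self.prompts.expand(feats.size(0), -1, -1)
        summary, _ = self.fuse(p, feats, feats)    # prompts summarise frozen features
        pooled = torch.cat([tokens_lr.mean(1), summary.mean(1)], dim=-1)
        return self.head(self.side(pooled))

lpm = nn.Linear(768, 768)                          # stand-in for a frozen ViT encoder
model = FPTSketch(lpm)
logits = model(torch.randn(2, 1024, 768), torch.randn(2, 256, 192))
print(logits.shape)  # torch.Size([2, 2])
```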
Fine-grained Prompt Tuning: A Parameter and Memory Efficient Transfer Learning Method for High-resolution Medical Image Classification
[ "Huang, Yijin", "Cheng, Pujin", "Tam, Roger", "Tang, Xiaoying" ]
Conference
2403.07576
[ "https://github.com/YijinHuang/FPT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
192
null
https://papers.miccai.org/miccai-2024/paper/3416_paper.pdf
@InProceedings{ Siv_LiverUSRecon_MICCAI2024, author = { Sivayogaraj, Kaushalya and Guruge, Sahan I. T. and Liyanage, Udari A. and Udupihille, Jeevani J. and Jayasinghe, Saroj and Fernando, Gerard M. X. and Rodrigo, Ranga and Liyanaarachchi, Rukshani }, title = { { LiverUSRecon: Automatic 3D Reconstruction and Volumetry of the Liver with a Few Partial Ultrasound Scans } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15007 }, month = {October}, pages = { pending }, }
3D reconstruction of the liver for volumetry is important for qualitative analysis and disease diagnosis. Liver volumetry using ultrasound (US) scans, although advantageous due to shorter acquisition time and safety, is challenging due to the inherent noisiness in US scans, blurry boundaries, and partial liver visibility. We address these challenges by using the segmentation masks of a few incomplete sagittal-plane US scans of the liver in conjunction with a statistical shape model (SSM) built using a set of CT scans of the liver. We compute the shape parameters needed to warp this canonical SSM to fit the US scans through a parametric regression network. The resulting 3D liver reconstruction is accurate and leads to automatic liver volume calculation. We evaluate the accuracy of the estimated liver volumes with respect to CT segmentation volumes using RMSE. Our volume computation is statistically much closer to the volume estimated using CT scans than the volume computed using Childs’ method by radiologists: a p-value of 0.094 (> 0.05) indicates no significant difference between CT segmentation volumes and ours, in contrast to Childs’ method. We validate our method with ablation studies on the US image resolution, the number of CT scans used for SSM, the number of principal components, and the number of input US scans. To the best of our knowledge, this is the first automatic liver volumetry system using a few incomplete US scans given a set of CT scans of livers for SSM. Code and models are available at https://diagnostics4u.github.io/
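The core SSM step, warping the canonical model as a mean shape plus a weighted sum of principal modes, can be written in a few lines; the toy dimensions below are assumptions, and the regression network that predicts the coefficients is omitted.

```python
import numpy as np

def reconstruct_shape(mean_shape, modes, coeffs):
    """Warp the canonical statistical shape model:
    shape = mean + sum_k coeffs[k] * modes[k].
    mean_shape: (3N,), modes: (K, 3N), coeffs: (K,) predicted by the regressor."""
    return mean_shape + coeffs @ modes

# toy SSM with 1000 surface vertices and 5 principal components
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=3000)
modes = rng.normal(size=(5, 3000))
coeffs = np.array([1.2, -0.5, 0.3, 0.0, 0.1])   # would come from the regression network
verts = reconstruct_shape(mean_shape, modes, coeffs).reshape(-1, 3)
print(verts.shape)   # (1000, 3)
```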
LiverUSRecon: Automatic 3D Reconstruction and Volumetry of the Liver with a Few Partial Ultrasound Scans
[ "Sivayogaraj, Kaushalya", "Guruge, Sahan I. T.", "Liyanage, Udari A.", "Udupihille, Jeevani J.", "Jayasinghe, Saroj", "Fernando, Gerard M. X.", "Rodrigo, Ranga", "Liyanaarachchi, Rukshani" ]
Conference
2406.19336
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
193
null
https://papers.miccai.org/miccai-2024/paper/1234_paper.pdf
@InProceedings{ Hwa_Improving_MICCAI2024, author = { Hwang, Joonil and Park, Sangjoon and Park, NaHyeon and Cho, Seungryong and Kim, Jin Sung }, title = { { Improving cone-beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
In radiation therapy (RT), the reliance on pre-treatment computed tomography (CT) images encounters challenges due to anatomical changes, necessitating adaptive planning. Daily cone-beam CT (CBCT) imaging, pivotal for therapy adjustment, falls short in tissue density accuracy. To address this, our innovative approach integrates diffusion models for CT image generation, offering precise control over data synthesis. Leveraging a self-training method with knowledge distillation, we maximize CBCT data during therapy, complemented by sparse paired fan-beam CTs. This strategy, incorporated into state-of-the-art diffusion-based models, surpasses conventional methods like Pix2pix and CycleGAN. A meticulously curated dataset of 2800 paired CBCT and CT scans, supplemented by 4200 CBCT scans, undergoes preprocessing and teacher model training, including the Brownian Bridge Diffusion Model (BBDM). Pseudo-label CT images are generated, resulting in a dataset combining 5600 CT images with corresponding CBCT images. Thorough evaluation using MSE, SSIM, PSNR and LPIPS demonstrates superior performance against Pix2pix and CycleGAN. Our approach shows promise in generating high-quality CT images from CBCT scans in RT.
Improving cone-beam CT Image Quality with Knowledge Distillation-Enhanced Diffusion Model in Imbalanced Data Settings
[ "Hwang, Joonil", "Park, Sangjoon", "Park, NaHyeon", "Cho, Seungryong", "Kim, Jin Sung" ]
Conference
2409.12539
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
194
null
https://papers.miccai.org/miccai-2024/paper/2857_paper.pdf
@InProceedings{ Zha_Deeplearningbased_MICCAI2024, author = { Zhang, Yi and Zhao, Yidong and Huang, Lu and Xia, Liming and Tao, Qian }, title = { { Deep-learning-based groupwise registration for motion correction of cardiac T1 mapping } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Quantitative T1 mapping by MRI is an increasingly important tool for clinical assessment of cardiovascular diseases. The cardiac T1 map is derived by fitting a known signal model to a series of baseline images, while the quality of this map can be degraded by involuntary respiratory and cardiac motion. To correct motion, a template image is often needed to register all baseline images, but the choice of template is nontrivial, leading to inconsistent performance sensitive to image contrast. In this work, we propose a novel deep-learning-based groupwise registration framework, which omits the need for a template, and registers all baseline images simultaneously. We design two groupwise losses for this registration framework: the first is a linear principal component analysis (PCA) loss that enforces alignment of baseline images irrespective of the intensity variation, and the second is an auxiliary relaxometry loss that enforces adherence of the intensity profile to the signal model. We extensively evaluated our method, termed “PCA-Relax”, and other baseline methods on an in-house cardiac MRI dataset including both pre- and post-contrast T1 sequences. All methods were evaluated under three distinct training-and-evaluation strategies, namely, standard, one-shot, and test-time-adaptation. The proposed PCA-Relax showed further improved performance of registration and mapping over well-established baselines. The proposed groupwise framework is generic and can be adapted to applications involving multiple images.
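A possible form of the linear PCA groupwise loss, penalising intensity energy outside a low-rank subspace of the stacked baseline images, is sketched below; the rank, normalisation, and use of singular values are assumptions rather than the paper's exact definition.

```python
import torch

def pca_groupwise_loss(images, k=1):
    """Illustrative PCA alignment loss: stack the N baseline images as rows,
    centre them, and penalise the energy outside the top-k principal components."""
    n = images.shape[0]
    x = images.reshape(n, -1)
    x = x - x.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(x)              # singular values, descending
    energy = s ** 2
    return energy[k:].sum() / (energy.sum() + 1e-8)

frames = torch.rand(8, 128, 128, requires_grad=True)   # one T1-mapping series
loss = pca_groupwise_loss(frames)
loss.backward()                                          # differentiable, so usable for training
print(float(loss))
```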
Deep-learning-based groupwise registration for motion correction of cardiac T1 mapping
[ "Zhang, Yi", "Zhao, Yidong", "Huang, Lu", "Xia, Liming", "Tao, Qian" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
195
null
https://papers.miccai.org/miccai-2024/paper/2884_paper.pdf
@InProceedings{ Xia_IMGGCN_MICCAI2024, author = { Xia, Jing and Chan, Yi Hao and Girish, Deepank and Rajapakse, Jagath C. }, title = { { IMG-GCN: Interpretable Modularity-Guided Structure-Function Interactions Learning for Brain Cognition and Disorder Analysis } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15010 }, month = {October}, pages = { pending }, }
Brain structure-function interaction is crucial for cognition and brain disorder analysis, and it is inherently more complex than a simple region-to-region coupling. It exhibits homogeneity at the modular level, with regions of interest (ROIs) within the same module showing more similar neural mechanisms than those across modules. Leveraging modular-level guidance to capture complex structure-function interactions is essential, but such studies are still scarce. Therefore, we propose an interpretable modularity-guided graph convolution network (IMG-GCN) to extract the structure-function interactions across ROIs and highlight the most discriminative interactions relevant to fluid cognition and Parkinson’s disease (PD). Specifically, we design a modularity-guided interactive network that defines modularity-specific convolution operation to learn interactions between structural and functional ROIs according to modular homogeneity. Then, an MLP-based attention model is introduced to identify the most contributed interactions. The interactions are inserted as edges linking structural and functional ROIs to construct a unified combined graph, and GCN is applied for final tasks. Experiments on HCP and PPMI datasets indicate that our proposed method outperforms state-of-the-art multi-model methods in fluid cognition prediction and PD classification. The attention maps reveal that the frontoparietal and default mode structures interacting with visual function are discriminative for fluid cognition, while the subcortical structures interacting with widespread functional modules are associated with PD.
IMG-GCN: Interpretable Modularity-Guided Structure-Function Interactions Learning for Brain Cognition and Disorder Analysis
[ "Xia, Jing", "Chan, Yi Hao", "Girish, Deepank", "Rajapakse, Jagath C." ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
196
null
https://papers.miccai.org/miccai-2024/paper/2230_paper.pdf
@InProceedings{ Son_DINOReg_MICCAI2024, author = { Song, Xinrui and Xu, Xuanang and Yan, Pingkun }, title = { { DINO-Reg: General Purpose Image Encoder for Training-free Multi-modal Deformable Medical Image Registration } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15002 }, month = {October}, pages = { pending }, }
Existing medical image registration algorithms rely on either dataset-specific training or local texture-based features to align images. The former cannot be reliably implemented without large modality-specific training datasets, while the latter lacks global semantics and thus could be easily trapped at local minima. In this paper, we present a training-free deformable image registration method, DINO-Reg, leveraging the general purpose image encoder for image feature extraction. The DINOv2 encoder was trained using the ImageNet data containing natural images, but the encoder’s ability to capture semantic information is generalizable even to unseen domains. We present a training-free deep learning-based deformable medical image registration framework based on the DINOv2 encoder. With such semantically rich features, our method can achieve accurate coarse-to-fine registration through simple feature pairing and conventional gradient descent optimization. We conducted a series of experiments to understand the behavior and role of such a general purpose image encoder in the application of image registration. Our method shows state-of-the-art performance in multiple registration datasets. To our knowledge, this is the first application of general vision foundation models in medical image registration.
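The abstract's "feature pairing and conventional gradient descent optimization" can be illustrated with a toy routine that optimises a dense displacement field so that warped moving-image features match fixed-image features; the smoothness term, optimiser, and shapes are assumptions, and the feature extraction (e.g. a DINOv2 forward pass) is assumed to happen upstream.

```python
import torch
import torch.nn.functional as F

def register_features(feat_fix, feat_mov, iters=100, lr=0.1, reg_w=1.0):
    """Toy instance of training-free registration over feature maps of shape (1, C, H, W)."""
    _, _, h, w = feat_fix.shape
    disp = torch.zeros(1, 2, h, w, requires_grad=True)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)       # (1, H, W, 2), x then y
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(iters):
        grid = base_grid + disp.permute(0, 2, 3, 1)
        warped = F.grid_sample(feat_mov, grid, align_corners=True)
        smooth = disp.diff(dim=2).abs().mean() + disp.diff(dim=3).abs().mean()
        loss = F.mse_loss(warped, feat_fix) + reg_w * smooth
        opt.zero_grad(); loss.backward(); opt.step()
    return disp.detach()

disp = register_features(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32), iters=10)
print(disp.shape)   # torch.Size([1, 2, 32, 32])
```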
DINO-Reg: General Purpose Image Encoder for Training-free Multi-modal Deformable Medical Image Registration
[ "Song, Xinrui", "Xu, Xuanang", "Yan, Pingkun" ]
Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
197
null
https://papers.miccai.org/miccai-2024/paper/3680_paper.pdf
@InProceedings{ Stu_SynCellFactory_MICCAI2024, author = { Sturm, Moritz and Cerrone, Lorenzo and Hamprecht, Fred A. }, title = { { SynCellFactory: Generative Data Augmentation for Cell Tracking } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15012 }, month = {October}, pages = { pending }, }
Cell tracking remains a pivotal yet challenging task in biomedical research. The full potential of deep learning for this purpose is often untapped due to the limited availability of comprehensive and varied training data sets. In this paper, we present SynCellFactory, a generative method for cell video augmentation. At the heart of SynCellFactory lies the ControlNet architecture, which has been fine-tuned to synthesize cell imagery with photorealistic accuracy in style and motion patterns. This technique enables the creation of synthetic, annotated cell videos that mirror the complexity of authentic microscopy time-lapses. Our experiments demonstrate that SynCellFactory boosts the performance of well-established deep learning models for cell tracking, particularly when original training data is sparse.
SynCellFactory: Generative Data Augmentation for Cell Tracking
[ "Sturm, Moritz", "Cerrone, Lorenzo", "Hamprecht, Fred A." ]
Conference
2404.16421
[ "https://github.com/sciai-lab/SynCellFactory" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
198
null
https://papers.miccai.org/miccai-2024/paper/2943_paper.pdf
@InProceedings{ Ram_Geometric_MICCAI2024, author = { Ramesh, Jayroop and Dinsdale, Nicola and Yeung, Pak-Hei and Namburete, Ana I. L. }, title = { { Geometric Transformation Uncertainty for Improving 3D Fetal Brain Pose Prediction from Freehand 2D Ultrasound Videos } }, booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024}, year = {2024}, publisher = {Springer Nature Switzerland}, volume = { LNCS 15001 }, month = {October}, pages = { pending }, }
Accurately localizing two-dimensional (2D) ultrasound (US) fetal brain images in the 3D brain, using minimal computational resources, is an important task for automated US analysis of fetal growth and development. We propose an uncertainty-aware deep learning model for automated 3D plane localization in 2D fetal brain images. Specifically, a multi-head network is trained to jointly regress 3D plane pose from 2D images in terms of different geometric transformations. The model explicitly learns to predict uncertainty to allocate higher weight to inputs with low variances across different transformations to improve performance. Our proposed method, QAERTS, demonstrates superior pose estimation accuracy compared with the state-of-the-art and most of the uncertainty-based approaches, leading to 9% improvement on plane angle (PA) for localization accuracy, and 8% on normalized cross-correlation (NCC) for sampled image quality. QAERTS also demonstrates efficiency, containing 5× fewer parameters than the ensemble-based approach, making it advantageous in resource-constrained settings. In addition, QAERTS proves to be more robust to noise effects observed in freehand US scanning by leveraging rotational discontinuities and explicit output uncertainties.
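The uncertainty-weighted combination of per-head predictions described above can be illustrated with a simple inverse-variance fusion; treating each head's output as a Gaussian with a predicted log-variance is an assumption made for this sketch.

```python
import torch

def fuse_predictions(means, log_vars):
    """Combine per-head pose predictions by inverse-variance weighting, so heads
    (geometric parameterisations) with low predicted uncertainty dominate.
    means, log_vars: (H, D) for H heads and a D-dimensional pose vector."""
    precision = torch.exp(-log_vars)                     # 1 / sigma^2
    weights = precision / precision.sum(dim=0, keepdim=True)
    return (weights * means).sum(dim=0)

means = torch.tensor([[0.10, 0.98], [0.14, 1.02], [0.60, 0.50]])
log_vars = torch.tensor([[-2.0, -2.0], [-2.0, -2.0], [2.0, 2.0]])  # third head is unsure
print(fuse_predictions(means, log_vars))   # stays close to the two confident heads
```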
Geometric Transformation Uncertainty for Improving 3D Fetal Brain Pose Prediction from Freehand 2D Ultrasound Videos
[ "Ramesh, Jayroop", "Dinsdale, Nicola", "Yeung, Pak-Hei", "Namburete, Ana I. L." ]
Conference
2405.13235
[ "https://github.com/jayrmh/QAERTS.git" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
Poster
199