Bioengineering, Vol. 12, Pages 738: M3AE-Distill: An Efficient Distilled Model for Medical Vision–Language Downstream Tasks


Bioengineering doi: 10.3390/bioengineering12070738

Authors: Xudong Liang, Jiang Xie, Mengfei Zhang, Zhuo Bi

The multi-modal masked autoencoder (M3AE) is a widely studied medical vision–language (VL) model that can be applied to various clinical tasks. However, its large parameter count poses challenges for deployment in real-world settings. Knowledge distillation (KD) has proven effective for compressing task-specific uni-modal models, yet its application to medical VL backbone models during pre-training remains underexplored. To address this, M3AE-Distill, a lightweight medical VL model, is proposed to maintain high performance while improving efficiency. During pre-training, two key strategies are developed: (1) both hidden-state and attention-map distillation are employed to guide the student model, and (2) an attention-guided masking strategy is designed to enhance fine-grained image–text alignment. Extensive experiments on five medical VL datasets across three tasks validate the effectiveness of M3AE-Distill. Two student variants, M3AE-Distill-Small and M3AE-Distill-Base, are provided to support a flexible trade-off between efficiency and accuracy. M3AE-Distill-Base consistently outperforms existing models and achieves performance comparable to the teacher model, while delivering 2.11× and 2.61× speedups during inference and fine-tuning, respectively.
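To make the two pre-training strategies in the abstract concrete, the PyTorch-style sketch below illustrates (1) hidden-state and attention-map distillation and (2) attention-guided masking. It is a minimal sketch, not the paper's implementation: the abstract does not specify the layer-matching scheme, loss weighting, or whether the most- or least-attended regions are masked, so all function names, shapes, and those choices here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_hidden, teacher_hidden,
                      student_attn, teacher_attn, proj):
    """Schematic hidden-state + attention-map distillation.

    student_hidden: (B, L, d_s) last-layer hidden states of the student
    teacher_hidden: (B, L, d_t) matched teacher-layer hidden states
    student_attn / teacher_attn: (B, H, L, L) attention probabilities
                                 (same head count assumed for simplicity)
    proj: torch.nn.Linear(d_s, d_t) mapping student width to teacher width
    """
    # Hidden-state distillation: MSE after projecting the student's
    # representations into the teacher's embedding dimension.
    l_hidden = F.mse_loss(proj(student_hidden), teacher_hidden)

    # Attention-map distillation: KL divergence between the student's
    # and teacher's attention distributions over keys.
    l_attn = F.kl_div(torch.log(student_attn + 1e-8), teacher_attn,
                      reduction="batchmean")

    # Equal weighting is an illustrative choice, not the paper's.
    return l_hidden + l_attn

def attention_guided_mask(teacher_attn, mask_ratio=0.5):
    """Schematic attention-guided masking: score each token by the
    attention it receives from the teacher (averaged over heads and
    queries) and mask the highest-scoring tokens, so reconstruction
    focuses on salient regions. Masking direction and ratio are
    illustrative assumptions.

    teacher_attn: (B, H, L, L) attention probabilities
    returns: (B, L) boolean mask, True = masked
    """
    scores = teacher_attn.mean(dim=(1, 2))          # (B, L) received attention
    k = int(mask_ratio * scores.size(1))
    idx = scores.topk(k, dim=1).indices             # most-attended tokens
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask
```

In the actual M3AE-Distill pre-training, terms like these would be combined with M3AE's own masked-reconstruction and image–text objectives; the paper should be consulted for the exact formulation.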



Source: www.mdpi.com