BACKGROUND: In medical image retrieval, accurately retrieving relevant images directly affects clinical decision making and diagnosis. Traditional retrieval systems rely primarily on single-modality image data; current deep-hashing methods can learn complex feature representations, but their retrieval accuracy and efficiency are still hindered by modality diversity and limited sample sizes.
OBJECTIVE: To address these limitations, we propose the Deep Attention Fusion Hashing (DAFH) model, a novel deep learning-based hashing model that integrates attention mechanisms with multimodal medical imaging data.
METHODS: The DAFH model improves retrieval performance by integrating multimodal medical imaging data and employing attention mechanisms to optimize feature extraction. Using multimodal medical image data from The Cancer Imaging Archive (TCIA), this study constructed and trained a deep hashing network that achieves high-precision classification of various cancer types.
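The abstract does not specify the DAFH architecture in detail. The sketch below is a hypothetical illustration of the general attention-fusion hashing idea it describes: one encoder per modality, an attention module that weights each modality's features before fusion, and a tanh-relaxed hash layer whose sign yields the binary code. All layer sizes, the two-modality setup, and the fusion scheme here are assumptions, not the published DAFH design.

```python
# Minimal attention-fusion deep hashing sketch (illustrative only; layer sizes,
# the two-modality setup, and the fusion scheme are assumptions, not DAFH itself).
import torch
import torch.nn as nn


class AttentionFusionHashNet(nn.Module):
    def __init__(self, feat_dim: int = 512, hash_bits: int = 32):
        super().__init__()
        # One encoder per imaging modality (e.g., CT and MRI); identical
        # backbone shapes are assumed here for simplicity.
        self.encoder_a = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.encoder_b = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        # Attention scores weight each modality's contribution to the fused feature.
        self.attention = nn.Sequential(nn.Linear(feat_dim * 2, 2), nn.Softmax(dim=1))
        # Hash layer: tanh keeps activations in (-1, 1); sign() binarizes at retrieval time.
        self.hash_layer = nn.Linear(feat_dim, hash_bits)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        f_a, f_b = self.encoder_a(x_a), self.encoder_b(x_b)
        w = self.attention(torch.cat([f_a, f_b], dim=1))   # (batch, 2) modality weights
        fused = w[:, :1] * f_a + w[:, 1:] * f_b            # attention-weighted fusion
        return torch.tanh(self.hash_layer(fused))          # relaxed (continuous) hash codes


net = AttentionFusionHashNet(hash_bits=32)
# Binary codes for a toy batch of paired 64x64 single-channel images.
codes = torch.sign(net(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)))
```

In this kind of design, the tanh relaxation keeps training differentiable while the sign step at retrieval time produces compact binary codes that support fast Hamming-distance search.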
RESULTS: At hash code lengths of 16, 32, and 48 bits, the model attained mean average precision at 10 (MAP@10) values of 0.711, 0.754, and 0.762, respectively, highlighting the potential and advantages of the DAFH model for medical image retrieval.
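For reference, MAP@10 averages, over all queries, the average precision of each query's top 10 retrieved results. A minimal sketch of one common formulation follows; the relevance labels and rankings are placeholders, not data from the study.

```python
# One common MAP@10 formulation: per-query average precision over the top-10
# ranked results, averaged across queries. Inputs here are toy placeholders.
import numpy as np


def average_precision_at_k(relevant: np.ndarray, k: int = 10) -> float:
    """relevant: 1/0 relevance of ranked results, best match first."""
    relevant = relevant[:k]
    if relevant.sum() == 0:
        return 0.0
    # Precision at each rank position, counted only where the item is relevant.
    precision_at_i = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
    return float((precision_at_i * relevant).sum() / relevant.sum())


def map_at_k(all_rankings: list, k: int = 10) -> float:
    return float(np.mean([average_precision_at_k(r, k) for r in all_rankings]))


# Toy example: two queries with relevance-labelled top-10 rankings.
rankings = [np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0]),
            np.array([0, 1, 0, 0, 1, 0, 0, 0, 0, 0])]
print(map_at_k(rankings))  # mean of the two per-query average precisions
```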
CONCLUSIONS: The DAFH model delivers significant improvements in the efficiency and accuracy of medical image retrieval, making it a valuable tool for clinical settings.