Multimodal Medical Image Fusion Utilizing Two-scale Image Decomposition via Saliency Detection.
Kaur, Harmanpreet; Vig, Renu; Kumar, Naresh; Sharma, Apoorav; Dogra, Ayush; Goyal, Bhawna.
Affiliation
  • Kaur H; Department of Electronics and Communication Engineering, UIET, Panjab University, Chandigarh 160014, India.
  • Vig R; Department of Electronics and Communication Engineering, UIET, Panjab University, Chandigarh 160014, India.
  • Kumar N; Department of Electronics and Communication Engineering, UIET, Panjab University, Chandigarh 160014, India.
  • Sharma A; Department of Electronics and Communication Engineering, UIET, Panjab University, Chandigarh 160014, India.
  • Dogra A; Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India.
  • Goyal B; Department of UCRD and ECE, Chandigarh University, Mohali, Punjab 140413, India.
Curr Med Imaging; 20: 1-13, 2024.
Article in En | MEDLINE | ID: mdl-38389343
ABSTRACT

BACKGROUND:

Modern medical imaging modalities have many applications in the diagnosis of complicated diseases, revealing the internal anatomy and physiology of the body. The fundamental aim of medical image fusion is to increase an image's global and local contrast, enhance its visual impact, and convert it into a format better suited for computer processing or human viewing, while avoiding noise magnification and maintaining good real-time performance.

OBJECTIVE:

The primary goal is to combine data from images of different modalities (CT/MRI and MR-T1/MR-T2) into a single image that retains, to the greatest degree possible, the prominent features of the source images.

METHODS:

Clinical diagnostic accuracy is compromised because many classical fusion methods fail to preserve all the prominent features of the original images. Furthermore, complex implementation, high computation time, and large memory requirements are key problems of transform-domain methods. To address these problems, this research proposes a fusion framework for multimodal medical images that uses a multi-scale edge-preserving filter and visual saliency detection. The source images are decomposed by a two-scale edge-preserving filter into base and detail layers. Base layers are combined using the addition fusion rule, while detail layers are fused using weight maps constructed with the maximum symmetric surround saliency detection algorithm, as sketched below.
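The following is a minimal sketch (not the authors' code) of the pipeline described above, assuming co-registered grayscale inputs scaled to [0, 1]. A bilateral filter stands in for the two-scale edge-preserving decomposition, and the saliency step is a simplified approximation of maximum symmetric surround saliency; the specific filter parameters and helper names are illustrative assumptions.

```python
import cv2
import numpy as np

def two_scale_decompose(img, d=9, sigma_color=0.1, sigma_space=15):
    """Split an image into a smooth base layer and a detail layer
    using an edge-preserving (bilateral) filter."""
    base = cv2.bilateralFilter(img.astype(np.float32), d, sigma_color, sigma_space)
    detail = img - base
    return base, detail

def surround_saliency(img):
    """Simplified symmetric-surround saliency: distance of each slightly
    blurred pixel from the mean of a large local neighbourhood."""
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    # A large box filter approximates the symmetric surround mean.
    h, w = img.shape
    surround_mean = cv2.blur(img, (w // 4 * 2 + 1, h // 4 * 2 + 1))
    return np.abs(blurred - surround_mean)

def fuse(img_a, img_b, eps=1e-12):
    base_a, detail_a = two_scale_decompose(img_a)
    base_b, detail_b = two_scale_decompose(img_b)

    # Base layers: addition fusion rule (scaled by 0.5 here simply to keep
    # the result within the [0, 1] display range; a simplification).
    fused_base = 0.5 * (base_a + base_b)

    # Detail layers: weight maps derived from the saliency of each source.
    sal_a, sal_b = surround_saliency(img_a), surround_saliency(img_b)
    w_a = sal_a / (sal_a + sal_b + eps)
    fused_detail = w_a * detail_a + (1.0 - w_a) * detail_b

    return np.clip(fused_base + fused_detail, 0.0, 1.0)

# Usage: ct and mri are co-registered float32 arrays in [0, 1].
# fused = fuse(ct, mri)
```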

RESULTS:

The image produced by the proposed method achieves better objective evaluation metrics than classical methods, preserves edge contours, offers higher global contrast, and is free of ringing effects and artifacts.

CONCLUSION:

The methodology offers a powerful, complementary set of clinical diagnostic, therapeutic, and biomedical research capabilities with the potential to considerably strengthen medical practice and biological understanding.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Algorithms / Magnetic Resonance Imaging Limits: Humans Language: En Journal: Curr Med Imaging Year: 2024 Document type: Article