Results 1 - 11 of 11
1.
J Imaging Inform Med ; 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39147889

ABSTRACT

Multi-modal medical image (MI) fusion generates composite images that collect complementary features from distinct images acquired under several conditions, helping physicians diagnose disease accurately. This research proposes a novel multi-modal MI fusion model, the guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach, to produce high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. A CT and MRI brain scan dataset supplies the input images. First, the input images are preprocessed to improve image quality and generalization ability for further analysis. The preprocessed CT and MRI images are then decomposed into detail and base components with a guided filter-based MI decomposition approach that involves two phases: acquiring the image guidance and decomposing the images with the guided filter. A Canny operator produces the image guidance, which comprises robust edges for the CT and MRI images, and the guided filter is applied to decompose the guidance and preprocessed images. The Trans-IMSM model then fuses the detail components, while a weighting approach is used for the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network to generate the final fused images for brain tumor detection. Extensive tests assess the Trans-IMSM method's efficacy. The evaluation results demonstrate its robustness and effectiveness, with an accuracy of 98.64% and an SSIM of 0.94.
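For illustration, the guided-filter decomposition step described in this abstract can be sketched as follows; the guided-filter implementation, the Canny thresholds, and the way the edge map is turned into a guidance image are assumptions for illustration, not the authors' code.

```python
# Sketch of guided-filter base/detail decomposition with Canny edge guidance.
# Parameters and the guidance construction are illustrative assumptions.
import cv2
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    """Classic guided filter (He et al.): edge-preserving smoothing of `src`
    steered by `guide`. Both inputs are float arrays in [0, 1]."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g ** 2
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)

def decompose(img_u8):
    """Split an 8-bit grayscale image into base (smooth) and detail components."""
    img = img_u8.astype(np.float32) / 255.0
    # Edge guidance from the Canny operator, blended with the image itself
    edges = cv2.Canny(img_u8, 50, 150).astype(np.float32) / 255.0
    guide = 0.5 * img + 0.5 * edges
    base = guided_filter(guide, img)
    detail = img - base          # detail layer = residual after smoothing
    return base, detail

# Usage: the base and detail layers of CT and MRI would then be fused separately,
# e.g. ct_base, ct_detail = decompose(cv2.imread("ct.png", cv2.IMREAD_GRAYSCALE))
```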

2.
J Imaging Inform Med ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020154

ABSTRACT

This paper presents an innovative automatic fusion imaging system that combines 3D CT/MR images with real-time ultrasound acquisition. The system eliminates the need for external physical markers and complex training, making image fusion feasible for physicians with different experience levels. The integrated system involves a portable 3D camera for patient-specific surface acquisition, an electromagnetic tracking system, and US components. The fusion algorithm comprises two main parts: skin segmentation and rigid co-registration, both integrated into the US machine. The co-registration aligns the surface extracted from CT/MR images with the 3D surface acquired by the camera, facilitating rapid and effective fusion. Experimental tests in different settings validate the system's accuracy, computational efficiency, noise robustness, and operator independence.
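As an illustration of the rigid co-registration step, a minimal SVD-based point-cloud alignment is sketched below; it assumes point correspondences are already available, whereas the actual system must establish them (for example with an ICP-style matching loop).

```python
# Sketch of rigid co-registration between two point clouds (e.g., the skin
# surface segmented from CT/MR and the surface from the 3D camera), using the
# SVD-based least-squares solution. Correspondences are assumed to be known.
import numpy as np

def rigid_align(source, target):
    """Return R (3x3) and t (3,) minimizing ||source @ R.T + t - target||."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Usage:
# R, t = rigid_align(ct_skin_points, camera_points)
# aligned = ct_skin_points @ R.T + t
```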

3.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000834

ABSTRACT

The fusion of multi-modal medical images has great significance for comprehensive diagnosis and treatment. However, the large differences between the various modalities of medical images make multi-modal medical image fusion a great challenge. This paper proposes a novel multi-scale fusion network based on multi-dimensional dynamic convolution and a residual hybrid transformer, which has better capability for feature extraction and context modeling and improves fusion performance. Specifically, the proposed network exploits multi-dimensional dynamic convolution, which introduces four attention mechanisms corresponding to four different dimensions of the convolutional kernel to extract more detailed information. Meanwhile, a residual hybrid transformer is designed, which activates more pixels to participate in the fusion process through channel attention, window attention, and overlapping cross attention, thereby strengthening the long-range dependencies between different modalities and enhancing global context information. A loss function including perceptual loss and structural similarity loss is designed, where the former enhances the visual realism and perceptual details of the fused image, and the latter enables the model to learn structural textures. The whole network adopts a multi-scale architecture and uses an unsupervised end-to-end method to realize multi-modal image fusion. Finally, our method is tested qualitatively and quantitatively on mainstream datasets. The fusion results indicate that our method achieves high scores on most quantitative indicators and satisfactory performance in visual qualitative analysis.
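A rough sketch of the described loss design (a perceptual term plus a structural similarity term) is given below; the VGG16 feature layer, the term weights, and the use of torchmetrics' SSIM are assumptions made purely for illustration.

```python
# Sketch of a perceptual + structural-similarity fusion loss, as described in
# the abstract above. Layer choice and weights are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights
from torchmetrics.image import StructuralSimilarityIndexMeasure

class FusionLoss(nn.Module):
    def __init__(self, ssim_weight=0.5):
        super().__init__()
        # Frozen VGG16 feature extractor up to relu3_3 for the perceptual term
        self.vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
        self.ssim_weight = ssim_weight

    def forward(self, fused, source):
        # fused/source: (B, 1, H, W) in [0, 1]; repeat to 3 channels for VGG
        f3, s3 = fused.repeat(1, 3, 1, 1), source.repeat(1, 3, 1, 1)
        perceptual = nn.functional.l1_loss(self.vgg(f3), self.vgg(s3))
        structural = 1.0 - self.ssim(fused, source)
        return perceptual + self.ssim_weight * structural

# In multi-modal fusion the loss would typically be evaluated against each
# source modality and summed, e.g. loss(fused, ct) + loss(fused, mri).
```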

4.
Comput Biol Med ; 179: 108771, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38970832

ABSTRACT

Multimodal medical image fusion fuses images with different modalities and provides more comprehensive and integrated diagnostic information. However, current multimodal image fusion methods cannot effectively model non-local contextual feature relationships, and because they directly aggregate the extracted features, they introduce unnecessary implicit noise into the fused images. To solve these problems, this paper proposes a novel dual-branch hybrid fusion network for medical image fusion, called EMOST, which combines a convolutional neural network (CNN) and a transformer. First, to extract more comprehensive feature information, an effective feature extraction module is proposed, which consists of an efficient dense block (EDB), an attention module (AM), a multiscale convolution block (MCB), and three sparse transformer blocks (STB). Meanwhile, a lightweight efficient model (EMO) is used in the feature extraction module to combine the efficiency of the CNN with the dynamic modeling capability of the transformer. Additionally, the STB is incorporated to adaptively retain the most useful self-attention values and remove as much redundant noise as possible through a top-k selection operator. Moreover, a novel feature fusion rule is designed to efficiently integrate the features. Experiments are conducted on four types of multimodal medical images. The proposed method shows higher performance than state-of-the-art methods in terms of quantitative and qualitative evaluations. The code of the proposed method is available at https://github.com/XUTauto/EMOST.
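The top-k selection idea behind the sparse transformer block can be sketched as follows; this is an interpretation of the abstract rather than the released EMOST code, which is available at the repository linked above.

```python
# Sketch of top-k sparse attention: keep only the k largest attention scores
# per query and suppress the rest before the softmax, discarding redundant
# low-relevance responses. An interpretation of the STB, not the EMOST code.
import torch
import torch.nn.functional as F

def topk_attention(q, k, v, top_k=16):
    """q, k, v: (B, heads, N, d). Returns attention output of the same shape."""
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale            # (B, heads, N, N)
    top_k = min(top_k, scores.shape[-1])
    kth = scores.topk(top_k, dim=-1).values[..., -1:]      # k-th largest score
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop small scores
    return F.softmax(scores, dim=-1) @ v

# Usage with dummy tensors:
# q = k = v = torch.randn(2, 4, 64, 32)
# out = topk_attention(q, k, v, top_k=8)
```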


Subjects
Neural Networks, Computer; Humans; Multimodal Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms
5.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894335

ABSTRACT

Multi-modal medical image fusion (MMIF) is crucial for disease diagnosis and treatment because the images reconstructed from signals collected by different sensors can provide complementary information. In recent years, deep learning (DL) based methods have been widely used in MMIF. However, these methods often adopt a serial fusion strategy without feature decomposition, causing error accumulation and confusion of characteristics across different scales. To address these issues, we propose the Coupled Image Reconstruction and Fusion (CIRF) strategy. Our method runs the image fusion and reconstruction branches in parallel, linked by a common encoder. First, CIRF uses the lightweight encoder to extract base and detail features through the Vision Transformer (ViT) and the Convolutional Neural Network (CNN) branches, respectively, where the two branches interact to supplement information. Then, the two types of features are fused separately via different blocks and finally decoded into fusion results. The loss function includes both the supervised loss from the reconstruction branch and the unsupervised loss from the fusion branch. As a whole, CIRF increases its expressivity by adding multi-task learning and feature decomposition. Additionally, we have explored the impact of image masking on the network's feature extraction ability and validated the generalization capability of the model. Experiments on three datasets demonstrate, both subjectively and objectively, that the images fused by CIRF exhibit appropriate brightness and smooth edge transitions, with more competitive evaluation metrics than those fused by several other traditional and DL-based methods.
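A hedged sketch of the multi-task training signal (supervised reconstruction loss plus unsupervised fusion loss) follows; the concrete terms and weights are assumptions, since the abstract does not specify them.

```python
# Sketch of a CIRF-style multi-task objective: each reconstruction branch must
# reproduce its own input (supervised), while the fusion branch is trained
# without ground truth against a simple proxy target. Terms and weights here
# are illustrative assumptions, not the authors' formulation.
import torch
import torch.nn.functional as F

def cirf_style_loss(recon_a, recon_b, img_a, img_b, fused, alpha=1.0):
    # Supervised part: reconstructions of each source modality
    recon_loss = F.l1_loss(recon_a, img_a) + F.l1_loss(recon_b, img_b)
    # Unsupervised part: keep the stronger response of the two modalities
    # at every pixel (a common proxy target in unsupervised fusion)
    intensity_target = torch.maximum(img_a, img_b)
    fusion_loss = F.l1_loss(fused, intensity_target)
    return recon_loss + alpha * fusion_loss
```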

6.
EJNMMI Rep ; 8(1): 17, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38872028

ABSTRACT

OBJECTIVES: 3D visualization of the segmented contacts of directional deep brain stimulation (DBS) electrodes is desirable, since knowledge about the position of every segmented contact could shorten the time needed for electrode programming. CT cannot yield images fitting that purpose, whereas highly resolved flat detector computed tomography (FDCT) can accurately image the inner structure of the electrode. This study aims to demonstrate the applicability of image fusion of highly resolved FDCT and CT to produce highly resolved images that preserve anatomical context, for subsequent fusion to preoperative MRI and eventual display of the segmented contacts within their anatomical context in future studies. MATERIAL AND METHODS: Retrospectively collected datasets from 15 patients who underwent bilateral directional DBS electrode implantation were used. After image analysis, a semi-automated 3D registration of CT and highly resolved FDCT followed by image fusion was performed. The registration accuracy was assessed by computing the target registration error. RESULTS: Our work demonstrated the feasibility of highly resolved FDCT for visualizing segmented electrode contacts in 3D. Semi-automatic image registration to CT was successfully implemented in all cases. Qualitative evaluation by two experts revealed good alignment of intracranial osseous structures. Additionally, the mean target registration error over all patients, based on the assessments of two raters, was 4.16 mm. CONCLUSION: Our work demonstrated the applicability of image fusion of highly resolved FDCT to CT as part of a potential workflow for subsequent fusion to MRI, placing the electrodes in an anatomical context.
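The target registration error reported here can be sketched as the mean distance between corresponding landmarks after applying the estimated rigid transform; landmark selection is rater-dependent, as in the study, and the transform comes from the registration step.

```python
# Sketch of the target registration error (TRE) used to assess CT/FDCT
# alignment: mean Euclidean distance between corresponding landmark positions
# after applying the estimated rigid transform (R, t).
import numpy as np

def target_registration_error(landmarks_moving, landmarks_fixed, R, t):
    """landmarks_*: (N, 3) arrays of corresponding points; R (3x3), t (3,)."""
    mapped = landmarks_moving @ R.T + t
    return np.linalg.norm(mapped - landmarks_fixed, axis=1).mean()

# The study reports this value averaged over patients and raters (4.16 mm).
```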

7.
Comput Biol Med ; 174: 108463, 2024 May.
Article in English | MEDLINE | ID: mdl-38640634

ABSTRACT

Medical image fusion can provide doctors with more detailed data and thus improve the accuracy of disease diagnosis. In recent years, deep learning has been widely used in the field of medical image fusion. Traditional medical image fusion methods operate directly on pixels, for example by superimposition. The introduction of deep learning has improved the effectiveness of medical image fusion, but these methods still suffer from problems such as edge blurring and information redundancy. In this paper, we propose a deep learning network model that integrates a Transformer with an improved DenseNet module to address these problems in medical images; the method can also be transferred to natural images. The combination of the Transformer and dense concatenation enhances the feature extraction capability of the method by limiting feature loss, which reduces the risk of edge blurring. We compared several representative traditional methods and more advanced deep learning methods with this method. The experimental results show that the Transformer and the improved DenseNet module have a strong capability for feature extraction, and the method yields good results both in terms of visual quality and objective image evaluation metrics.
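The dense concatenation that the improved DenseNet module relies on can be sketched as below; channel counts and depth are illustrative, not the paper's configuration.

```python
# Sketch of a dense block: every layer receives the concatenation of all
# previous feature maps, which limits feature loss along the network.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels=16, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth      # the next layer sees all previous outputs

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# DenseBlock(16)(torch.randn(1, 16, 64, 64)).shape -> (1, 64, 64, 64)
```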


Subjects
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer
8.
Comput Biol Med ; 173: 108381, 2024 May.
Article in English | MEDLINE | ID: mdl-38569237

ABSTRACT

Multimodal medical image fusion (MMIF) technology plays a crucial role in medical diagnosis and treatment by integrating different images to obtain fusion images with comprehensive information. Deep learning-based fusion methods have demonstrated superior performance, but some of them still encounter challenges such as imbalanced retention of color and texture information and low fusion efficiency. To alleviate these issues, this paper presents a real-time MMIF method called the lightweight residual fusion network (LRFNet). First, a feature extraction framework with three branches is designed. Two independent branches are used to fully extract brightness and texture information. The fusion branch enables information from different modalities to be interactively fused at a shallow level, thereby better retaining brightness and texture information. Furthermore, a lightweight residual unit is designed to replace the conventional residual convolution in the model, improving the fusion efficiency and reducing the overall model size by a factor of approximately five. Finally, considering that the high-frequency image decomposed by the wavelet transform contains abundant edge and texture information, an adaptive strategy is proposed for assigning weights to the loss function based on the information content in the high-frequency image. This strategy effectively guides the model toward preserving intricate details. The experimental results on MRI and functional images demonstrate that the proposed method exhibits superior fusion performance and efficiency compared to alternative approaches. The code of LRFNet is available at https://github.com/HeDan-11/LRFNet.
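The adaptive loss-weighting idea can be sketched as follows; the choice of wavelet, the energy measure, and the normalization are assumptions used only to illustrate how high-frequency content could drive the weights.

```python
# Sketch of wavelet-based adaptive loss weighting: decompose each source image,
# measure the energy of its high-frequency sub-bands, and derive a normalized
# weight so that modalities with richer edges/texture contribute more.
import numpy as np
import pywt

def high_freq_energy(img):
    """High-frequency energy of one image (float array in [0, 1])."""
    _, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return float(np.mean(cH**2 + cV**2 + cD**2))

def adaptive_weights(img_a, img_b):
    """Normalized per-modality weights for the detail term of the loss."""
    e_a, e_b = high_freq_energy(img_a), high_freq_energy(img_b)
    total = e_a + e_b + 1e-12
    return e_a / total, e_b / total
```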


Subjects
Image Processing, Computer-Assisted; Wavelet Analysis
9.
Med Biol Eng Comput ; 62(9): 2629-2651, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38656734

ABSTRACT

This paper proposes a medical image fusion method in the non-subsampled shearlet transform (NSST) domain to combine a gray-scale image with the respective pseudo-color image obtained through different imaging modalities. The proposed method applies a novel improved dual-channel pulse-coupled neural network (IDPCNN) model to fuse the high-pass sub-images, whereas the Prewitt operator is combined with maximum regional energy (MRE) to construct the fused low-pass sub-image. First, the gray-scale image and luminance of the pseudo-color image are decomposed using NSST to find the respective sub-images. Second, the low-pass sub-images are fused by the Prewitt operator and MRE-based rule. Third, the proposed IDPCNN is utilized to get the fused high-pass sub-images from the respective high-pass sub-images. Fourth, the luminance of the fused image is obtained by applying inverse NSST on the fused sub-images, which is combined with the chrominance components of the pseudo-color image to construct the fused image. A total of 28 diverse medical image pairs, 11 existing methods, and nine objective metrics are used in the experiment. Qualitative and quantitative fusion results show that the proposed method is competitive with and even outpaces some of the existing medical fusion approaches. It is also shown that the proposed method efficiently combines two gray-scale images.
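A maximum-regional-energy style rule for the low-pass sub-images, combined with the Prewitt operator as in the paper, might look like the sketch below; the window size and the exact activity measure are assumptions.

```python
# Sketch of an MRE-style rule for fusing low-pass sub-images: at each pixel,
# keep the coefficient from whichever source has higher local energy, measured
# here on Prewitt gradient magnitudes within a sliding window.
import numpy as np
from scipy.ndimage import prewitt, uniform_filter

def regional_energy(sub_img, window=7):
    gx, gy = prewitt(sub_img, axis=0), prewitt(sub_img, axis=1)
    return uniform_filter(gx**2 + gy**2, size=window)

def fuse_lowpass(low_a, low_b):
    mask = regional_energy(low_a) >= regional_energy(low_b)
    return np.where(mask, low_a, low_b)

# low_a, low_b would be the low-pass NSST sub-images of the two source images.
```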


Subjects
Multimodal Imaging; Neural Networks, Computer; Humans; Multimodal Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods
10.
Article in English | MEDLINE | ID: mdl-38644712

ABSTRACT

BACKGROUND: Diseases are medical conditions associated with specific signs and symptoms. A disease may be instigated by internal dysfunction or by external factors such as pathogens. Cerebrovascular disease can arise from diverse causes, including thrombosis, atherosclerosis, cerebral venous thrombosis, or an embolic arterial blood clot. OBJECTIVE: In this paper, the authors propose a robust framework for the detection of cerebrovascular diseases based on two different proposals, which were validated on additional datasets. METHODS: In proposed model 1, the Discrete Fourier transform is used to fuse CT and MR images, which are then classified using machine learning techniques and pre-trained models; proposed model 2 is a cascaded model. Performance was evaluated using parameters such as accuracy and loss. RESULTS: An accuracy of 92% was obtained with a Support Vector Machine using Gray Level Difference Statistics and shape features, with Principal Component Analysis as the feature selection technique, while Inception V3 achieved 95.6% accuracy and the cascaded model 96.21%. CONCLUSION: The cascaded model was further validated on other datasets, yielding accuracy improvements of 0.11% and 0.14% on the TCIA and BRaTS datasets, respectively.
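The Discrete Fourier transform fusion step of proposed model 1 could be sketched as below; the max-magnitude selection rule is an assumption, since the abstract does not state how the frequency coefficients are combined.

```python
# Sketch of a DFT-domain fusion step: transform both registered images, keep
# the coefficient with the larger magnitude at each frequency, and invert.
# The selection rule is an illustrative assumption, not the paper's method.
import numpy as np

def dft_fuse(ct, mr):
    """ct, mr: registered grayscale float arrays of identical shape."""
    F_ct, F_mr = np.fft.fft2(ct), np.fft.fft2(mr)
    fused_spectrum = np.where(np.abs(F_ct) >= np.abs(F_mr), F_ct, F_mr)
    return np.real(np.fft.ifft2(fused_spectrum))
```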

11.
BMC Med Imaging ; 24(1): 24, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38267874

ABSTRACT

With the rapid development of medical imaging and computer technology, machine learning-based artificial intelligence for computer-aided diagnosis has become an important part of modern medical diagnosis. With the application of medical image security technology, it has become clear that the main difficulty in its development lies in the inherent limitations of advanced image processing technology. This paper introduces the background of colorectal cancer diagnosis and monitoring, presents research on medical imaging artificial intelligence and machine learning for colorectal cancer diagnosis and monitoring, and concludes with an advanced computational intelligence system for the application of safe medical imaging. In the experimental part, a staging preparation stage was carried out; the staging preparation result of group Y was higher than that of group X, and the difference was statistically significant. The overall accuracy of multimodal medical image fusion was 69.5% in the pathological staging comparison. The diagnostic rate, the number of patients with effective treatment, and patient satisfaction were then analyzed; the average diagnostic rate of the new method was 8.75% higher than that of the traditional diagnostic method. With the development of computer science and technology, the field of application continues to expand, and computer-aided diagnosis technology combining computers and medical images has become a research hotspot.


Subjects
Artificial Intelligence; Colorectal Neoplasms; Humans; Machine Learning; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Colorectal Neoplasms/diagnostic imaging