Results 1 - 10 of 10
1.
J Imaging Inform Med ; 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39164454

ABSTRACT

In clinical practice, the anatomical classification of pulmonary veins plays a crucial role in the preoperative assessment of radiofrequency ablation for atrial fibrillation. Accurate classification of pulmonary vein anatomy helps physicians select appropriate mapping electrodes and avoid causing pulmonary arterial hypertension. Because pulmonary vein anatomy is diverse and its classes differ only subtly, and because the data distribution is imbalanced, deep learning models often extract deep features with poor expressive capability, leading to misjudgments and reduced classification accuracy. To address the imbalanced classification of left atrial pulmonary veins, this paper proposes DECNet, a network integrating multi-scale feature-enhanced attention with a dual-feature extraction classifier. The multi-scale feature-enhanced attention uses multi-scale information to guide the reinforcement of deep features, generating channel and spatial weights that enhance their expressive capability. The dual-feature extraction classifier assigns a fixed number of channels to each category so that all categories are evaluated equally, alleviating the learning bias and overfitting caused by data imbalance. Combining the two strengthens the expression of deep features, enabling accurate classification of left atrial pulmonary vein morphology and providing support for subsequent clinical treatment. The proposed method is evaluated on a dataset provided by the People's Hospital of Liaoning Province and on the publicly available DermaMNIST dataset, achieving average accuracies of 78.81% and 83.44%, respectively, demonstrating its effectiveness.
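As a rough illustration of the dual-feature extraction idea described above (a fixed block of channels reserved for each class), the following PyTorch sketch shows a minimal, hypothetical classifier head; the layer sizes and names are assumptions, not the DECNet implementation.

    import torch
    import torch.nn as nn

    class PerClassChannelClassifier(nn.Module):
        """Toy head that reserves a fixed block of channels for each class,
        so every class is scored from its own feature subset."""
        def __init__(self, in_channels: int, num_classes: int, channels_per_class: int = 32):
            super().__init__()
            self.num_classes = num_classes
            self.channels_per_class = channels_per_class
            # Project backbone features into num_classes * channels_per_class channels.
            self.proj = nn.Conv2d(in_channels, num_classes * channels_per_class, kernel_size=1)
            self.pool = nn.AdaptiveAvgPool2d(1)
            # One small scorer per class block.
            self.scorers = nn.ModuleList(
                [nn.Linear(channels_per_class, 1) for _ in range(num_classes)]
            )

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            x = self.pool(self.proj(feats)).flatten(1)          # (B, num_classes * cpc)
            blocks = x.view(-1, self.num_classes, self.channels_per_class)
            logits = torch.cat(
                [self.scorers[c](blocks[:, c]) for c in range(self.num_classes)], dim=1
            )
            return logits                                       # (B, num_classes)

    # Example: backbone features with 256 channels, 5 pulmonary-vein classes (assumed).
    head = PerClassChannelClassifier(in_channels=256, num_classes=5)
    print(head(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 5])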

2.
J Imaging Inform Med ; 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075250

ABSTRACT

In the domain of medical image segmentation, traditional diffusion probabilistic models are hindered by the local inductive biases of convolutional operations, which constrain their ability to model long-term dependencies and lead to inaccurate mask generation. The Transformer offers a remedy by avoiding the local inductive biases inherent in convolution, thereby improving segmentation precision. Currently, Transformer and convolution operations are mainly integrated in two forms, nesting and stacking; however, both address bias elimination at a relatively coarse granularity and fail to fully exploit the advantages of the two approaches. This paper therefore proposes a conditional diffusion segmentation model named TransDiffSeg, which combines the Transformer with the convolution operations of traditional diffusion models in a parallel manner, eliminating the accumulated local inductive bias of convolution at a finer granularity within each layer. In addition, an adaptive feature fusion block merges conditional semantic features and noise features, enhancing global semantic information and reducing the Transformer's sensitivity to noise. To validate how the granularity of bias elimination affects performance and how the Transformer alleviates the accumulated local inductive bias of convolution in diffusion probabilistic models, experiments are conducted on the AMOS22 and BTCV datasets. The results demonstrate that eliminating local inductive bias at a finer granularity significantly improves the segmentation performance of diffusion probabilistic models, and that the finer the granularity of bias elimination, the better the segmentation performance.
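A minimal sketch of the parallel combination described above: a convolution branch and a self-attention branch run side by side on the same features within one layer and their outputs are fused. The gating scheme and layer sizes below are illustrative assumptions, not TransDiffSeg's exact design.

    import torch
    import torch.nn as nn

    class ParallelConvAttnBlock(nn.Module):
        """Runs a convolution branch and a self-attention branch on the same
        features and fuses them, so the attention path can counter the conv
        path's local inductive bias within the layer itself."""
        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.GroupNorm(8, channels),
                nn.SiLU(),
            )
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(channels)
            # Learned gate deciding how much of each branch to keep.
            self.gate = nn.Parameter(torch.tensor(0.5))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            local = self.conv(x)
            tokens = self.norm(x.flatten(2).transpose(1, 2))      # (B, H*W, C)
            global_feat, _ = self.attn(tokens, tokens, tokens)
            global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
            return self.gate * local + (1 - self.gate) * global_feat

    x = torch.randn(1, 64, 16, 16)
    print(ParallelConvAttnBlock(64)(x).shape)  # torch.Size([1, 64, 16, 16])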

3.
Med Biol Eng Comput ; 62(10): 2999-3012, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38727759

ABSTRACT

In clinical practice, the morphology of the left atrial appendage (LAA) plays an important role in selecting closure devices for LAA closure procedures, and morphology determination depends on the segmentation results. The LAA occupies only a small part of the 3D medical image, so segmentation results tend to be biased toward the background region, which makes LAA segmentation challenging. In this paper, we propose a lightweight attention mechanism called fusion attention, which imitates human visual behavior: the 3D image of the LAA is processed by an overview observation followed by a detailed observation. In the overview stage, the image features are pooled along the three dimensions of length, width, and height, and the resulting features are separately fed into spatial attention and channel attention modules to learn the regions of interest. In the detailed stage, the attention results from the previous stage are fused by element-wise multiplication and combined with the original feature map to enhance feature learning. The fusion attention mechanism was evaluated on a left atrial appendage dataset provided by Liaoning Provincial People's Hospital, achieving an average Dice coefficient of 0.8855. The results indicate that fusion attention achieves better segmentation results on 3D images than existing lightweight attention mechanisms.
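The "overview then detail" behavior can be sketched roughly as follows: pool the 3D features along each of the three axes, derive channel and spatial weights from the pooled views, then multiply the fused weights back onto the original feature map. This is an assumed toy version, not the published fusion attention module.

    import torch
    import torch.nn as nn

    class FusionAttention3D(nn.Module):
        """Toy 'overview then detail' attention for 3D features."""
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            hidden = max(channels // reduction, 4)
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, channels)
            )
            self.spatial_conv = nn.Conv3d(1, 1, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, d, h, w = x.shape
            # Overview stage: directional pooling along depth, height and width.
            pool_d = x.mean(dim=2)                 # (B, C, H, W)
            pool_h = x.mean(dim=3)                 # (B, C, D, W)
            pool_w = x.mean(dim=4)                 # (B, C, D, H)
            # Channel weights from the globally pooled directional features.
            chan = (pool_d.mean(dim=(2, 3)) + pool_h.mean(dim=(2, 3)) + pool_w.mean(dim=(2, 3))) / 3
            chan = torch.sigmoid(self.channel_mlp(chan)).view(b, c, 1, 1, 1)
            # Spatial weights by broadcasting the directional maps back to 3D.
            spatial = (pool_d.unsqueeze(2) + pool_h.unsqueeze(3) + pool_w.unsqueeze(4)) / 3
            spatial = torch.sigmoid(self.spatial_conv(spatial.mean(dim=1, keepdim=True)))
            # Detail stage: element-wise fusion with the original feature map.
            return x * chan * spatial

    x = torch.randn(1, 32, 16, 32, 32)
    print(FusionAttention3D(32)(x).shape)  # torch.Size([1, 32, 16, 32, 32])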


Subjects
Atrial Appendage; Imaging, Three-Dimensional; Humans; Atrial Appendage/diagnostic imaging; Imaging, Three-Dimensional/methods; Algorithms; Image Processing, Computer-Assisted/methods; Atrial Fibrillation/surgery; Atrial Fibrillation/physiopathology; Atrial Fibrillation/diagnostic imaging
4.
J Imaging Inform Med ; 37(3): 1-16, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38347391

ABSTRACT

Convolutional neural networks have been widely applied to medical image segmentation. However, the local inductive bias of convolutional operations restricts the modeling of long-term dependencies. The Transformer enables the modeling of long-term dependencies and partially eliminates this local inductive bias, thereby improving the accuracy of tasks such as segmentation and classification. Researchers have proposed various hybrid structures combining the Transformer with convolutional neural networks. One strategy stacks Transformer blocks and convolutional blocks to remove the accumulated local bias of convolutional operations; another nests convolutional blocks and Transformer blocks to eliminate bias within each nested block. Because of the granularity at which these bias-elimination operations act, neither strategy fully exploits the potential of the Transformer. In this paper, a parallel hybrid segmentation model is proposed whose encoder contains a Transformer branch and a convolutional neural network branch. After parallel feature extraction, inter-layer information fusion and exchange of complementary information are performed between the two branches, extracting local and global features simultaneously while eliminating the local bias generated by convolutional operations within the current layer. A purely convolutional decoder produces the final segmentation results. To validate how the granularity of bias-elimination operations affects the effectiveness of local bias elimination, experiments were conducted on the FLARE21 and AMOS22 datasets. The average Dice coefficient reached 92.65% on FLARE21 and 91.61% on AMOS22, surpassing comparative methods. The experimental results demonstrate that a finer granularity of bias elimination leads to better performance.
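A hypothetical sketch of one such parallel encoder stage: a CNN branch and a Transformer branch process the same resolution in parallel, then exchange complementary information before passing features on. Module names and sizes are assumptions rather than the paper's architecture.

    import torch
    import torch.nn as nn

    class DualBranchEncoderLayer(nn.Module):
        """One encoder stage with a CNN branch and a Transformer branch run in
        parallel; after feature extraction the two branches exchange information
        so each layer corrects the conv branch's local bias."""
        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            self.attn = nn.TransformerEncoderLayer(
                d_model=channels, nhead=num_heads, dim_feedforward=2 * channels,
                batch_first=True, norm_first=True,
            )
            # 1x1 convs that mix the concatenated branch outputs back into each branch.
            self.to_cnn = nn.Conv2d(2 * channels, channels, 1)
            self.to_trans = nn.Conv2d(2 * channels, channels, 1)

        def forward(self, x_cnn, x_trans):
            b, c, h, w = x_cnn.shape
            f_cnn = self.cnn(x_cnn)
            tokens = x_trans.flatten(2).transpose(1, 2)           # (B, H*W, C)
            f_trans = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
            fused = torch.cat([f_cnn, f_trans], dim=1)
            # Inter-branch exchange of complementary information.
            return self.to_cnn(fused), self.to_trans(fused)

    x = torch.randn(1, 64, 32, 32)
    out_cnn, out_trans = DualBranchEncoderLayer(64)(x, x)
    print(out_cnn.shape, out_trans.shape)  # torch.Size([1, 64, 32, 32]) each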


Subjects
Neural Networks, Computer; Humans; Abdomen/diagnostic imaging; Abdomen/anatomy & histology; Image Processing, Computer-Assisted/methods; Algorithms; Tomography, X-Ray Computed; Databases, Factual
5.
Math Biosci Eng ; 19(12): 14074-14085, 2022 09 23.
Article in English | MEDLINE | ID: mdl-36654080

ABSTRACT

Accurate segmentation of abdominal tissues is a crucial task in radiation therapy planning for related diseases. However, abdominal tissue segmentation (liver, kidney) is difficult because of the low contrast between these tissues and their surrounding organs. In this paper, an attention-based deep learning method for automated abdominal tissue segmentation is proposed. In our method, image cropping is first applied to the original images. A U-Net model with an attention mechanism is then constructed to obtain an initial segmentation of the abdominal tissues. Finally, level set evolution consisting of three energy terms is used to refine the initial segmentation. The proposed model is evaluated across 470 subsets. For liver segmentation, the mean Dice coefficients are 96.2% and 95.1% on the FLARE21 and LiTS datasets, respectively. For kidney segmentation, the mean Dice coefficients are 96.6% and 95.7% on the FLARE21 and LiTS datasets, respectively. Experimental evaluation shows that the proposed method obtains better segmentation results than other methods.
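As an illustration of refining a network prediction with level set evolution, here is a toy region-based update with three energy terms (data fit, contour length, and a prior pulling toward the initial mask). These terms and parameters are assumptions for illustration and are not necessarily the three terms used in the paper.

    import numpy as np

    def refine_with_level_set(image, init_mask, iters=100, dt=0.5,
                              lam_region=1.0, lam_length=0.2, lam_prior=0.5):
        """Toy region-based level set refinement of an initial attention U-Net mask."""
        phi = np.where(init_mask > 0, 1.0, -1.0)            # signed initialization
        phi0 = phi.copy()
        for _ in range(iters):
            inside, outside = phi > 0, phi <= 0
            c1 = image[inside].mean() if inside.any() else 0.0
            c2 = image[outside].mean() if outside.any() else 0.0
            region = -(image - c1) ** 2 + (image - c2) ** 2   # data-fit force
            # Curvature approximated with a Laplacian acts as a length penalty.
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
            prior = phi0 - phi                                # pull toward initial mask
            phi = phi + dt * (lam_region * region + lam_length * lap + lam_prior * prior)
        return (phi > 0).astype(np.uint8)

    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
    init = np.zeros((64, 64), np.uint8); init[24:40, 24:40] = 1   # rough network output
    print(refine_with_level_set(img, init).sum())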


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Abdomen/diagnostic imaging; Liver/diagnostic imaging; Tomography, X-Ray Computed
6.
J Appl Clin Med Phys ; 23(1): e13482, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34873831

ABSTRACT

Accurate liver segmentation is essential for radiation therapy planning of hepatocellular carcinoma and for absorbed dose calculation. However, liver segmentation is challenging because of the anatomical variability in liver shape and size and the low contrast between the liver and its surrounding organs. We therefore propose a convolutional neural network (CNN) for automated liver segmentation. In our method, fractional differential enhancement is first applied for preprocessing. An initial liver segmentation is then obtained with a CNN. Finally, an accurate liver segmentation is achieved by evolving an active contour model. One hundred fifty CT scans were evaluated in the experiments. For liver segmentation, a Dice coefficient of 95.8%, a true positive rate of 95.1%, a positive predictive value of 93.2%, and a volume difference of 7% were obtained. These results show that the proposed method outperforms existing methods and provides a precise and robust segmentation estimate, which can also assist the manual liver segmentation task.
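The fractional differential enhancement step can be sketched with Grünwald-Letnikov coefficients applied as a small convolution mask. The kernel length and the way the directional responses are combined below are assumptions for illustration, not the paper's exact operator.

    import numpy as np
    from scipy.ndimage import convolve

    def gl_fractional_kernel(order, length=3):
        """Grunwald-Letnikov coefficients c_k = (-1)^k * binom(order, k),
        commonly used to build fractional-differential enhancement masks."""
        coeffs = [1.0]
        for k in range(1, length):
            coeffs.append(coeffs[-1] * (order - k + 1) / k * -1.0)
        return np.array(coeffs)

    def fractional_enhance(image, order=0.5):
        """Toy preprocessing: apply the fractional kernel along rows and columns
        and add the responses back to the image to boost edges and texture."""
        k = gl_fractional_kernel(order)
        gx = convolve(image.astype(float), k.reshape(1, -1), mode="nearest")
        gy = convolve(image.astype(float), k.reshape(-1, 1), mode="nearest")
        return image + np.abs(gx) + np.abs(gy)

    img = np.random.rand(64, 64)
    print(fractional_enhance(img).shape)  # (64, 64)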


Subjects
Deep Learning; Liver Neoplasms; Humans; Image Processing, Computer-Assisted; Liver Neoplasms/diagnostic imaging; Tomography, X-Ray Computed
7.
Comput Methods Programs Biomed ; 212: 106423, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34673377

ABSTRACT

BACKGROUND AND OBJECTIVES: Existing CNN-based methods for object segmentation use only the object regions as training labels, and the potentially useful boundaries annotated by radiologists are not used directly during training. We therefore propose a framework of two cascaded networks that integrates both region and boundary information to improve segmentation accuracy. METHODS: The first network extracts the boundary from the original images. The predicted boundary is dilated and, together with the corresponding original image, used to train the second network for the final segmentation. Compared with object regions alone, boundaries may provide additional useful local information for segmentation. The two cascaded networks were evaluated on three datasets: 40 CT scans for segmenting the esophagus, heart, trachea, and aorta; 247 chest radiographs for segmenting the lung, heart, and clavicle; and 101 retinal images for segmenting the optic disc and cup. The mean Dice coefficient, 90% Hausdorff distance, and Euclidean distance were used for quantitative evaluation. RESULTS: Compared with the baseline conventional U-Net, the two cascaded networks consistently improved the mean Dice coefficients and reduced the mean 90% Hausdorff and Euclidean distances for all objects; for certain objects the 90% Hausdorff distance was reduced by as much as a factor of ten. CONCLUSIONS: The boundary is very useful information for object segmentation, and integrating object boundary and region improves segmentation results compared with using the object region alone.
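A compact sketch of the cascade described above: the first network predicts a boundary map, the prediction is dilated, and the second network receives the original image together with the dilated boundary as an extra channel. The tiny stand-in networks below are placeholders for the full segmentation models.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySegNet(nn.Module):
        """Stand-in for a segmentation network (the paper uses full U-Nets)."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(16, out_ch, 1),
            )
        def forward(self, x):
            return self.net(x)

    boundary_net = TinySegNet(in_ch=1, out_ch=1)   # stage 1: image -> boundary map
    region_net = TinySegNet(in_ch=2, out_ch=1)     # stage 2: image + dilated boundary -> mask

    def cascaded_forward(image, dilation_kernel=5):
        boundary = torch.sigmoid(boundary_net(image))
        # Morphological dilation of the predicted boundary via max pooling.
        dilated = F.max_pool2d(boundary, kernel_size=dilation_kernel,
                               stride=1, padding=dilation_kernel // 2)
        # Stage 2 sees the original image plus the dilated boundary channel.
        return region_net(torch.cat([image, dilated], dim=1))

    x = torch.randn(1, 1, 128, 128)
    print(cascaded_forward(x).shape)  # torch.Size([1, 1, 128, 128])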


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed
8.
Health Inf Sci Syst ; 9(1): 10, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33643612

ABSTRACT

The COVID-19 coronavirus has spread rapidly around the world and has caused global panic. Chest CT images play a major role in confirming positive COVID-19 patients. Computer-aided diagnosis of COVID-19 from CT images based on artificial intelligence has been developed and deployed in some hospitals. However, environmental influences and lung motion degrade image quality, making the lung parenchyma and pneumonia areas unclear in CT images, which reduces the performance of artificial intelligence diagnostic algorithms for COVID-19. If chest CT images are reconstructed, the accuracy and performance of the aided diagnostic algorithm may be improved. In this paper, a new aided diagnostic algorithm for COVID-19 based on super-resolution reconstructed images and a convolutional neural network is presented. First, the SRGAN neural network is used to reconstruct super-resolution images from the original chest CT images. The super-resolution chest CT images are then classified as COVID-19 or non-COVID-19 by a VGG16 neural network. Finally, the performance of this method is verified on the public COVID-CT dataset and compared with other aided diagnosis methods for COVID-19. The experimental results show that improving data quality with the SRGAN network greatly improves the final classification accuracy when the data quality is low. The method achieves high accuracy, sensitivity, and specificity on the examined test image datasets and performs comparably to other state-of-the-art deep learning aided algorithms.
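A rough sketch of the two-stage pipeline: a super-resolution generator followed by a VGG16 classifier whose final layer is replaced for the two-class COVID/non-COVID output. The `sr_generator` below is only a placeholder standing in for a pretrained SRGAN generator, which would be loaded separately; the input sizes are assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Placeholder for a pretrained SRGAN generator loaded elsewhere; any module
    # mapping a low-resolution CT slice to a higher-resolution one would fit here.
    sr_generator: nn.Module = nn.Upsample(scale_factor=4, mode="bicubic")

    # VGG16 backbone with its final layer replaced for a 2-class output.
    classifier = models.vgg16(weights=None)
    classifier.classifier[6] = nn.Linear(4096, 2)

    def diagnose(ct_slice: torch.Tensor) -> torch.Tensor:
        """ct_slice: (B, 3, H, W) tensor. Returns 2-class logits."""
        with torch.no_grad():
            sr = sr_generator(ct_slice)                        # super-resolution step
        sr = nn.functional.interpolate(sr, size=(224, 224),    # VGG16 input size
                                       mode="bilinear", align_corners=False)
        return classifier(sr)

    print(diagnose(torch.randn(1, 3, 56, 56)).shape)  # torch.Size([1, 2])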

9.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 36(3): 453-459, 2019 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-31232549

ABSTRACT

A multi-label level set model for multiple sclerosis lesion segmentation is proposed, based on the shape, position, and other information of lesions in magnetic resonance images. First, a fuzzy c-means model is applied to extract the initial lesion region. Second, an intensity prior term and a label fusion term are constructed from the intensity information of the initial lesion region, and these two terms are integrated into a region-based level set model. The final lesion segmentation is obtained by evolving the level set contour. The experimental results show that the proposed method accurately and robustly extracts brain lesions from magnetic resonance images. The method significantly reduces the workload of radiologists, which is useful in clinical applications.
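The fuzzy c-means initialization step can be sketched as follows on voxel intensities, taking the brightest cluster as the initial lesion region. This toy implementation and the synthetic intensities are assumptions for illustration, not the paper's code or data.

    import numpy as np

    def fuzzy_cmeans(intensities, n_clusters=3, m=2.0, iters=50, seed=0):
        """Minimal fuzzy c-means on 1D voxel intensities."""
        rng = np.random.default_rng(seed)
        x = intensities.reshape(-1, 1).astype(float)
        u = rng.random((x.shape[0], n_clusters))
        u /= u.sum(axis=1, keepdims=True)                    # fuzzy memberships
        for _ in range(iters):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0)[:, None]   # weighted cluster means
            dist = np.abs(x - centers.T) + 1e-9              # (N, C) distances
            u = 1.0 / (dist ** (2 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)
        return u, centers.ravel()

    # Toy FLAIR-like intensities: background, normal tissue, hyperintense lesion.
    voxels = np.concatenate([np.random.normal(mu, 5, 500) for mu in (20, 80, 150)])
    u, centers = fuzzy_cmeans(voxels)
    lesion_cluster = int(np.argmax(centers))
    initial_lesion_mask = u.argmax(axis=1) == lesion_cluster
    print(initial_lesion_mask.sum())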


Subjects
Magnetic Resonance Imaging; Multiple Sclerosis/diagnostic imaging; Algorithms; Humans
10.
Adv Vis Comput ; 9474: 521-530, 2015 12.
Article in English | MEDLINE | ID: mdl-29034370

ABSTRACT

The detection of multiple sclerosis lesions is important for many neuroimaging studies. In this paper, a new automatic and robust algorithm for lesion segmentation from MR images is proposed. The method takes full advantage of decomposing MR images into a true image, which characterizes a physical property of the tissues, and a bias field, which accounts for intensity inhomogeneity. An energy function is defined in terms of the true image and the bias field, and its minimization yields the optimal segmentation of lesions and white matter. Postprocessing operations are then used to select the most plausible lesions from the resulting hyperintense signals. The experimental results show that our approach is effective and robust for lesion segmentation.
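A toy alternating scheme for the multiplicative decomposition image = bias field × true image: cluster the bias-corrected intensities into piecewise-constant tissue means, then re-estimate a smooth bias field from the residual ratio. This stands in for, but does not reproduce, the paper's energy minimization; the class count and smoothing scale are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def estimate_bias_and_true_image(image, n_classes=3, iters=20, sigma=8.0):
        """Illustrative alternating estimation of the true image and bias field."""
        bias = np.ones_like(image, dtype=float)
        centers = np.quantile(image, np.linspace(0.1, 0.9, n_classes))
        for _ in range(iters):
            corrected = image / np.maximum(bias, 1e-6)
            # Piecewise-constant "true image" via nearest-center assignment.
            labels = np.argmin(np.abs(corrected[..., None] - centers), axis=-1)
            centers = np.array([corrected[labels == k].mean() if (labels == k).any()
                                else centers[k] for k in range(n_classes)])
            true_img = centers[labels]
            # Smooth bias field estimated from the residual ratio.
            bias = gaussian_filter(image / np.maximum(true_img, 1e-6), sigma)
        return true_img, bias

    img = np.random.rand(64, 64) * np.linspace(0.5, 1.5, 64)   # simulated inhomogeneity
    true_img, bias = estimate_bias_and_true_image(img)
    print(true_img.shape, bias.shape)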
