Results 1 - 8 of 8
1.
Comput Biol Med ; 167: 107610, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37883853

ABSTRACT

Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, although they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, although they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling based on serially alternated projections, which causes error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with only a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference than SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
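
The parallel fusion of a linear scan-specific stream and a nonlinear scan-general stream can be sketched as below. This is a minimal illustration rather than the published PSFNet architecture: the layer sizes, the single scalar fusion weight, and the omission of data-consistency blocks are all simplifying assumptions.

```python
# Minimal sketch of a parallel-stream fusion module (not the authors' exact
# PSFNet): a linear scan-specific (SS) stream and a nonlinear scan-general
# (SG) stream run in parallel and are mixed with a learnable fusion weight.
import torch
import torch.nn as nn

class ParallelStreamFusion(nn.Module):
    def __init__(self, channels=2):  # 2 channels: real/imaginary parts (assumed)
        super().__init__()
        # SS stream: a single linear convolution (kept linear for efficiency)
        self.ss_stream = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        # SG stream: a small nonlinear CNN (normally learned from a training set)
        self.sg_stream = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
        # Learnable fusion parameter weighting the two streams
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        ss = self.ss_stream(x)
        sg = self.sg_stream(x)
        # Parallel fusion instead of serially alternated projections
        return self.alpha * ss + (1.0 - self.alpha) * sg

# Usage on a dummy undersampled image (batch, channels, height, width)
model = ParallelStreamFusion()
recon = model(torch.randn(1, 2, 128, 128))
print(recon.shape)  # torch.Size([1, 2, 128, 128])
```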


Subjects
Image Processing, Computer-Assisted; Rivers; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Radionuclide Imaging; Magnetic Resonance Imaging/methods
2.
Med Image Anal ; 88: 102872, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37384951

ABSTRACT

Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize the data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance.
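
The adaptation phase described above, in which the prior is updated to minimize a data-consistency loss, can be sketched as follows. The tiny CNN prior and the placeholder forward_op are illustrative assumptions; the actual AdaDiff prior is an adversarially trained diffusion model.

```python
# Schematic sketch of the second (adaptation) phase: refine the reconstruction
# by updating the prior so the re-synthesized image agrees with the measurements.
import torch
import torch.nn as nn

def adapt_prior(prior: nn.Module, x_init: torch.Tensor, y: torch.Tensor,
                forward_op, steps: int = 50, lr: float = 1e-4):
    """x_init: initial (phase-1) reconstruction; y: acquired measurements."""
    prior.train()
    opt = torch.optim.Adam(prior.parameters(), lr=lr)
    for _ in range(steps):
        x = prior(x_init)                                   # image proposed by the prior
        dc_loss = (forward_op(x) - y).abs().pow(2).mean()   # data-consistency loss
        opt.zero_grad()
        dc_loss.backward()
        opt.step()
    return prior(x_init).detach()

# Toy usage: identity "forward operator" and a tiny CNN prior (both assumptions)
prior = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
x0 = torch.randn(1, 1, 64, 64)    # phase-1 output (initial reconstruction)
y = torch.randn(1, 1, 64, 64)     # acquired (here: synthetic) measurements
x_refined = adapt_prior(prior, x0, y, forward_op=lambda x: x, steps=10)
```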


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Reproducibility of Results; Magnetic Resonance Imaging/methods; Neuroimaging; Learning; Brain/diagnostic imaging
3.
IEEE Trans Med Imaging ; 42(12): 3524-3539, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37379177

ABSTRACT

Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GANs). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.
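
A heavily simplified sketch of conditional sampling with a few large reverse diffusion steps is given below. The generator, noise schedule, and step count are illustrative assumptions; the published SynDiff additionally relies on adversarial training and a cycle-consistent architecture, neither of which is shown here.

```python
# Simplified conditional reverse sampling: a generator predicts the clean
# target image from a noisy target and the source-modality image, and only a
# few large reverse steps are taken for fast inference (toy noise schedule).
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: noisy target and source image stacked along channels
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x_noisy, source):
        return self.net(torch.cat([x_noisy, source], dim=1))

def reverse_sample(gen, source, num_steps=4, shape=(1, 1, 64, 64)):
    """Sample a target image with a handful of large reverse steps."""
    x = torch.randn(shape)                               # start from pure noise
    for t in reversed(range(num_steps)):
        x0_hat = gen(x, source)                          # predicted clean target
        noise_level = t / num_steps                      # toy linear schedule
        x = x0_hat + noise_level * torch.randn_like(x)   # re-noise for next step
    return x

gen = ConditionalGenerator()
target = reverse_sample(gen, source=torch.randn(1, 1, 64, 64))
```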


Subjects
Magnetic Resonance Imaging; Tomography, X-Ray Computed; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods
4.
IEEE J Biomed Health Inform ; 26(9): 4679-4690, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35767499

ABSTRACT

Melanoma is a fatal skin cancer that is nonetheless curable, with dramatically higher survival rates, when diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally separated cluster centers, as opposed to minimizing classification error, and is therefore less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms clustering with the conventional triplet loss as well as competing classifiers in both supervised and unsupervised settings.
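
One possible reading of a center-oriented, margin-free triplet objective is sketched below: each embedding is pulled toward its assigned cluster center and pushed away from the nearest competing center, without a fixed margin. This is an illustrative interpretation of the abstract, not the authors' exact COM-Triplet formulation.

```python
# Center-oriented, margin-free triplet-style loss (illustrative interpretation).
import torch

def center_triplet_loss(embeddings, labels, centers):
    """embeddings: (N, D), labels: (N,) int64, centers: (C, D)."""
    # Squared distances from every embedding to every center: (N, C)
    d = torch.cdist(embeddings, centers).pow(2)
    pos = d.gather(1, labels.view(-1, 1)).squeeze(1)       # distance to own center
    # Mask out the own center, then take the closest competing center
    d_masked = d.scatter(1, labels.view(-1, 1), float("inf"))
    neg = d_masked.min(dim=1).values
    # Margin-free: a smooth softplus of (pos - neg) instead of a hinge with margin
    return torch.nn.functional.softplus(pos - neg).mean()

# Toy usage with random embeddings, (pseudo-)labels, and two cluster centers
emb = torch.randn(8, 16)
lbl = torch.randint(0, 2, (8,))
ctr = torch.randn(2, 16)
print(center_triplet_loss(emb, lbl, ctr))
```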


Subjects
Melanoma; Skin Neoplasms; Cluster Analysis; Humans; Melanoma/diagnostic imaging; Melanoma/pathology; Neural Networks, Computer; Normal Distribution; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology
5.
Int J Imaging Syst Technol ; 31(1): 5-15, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32904960

ABSTRACT

Necessary screenings must be performed to control the spread of COVID-19 in daily life and to make a preliminary diagnosis of suspected cases. The long turnaround of laboratory tests and their sometimes unreliable results have led researchers to explore alternative approaches. Fast and accurate diagnosis is essential for effective intervention against COVID-19, and the information obtained from X-ray and computed tomography (CT) images is vital for clinical diagnosis. This study therefore aims to develop a machine learning method for the detection of viral epidemics by analyzing X-ray and CT images. Images from six classes, including coronavirus cases, are classified using a two-stage data enhancement approach. Since the dataset is small and imbalanced, a shallow image augmentation approach is used in the first stage. Hand-crafted feature extraction is preferable here because the newly created dataset is still too small to train a deep architecture. The synthetic minority over-sampling technique (SMOTE) therefore serves as the second data enhancement step. Finally, the feature vector is reduced in size with a stacked auto-encoder and principal component analysis to remove correlated features. The results show that the proposed method performs strongly, diagnosing COVID-19 quickly and effectively. It may also serve as a reference for future studies on small and imbalanced datasets.
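
The class-balancing and dimensionality-reduction steps can be sketched with standard libraries as below. The feature vectors are synthetic placeholders, the stacked auto-encoder stage is omitted, and the support-vector classifier is an assumed stand-in, since the abstract does not name the final classifier.

```python
# Minimal SMOTE + PCA pipeline on hand-crafted feature vectors (toy data).
import numpy as np
from imblearn.over_sampling import SMOTE            # pip install imbalanced-learn
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))                     # hand-crafted feature vectors
y = np.r_[np.zeros(270), np.ones(30)]               # imbalanced labels (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority
pca = PCA(n_components=32).fit(X_bal)               # drop correlated feature directions
clf = SVC().fit(pca.transform(X_bal), y_bal)
print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```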

6.
J Biomed Inform ; 113: 103638, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33271341

ABSTRACT

Nowadays, given the number of patients per specialist doctor, the need for automatic medical image analysis methods is clear. These systems, which are far more advantageous than manual analysis in both cost and time, rely on artificial intelligence (AI). AI mechanisms that mimic a specialist's decision-making process keep improving in diagnostic performance as the underlying technology develops. In this study, an AI method is proposed to effectively classify Gastrointestinal (GI) Tract image datasets containing only a small number of labeled samples. The proposed method uses a convolutional neural network (CNN), currently the most successful architecture for automatic classification, as its backbone. In our approach, a shallowly trained CNN needs to be supported by a strong classifier to classify imbalanced datasets robustly. To this end, the features from each pooling layer of the CNN are fed to an LSTM layer, and the outputs of all LSTM layers are combined to produce the classification. All experiments are carried out with AlexNet, GoogLeNet, and ResNet backbones to evaluate the contribution of the proposed residual LSTM structure fairly. In addition, experiments with 2000, 4000, and 6000 samples quantify the effect of sample size on the proposed method. The proposed method outperforms other state-of-the-art methods.
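
A rough sketch of feeding per-pooling-stage CNN features into an LSTM is shown below. The toy three-stage backbone, the global average pooling, and the projection sizes are assumptions; the study itself builds on AlexNet, GoogLeNet, and ResNet backbones.

```python
# Features taken after each pooling stage are summarized, projected to a common
# size, and fed as a sequence into an LSTM whose final state drives the classifier.
import torch
import torch.nn as nn

class CNNPoolLSTM(nn.Module):
    def __init__(self, num_classes=8, hidden=64):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # Project each stage's pooled features to a common embedding size
        self.proj = nn.ModuleList([nn.Linear(c, hidden) for c in (16, 32, 64)])
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        seq = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            pooled = x.mean(dim=(2, 3))          # global average pool: (B, C)
            seq.append(proj(pooled))
        seq = torch.stack(seq, dim=1)            # (B, num_stages, hidden)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])                  # classify from final LSTM state

logits = CNNPoolLSTM()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 8])
```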


Subjects
Artificial Intelligence; Neural Networks, Computer; Gastrointestinal Tract/diagnostic imaging; Humans
7.
Int J Med Inform ; 144: 104300, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33069058

ABSTRACT

OBJECTIVE: Hospital performance evaluation is vital for managing hospitals and for informing patients about what a hospital can offer. It also plays a key role in planning essential issues such as electrical energy management and cybersecurity in hospitals. Although the evaluation can be made objectively with the help of various indicators, it can become very complicated once subjective expert opinion enters the process. METHOD: Budget cuts in health expenditure worldwide make it necessary to use hospital resources as efficiently as possible, and the most effective way to do so is to choose the evaluation criteria well. Machine learning (ML) is now used to determine these criteria, which in the past were set by consulting experts. ML methods, which remain fully objective with respect to all indicators, offer fair and reliable results quickly and automatically. Based on this idea, this study provides an automated healthcare-system evaluation framework that assigns weights to specific indicators automatically. First, each indicator's suitability to be used as an input, an output, or both is measured. RESULTS: Based on this measurement, the indicators are divided into an input-only group (group A) and a group usable as both input and output (group B). In the second step, each indicator in group B is used in turn as the output of a random forest regression model, and the total effect of each input on that output is calculated. CONCLUSION: Finally, the total effect of each indicator on the healthcare system is determined. Thus, the whole system is evaluated objectively, instead of relying on a subjective evaluation based on a single output.
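
The indicator-weighting step can be sketched as below: each group-B indicator serves in turn as the target of a random forest regression, and the resulting feature importances are accumulated into per-indicator weights. The synthetic indicator matrix and the choice of group-B columns are illustrative assumptions.

```python
# Weight indicators by accumulating random forest feature importances across
# regressions that use each group-B indicator as the target (toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 6))          # rows: hospitals, cols: indicators
group_b = [3, 4, 5]                       # indicators usable as input and output

weights = np.zeros(data.shape[1])
for target in group_b:
    X = np.delete(data, target, axis=1)   # all other indicators as inputs
    y = data[:, target]
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    # Scatter importances back to the original indicator positions
    cols = [c for c in range(data.shape[1]) if c != target]
    weights[cols] += rf.feature_importances_

weights /= weights.sum()                  # normalized effect of each indicator
print(np.round(weights, 3))
```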


Subjects
Computer Security; Hospitals; Delivery of Health Care; Humans; Machine Learning
8.
J Digit Imaging ; 33(4): 958-970, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32378058

ABSTRACT

Recently, the incidence of skin cancer has increased considerably and is seriously threatening human health. Automatic detection of this disease, for which early diagnosis is critical, is quite challenging. Factors such as undesirable residues (hair, ruler markings), indistinct boundaries, variable contrast, shape differences, and color differences in skin lesion images make automatic analysis difficult. To overcome these challenges, a highly effective segmentation method based on a fully convolutional network (FCN) is presented in this paper. The proposed improved FCN (iFCN) architecture segments full-resolution skin lesion images without any pre- or post-processing. The key idea is to support the residual structure of the FCN architecture with spatial information. The resulting, more advanced residual design enables more precise detection of details along the lesion edges and allows an analysis that is independent of skin color. The method offers two contributions: locating the center of the lesion and clarifying edge details despite the undesirable artifacts. Two publicly available datasets, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets, are used to evaluate the performance of the iFCN method. The mean Jaccard index is 78.34%, the mean Dice score is 88.64%, and the mean accuracy is 95.30% on the ISBI 2017 test dataset. Furthermore, the mean Jaccard index is 87.1%, the mean Dice score is 93.02%, and the mean accuracy is 96.92% on the PH2 test dataset.
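
The reported evaluation metrics (Jaccard index and Dice score) for binary lesion masks can be computed as in the short sketch below; the masks used here are random placeholders.

```python
# Jaccard index and Dice score for a pair of binary segmentation masks.
import numpy as np

def jaccard_dice(pred: np.ndarray, gt: np.ndarray):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union if union else 1.0
    total = pred.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(256, 256))   # predicted mask (placeholder)
gt = rng.integers(0, 2, size=(256, 256))     # ground-truth mask (placeholder)
print(jaccard_dice(pred, gt))
```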


Subjects
Skin Diseases; Algorithms; Dermoscopy; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer