Results 1 - 20 of 111
1.
Neuroimage ; 285: 120490, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38103624

ABSTRACT

Identifying the location, spatial extent and electrical activity of distributed brain sources in the context of epilepsy through ElectroEncephaloGraphy (EEG) recordings is a challenging task because of the highly ill-posed nature of the underlying Electrophysiological Source Imaging (ESI) problem. To guarantee a unique solution, most existing ESI methods address this inverse problem by imposing physiological constraints. This paper proposes an efficient ESI approach based on simulation-driven deep learning. Epileptic high-resolution 256-channel scalp EEG (Hr-EEG) signals are simulated in a realistic manner to train the proposed patient-specific model. More specifically, a computational neural mass model developed in our team is used to generate the temporal dynamics of the activity of each dipole, while the forward problem is solved using a patient-specific three-shell realistic head model and the boundary element method. A Temporal Convolutional Network (TCN) is used in the proposed model to capture local spatial patterns. To enable the model to observe the EEG signals at different scale levels, a multi-scale strategy is leveraged to capture both overall and fine-grained features by adjusting the convolutional kernel size. A Long Short-Term Memory (LSTM) network is then used to extract temporal dependencies among the computed spatial features. The performance of the proposed method is evaluated on three different scenarios of realistic synthetic interictal Hr-EEG data, as well as on real interictal Hr-EEG data acquired in three patients with drug-resistant partial epilepsy during their presurgical evaluation. A performance comparison study is also conducted with two other deep learning-based methods and four classical ESI techniques. The proposed model achieved a Dipole Localization Error (DLE) of 1.39 and a Normalized Hamming Distance (NHD) of 0.28 in the case of one patch with an SNR of 10 dB.
In the case of two uncorrelated patches with an SNR of 10 dB, the obtained DLE and NHD were 1.50 and 0.28, respectively. Even in the more challenging scenario of two correlated patches with an SNR of 10 dB, the proposed approach still achieved a DLE of 3.74 and an NHD of 0.43. The results obtained on simulated data demonstrate that the proposed method outperforms existing methods for different signal-to-noise ratios and source configurations. The good behavior of the proposed method is also confirmed on real interictal EEG data. Its robustness with respect to noise makes it a promising alternative tool to localize epileptic brain areas and to reconstruct their electrical activities from EEG signals.
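For readers unfamiliar with the two metrics reported above, the following is a simplified NumPy sketch of one common reading of DLE (mean distance to the nearest true dipole) and NHD (normalized Hamming distance between binary source supports); the paper's exact matching and normalization conventions may differ.

```python
import numpy as np

def dipole_localization_error(est_pos, true_pos):
    """Mean Euclidean distance from each estimated dipole to its
    nearest true dipole (a common, simplified DLE definition)."""
    est_pos = np.asarray(est_pos, dtype=float)
    true_pos = np.asarray(true_pos, dtype=float)
    # pairwise distances, shape (n_est, n_true)
    d = np.linalg.norm(est_pos[:, None, :] - true_pos[None, :, :], axis=2)
    return d.min(axis=1).mean()

def normalized_hamming_distance(est_support, true_support):
    """Hamming distance between binary source-support vectors,
    normalized by the number of dipoles."""
    est = np.asarray(est_support, dtype=bool)
    true = np.asarray(true_support, dtype=bool)
    return np.count_nonzero(est ^ true) / est.size
```

Both metrics decrease toward zero as the estimated source configuration approaches the ground truth.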


Subjects
Deep Learning, Drug-Resistant Epilepsy, Epilepsy, Humans, Brain/diagnostic imaging, Epilepsy/diagnostic imaging, Electroencephalography/methods, Drug-Resistant Epilepsy/diagnostic imaging, Brain Mapping/methods
2.
MAGMA ; 36(5): 837-847, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36715885

ABSTRACT

OBJECTIVES: To assess the performance of different algorithms for the quantification of Intravoxel Incoherent Motion (IVIM) parameters D, f, [Formula: see text] in Vertebral Bone Marrow (VBM). MATERIALS AND METHODS: Five algorithms were studied: four deterministic algorithms (the One-Step and three segmented methods: Two-Step, Three-Step, and Fixed-[Formula: see text] algorithms) based on the least-squares (LSQ) method, and a Bayesian probabilistic algorithm. Numerical simulations were performed, and the IVIM parameters D, f, [Formula: see text] were quantified in vivo in the vertebral bone marrow of six healthy volunteers. One-way repeated-measures analysis of variance (ANOVA) followed by Bonferroni's multiple comparison test (p = 0.05) was applied. RESULTS: In the numerical simulations, the Bayesian algorithm provided the best estimation of D, f, [Formula: see text] compared to the deterministic algorithms. For in vivo VBM-IVIM, the values of D and f estimated by the Bayesian algorithm were close to those of the One-Step method, in contrast to the three segmented methods. DISCUSSION: The comparison of the five algorithms indicates that the Bayesian algorithm provides the best estimation of VBM-IVIM parameters, in both numerical simulations and in vivo data.
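To make the bi-exponential IVIM model and the segmented fitting idea concrete, here is a minimal NumPy sketch of a "Two-Step"-style fit: D and f from a log-linear fit to the high-b points (where perfusion is assumed fully attenuated), then the pseudo-diffusion coefficient by a coarse grid search. The b-value threshold and grid are illustrative choices, not the study's settings.

```python
import numpy as np

def ivim_signal(b, S0, f, D, Dstar):
    """Bi-exponential IVIM model: S(b) = S0*(f*exp(-b*D*) + (1-f)*exp(-b*D))."""
    return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

def segmented_fit(b, S, b_thresh=200.0):
    """Simplified segmented fit of (D, f, D*)."""
    b = np.asarray(b, float)
    S = np.asarray(S, float)
    # step 1: log-linear fit on high-b points gives D and the intercept
    hi = b >= b_thresh
    slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
    D = -slope
    # assume the minimum-b acquisition approximates S0
    S0 = S[b.argmin()]
    f = 1.0 - np.exp(intercept) / S0
    # step 2: coarse grid search for the pseudo-diffusion coefficient D*
    grid = np.linspace(1e-3, 0.1, 400)
    errs = [np.sum((ivim_signal(b, S0, f, D, ds) - S) ** 2) for ds in grid]
    Dstar = grid[int(np.argmin(errs))]
    return D, f, Dstar
```

A Bayesian fit, as favored by the study, would instead place priors on (D, f, D*) and estimate the posterior, which is more robust at low SNR.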


Subjects
Diffusion Magnetic Resonance Imaging, Computer-Assisted Image Processing, Humans, Diffusion Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods, Bone Marrow/diagnostic imaging, Bayes Theorem, Algorithms, Motion (Physics)
3.
J Digit Imaging ; 36(4): 1808-1825, 2023 08.
Article in English | MEDLINE | ID: mdl-36914854

ABSTRACT

Computed tomography (CT) is an imaging technique extensively used in medical treatment, but an excessive radiation dose in a CT scan is harmful to the human body. Decreasing the radiation dose results in increased noise and artifacts in the reconstructed image, blurring internal tissue and edge details. To obtain high-quality CT images, we present a multi-scale feature fusion network (MSFLNet) for low-dose CT (LDCT) denoising. In our MSFLNet, we combine multiple feature extraction modules, effective noise reduction modules, and attention-based fusion modules into a horizontally connected multi-scale structure that serves as the overall architecture of the network and produces feature maps at different levels and scales. We define a composite loss function for LDCT denoising, composed of a pixel-level loss based on MS-SSIM-L1 and an edge-based edge loss. In short, our approach learns a rich set of features that combine contextual information from multiple scales while maintaining the spatial details of denoised CT images. Our experimental results indicate that, compared with existing methods, the new model achieves a peak signal-to-noise ratio (PSNR) of 33.6490 dB and a structural similarity (SSIM) of 0.9174 on CT images of the AAPM dataset, and it also achieves good results on the Piglet dataset at different doses. The results further show that the method removes noise and artifacts while effectively preserving the structure and texture information of CT images.
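The PSNR figure quoted above follows the standard definition, 10·log10(MAX² / MSE). A minimal NumPy version is given below; the data-range convention (max minus min of the reference) is an assumption, as implementations vary.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    reference = np.asarray(reference, float)
    test = np.asarray(test, float)
    if data_range is None:
        # assumed convention: dynamic range of the reference image
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR means the denoised image is closer, pixel-wise, to the reference full-dose image.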


Assuntos
Artefatos , Tomografia Computadorizada por Raios X , Animais , Humanos , Suínos , Doses de Radiação , Tomografia Computadorizada por Raios X/métodos , Razão Sinal-Ruído , Processamento de Imagem Assistida por Computador/métodos , Algoritmos
4.
J Xray Sci Technol ; 31(4): 757-775, 2023.
Article in English | MEDLINE | ID: mdl-37212059

ABSTRACT

BACKGROUND: In view of the health risks posed by X-ray radiation, the main goal of the present research is to achieve high-quality CT images while reducing the X-ray radiation dose. In recent years, convolutional neural networks (CNN) have shown excellent performance in removing low-dose CT noise. However, previous work mainly focused on deepening CNN architectures and on feature extraction, without considering the fusion of features from the frequency domain and the image domain. OBJECTIVE: To address this issue, we propose to develop and test a new LDCT image denoising method based on a dual-domain fusion deep convolutional neural network (DFCNN). METHODS: This method deals with two domains: the discrete cosine transform (DCT) domain and the image domain. In the DCT domain, we design a new residual CBAM network to enhance the internal and external relations of different channels while reducing noise, promoting richer image structure information. For the image domain, we propose a top-down multi-scale codec network as the denoising network to obtain more acceptable edges and textures while capturing multi-scale information. Then, the feature images of the two domains are fused by a combination network. RESULTS: The proposed method was validated on the Mayo dataset and the Piglet dataset. The denoising algorithm achieves the best subjective and objective evaluation indexes compared with other state-of-the-art methods reported in previous studies. CONCLUSIONS: The study results demonstrate that with the new fusion model, denoising results drawing on both the image domain and the DCT domain are better than those of models developed using features extracted from the single image domain.
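For readers unfamiliar with the DCT domain mentioned above, a minimal orthonormal 2D DCT/IDCT pair can be written with a separable transform matrix. This is a sketch of the transform itself, not of the paper's network pipeline.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)  # first row scaled for orthonormality
    return C

def dct2(block):
    """2D DCT of a square block: C @ X @ C.T (separable transform)."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    """Inverse 2D DCT (C is orthogonal, so its inverse is its transpose)."""
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C
```

Because C is orthogonal, the transform is exactly invertible, so a network can denoise DCT coefficients and map back to the image domain without information loss.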


Assuntos
Redes Neurais de Computação , Tomografia Computadorizada por Raios X , Animais , Suínos , Tomografia Computadorizada por Raios X/métodos , Razão Sinal-Ruído , Algoritmos , Processamento de Imagem Assistida por Computador/métodos
5.
BMC Oral Health ; 23(1): 191, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37005593

ABSTRACT

BACKGROUND: The purpose of this study was to evaluate the accuracy of automatic cephalometric landmark localization and measurements using cephalometric analysis via artificial intelligence (AI) compared with computer-assisted manual analysis. METHODS: Reconstructed lateral cephalograms (RLCs) from cone-beam computed tomography (CBCT) in 85 patients were selected. Computer-assisted manual analysis (Dolphin Imaging 11.9) and AI automatic analysis (Planmeca Romexis 6.2) were used to locate 19 landmarks and obtain 23 measurements. Mean radial error (MRE) and successful detection rate (SDR) values were calculated to assess the accuracy of automatic landmark digitization. Paired t tests and Bland-Altman plots were used to compare the differences and consistency in cephalometric measurements between the manual and automatic analysis programs. RESULTS: The MRE for the 19 cephalometric landmarks was 2.07 ± 1.35 mm with the automatic program. The average SDRs within 1 mm, 2 mm, 2.5 mm, 3 mm and 4 mm were 18.82%, 58.58%, 71.70%, 82.04% and 91.39%, respectively. Soft tissue landmarks (1.54 ± 0.85 mm) were the most consistent, while dental landmarks (2.37 ± 1.55 mm) showed the most variation. In total, 15 out of 23 measurements were within the clinically acceptable level of accuracy of 2 mm or 2°. The rates of consistency within the 95% limits of agreement were above 90% for all measurement parameters. CONCLUSION: Automatic analysis software produces cephalometric measurements that are nearly accurate enough for clinical work. Nevertheless, automatic cephalometry is not yet capable of completely replacing manual tracing. Additional manual supervision and adjustment of automatic programs can increase accuracy and efficiency.
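The MRE and SDR metrics used above can be sketched directly from landmark coordinates: MRE is the mean Euclidean (radial) error, and each SDR is the fraction of landmarks whose error falls within a given threshold. The thresholds below mirror the abstract; the 2D coordinates are illustrative.

```python
import numpy as np

def mre_sdr(pred, truth, thresholds=(1.0, 2.0, 2.5, 3.0, 4.0)):
    """Mean radial error (same units as the coordinates, e.g. mm) and
    successful detection rates at each threshold."""
    pred = np.asarray(pred, float)
    truth = np.asarray(truth, float)
    r = np.linalg.norm(pred - truth, axis=1)  # radial error per landmark
    sdr = {t: float(np.mean(r <= t)) for t in thresholds}
    return r.mean(), sdr
```

SDR is monotonically non-decreasing in the threshold, which matches the rising percentages reported in the abstract.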


Assuntos
Inteligência Artificial , Software , Cefalometria/métodos , Reprodutibilidade dos Testes , Radiografia , Tomografia Computadorizada de Feixe Cônico/métodos , Imageamento Tridimensional/métodos
6.
J Opt Soc Am A Opt Image Sci Vis ; 39(10): 1929-1938, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36215566

ABSTRACT

In low-dose computed tomography (LDCT) denoising tasks, it is often difficult to balance edge/detail preservation and noise/artifact reduction. To solve this problem, we propose a dual convolutional neural network (CNN) based on edge feature extraction (Ed-DuCNN) for LDCT. Ed-DuCNN consists of two branches. One branch is the edge feature extraction subnet (Edge_Net) that can fully extract the edge details in the image. The other branch is the feature fusion subnet (Fusion_Net) that introduces an attention mechanism to fuse edge features and noisy image features. Specifically, first, shallow edge-specific detail features are extracted by trainable Sobel convolutional blocks and then are integrated into Edge_Net together with the LDCT images to obtain deep edge detail features. Finally, the input image, shallow edge detail, and deep edge detail features are fused in Fusion_Net to generate the final denoised image. The experimental results show that the proposed Ed-DuCNN can achieve competitive performance in terms of quantitative metrics and visual perceptual quality compared with that of state-of-the-art methods.
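The trainable Sobel blocks mentioned above start from the classical fixed Sobel kernels. As a rough, non-trainable illustration (NumPy only, not the paper's implementation), edge maps can be extracted like this:

```python
import numpy as np

# classical fixed Sobel kernels (the paper makes these trainable)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation with a 3x3 kernel (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_edges(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

In the paper's setting, such edge responses are fed into Edge_Net alongside the LDCT image so that edge structure is preserved during denoising.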


Assuntos
Redes Neurais de Computação , Tomografia Computadorizada por Raios X , Processamento de Imagem Assistida por Computador/métodos , Razão Sinal-Ruído , Tomografia Computadorizada por Raios X/métodos
7.
Hum Brain Mapp ; 42(12): 3922-3933, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33969930

ABSTRACT

The pathophysiology of major depressive disorder (MDD) has been shown to be highly associated with the dysfunctional integration of brain networks. It is therefore imperative to explore neuroimaging biomarkers to aid diagnosis and treatment. In this study, we developed a spatiotemporal graph convolutional network (STGCN) framework to learn discriminative features from functional connectivity for automatic diagnosis and treatment response prediction of MDD. Briefly, dynamic functional networks were first obtained from resting-state fMRI with the sliding temporal window method. Second, a novel STGCN approach was proposed by introducing spatial graph attention convolution (SGAC) and temporal fusion modules. The SGAC improves the feature learning ability, and anatomy-prior-guided pooling was developed to enable feature dimension reduction. The temporal fusion module captures the dynamic features of functional connectivity between adjacent sliding windows. Finally, the proposed STGCN approach was applied to the tasks of diagnosis and antidepressant treatment response prediction for MDD. The performance of the framework was comprehensively examined with large cohorts of clinical data, which demonstrated its effectiveness in classifying MDD patients and predicting treatment response. The sound performance suggests the potential of the STGCN for clinical use in diagnosis and treatment prediction.
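The sliding-window step described above can be sketched in a few lines: each temporal window of the regional time series yields one correlation (connectivity) matrix. Window length and step below are illustrative, not the study's values.

```python
import numpy as np

def dynamic_fc(ts, win_len, step):
    """Sliding-window dynamic functional connectivity.

    ts: array of shape (time, regions).
    Returns an array of shape (n_windows, regions, regions), one
    correlation matrix per temporal window."""
    T, R = ts.shape
    mats = []
    for start in range(0, T - win_len + 1, step):
        window = ts[start:start + win_len]          # (win_len, regions)
        mats.append(np.corrcoef(window, rowvar=False))
    return np.stack(mats)
```

The resulting sequence of matrices is what a spatiotemporal model such as the STGCN consumes: spatial structure within each matrix, temporal structure across the window axis.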


Assuntos
Encéfalo/diagnóstico por imagem , Conectoma/métodos , Aprendizado Profundo , Transtorno Depressivo Maior/diagnóstico por imagem , Imageamento por Ressonância Magnética/métodos , Rede Nervosa/diagnóstico por imagem , Adulto , Encéfalo/fisiopatologia , Transtorno Depressivo Maior/fisiopatologia , Humanos , Rede Nervosa/fisiopatologia , Prognóstico
8.
Ann Vasc Surg ; 74: 220-228, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33508451

ABSTRACT

BACKGROUND: Stanford type-B aortic dissection (TBAD) is commonly treated by thoracic endovascular aortic repair (TEVAR). Usually, the implanted stent-grafts do not cover the entire dissection-affected region in patients whose dissection extends beyond the thoracic aorta, so the fate of the uncovered aortic segment is uncertain. This study used three-dimensional (3D) measurement of aortic morphological changes to classify the different remodeling outcomes of TBAD patients after TEVAR, and hypothesized that not only initial morphological features, but also their changes over time at follow-up, are associated with remodeling. METHODS: Forty-one TBAD patients who underwent TEVAR and CT angiography before and after the intervention (two or more follow-ups) were included in this study. According to the false-lumen volume variations post-TEVAR, patients who had abdominal aortic expansion at the second follow-up were classified into the Enlarged group (n = 12, 29%) and the remaining into the Stable group (n = 29, 71%). 3D morphological parameters were extracted from precise reconstructions of the imaging datasets. Statistical differences in 3D morphological parameters over time between the two groups and the relationships among these parameters were analyzed. RESULTS: In the Enlarged group, the number of all tears before TEVAR was significantly higher (P = 0.022), and the size of all tears at the first and second follow-ups post-TEVAR was significantly larger than in the Stable group (P = 0.008 and P = 0.007). The location of the primary tear was significantly higher (P = 0.031) in the Stable group. Cross-sectional analysis of several slices below the primary tear before TEVAR showed different shape features of the false lumen in the Stable (cone-like) and Enlarged (hourglass-like) groups. The number of tears before TEVAR had a positive correlation with the post-TEVAR development of the dissection (r = 0.683, P = 0.00).
CONCLUSION: The results in this study indicated that the TBAD patients with larger tear areas, more re-entry tears and with the primary tear proximal to the arch would face a higher risk of negative remodeling after TEVAR.


Assuntos
Aorta Abdominal/diagnóstico por imagem , Aorta Torácica/cirurgia , Aneurisma da Aorta Torácica/cirurgia , Dissecção Aórtica/cirurgia , Aortografia , Implante de Prótese Vascular , Angiografia por Tomografia Computadorizada , Procedimentos Endovasculares , Imageamento Tridimensional , Adulto , Idoso , Dissecção Aórtica/diagnóstico por imagem , Dissecção Aórtica/fisiopatologia , Aorta Abdominal/fisiopatologia , Aorta Torácica/diagnóstico por imagem , Aorta Torácica/fisiopatologia , Aneurisma da Aorta Torácica/diagnóstico por imagem , Aneurisma da Aorta Torácica/fisiopatologia , Prótese Vascular , Implante de Prótese Vascular/efeitos adversos , Implante de Prótese Vascular/instrumentação , Dilatação Patológica , Procedimentos Endovasculares/efeitos adversos , Procedimentos Endovasculares/instrumentação , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Valor Preditivo dos Testes , Estudos Retrospectivos , Medição de Risco , Fatores de Risco , Stents , Fatores de Tempo , Resultado do Tratamento , Remodelação Vascular
9.
BMC Med Imaging ; 21(1): 141, 2021 10 02.
Article in English | MEDLINE | ID: mdl-34600478

ABSTRACT

BACKGROUND: Determining the correct X-ray angiography viewing angle is an important issue during thoracic endovascular aortic repair (TEVAR). An inaccurate projection angle (today determined manually by physicians according to their personal experience) may affect the placement of the stent and cause vascular occlusion or endoleak. METHODS: Based on a computed tomography angiography (CTA) image acquired before TEVAR, an adaptive optimization algorithm is proposed to determine the optimal viewing angle of the angiogram automatically. This optimal view aims to avoid any overlap between the left common carotid artery and the left subclavian artery. Moreover, the proposed optimization procedure exploits the patient-specific morphology to adaptively reduce the potential foreshortening effect. RESULTS: Experimental results on thirty-five patients demonstrate that the optimal angiographic viewing angle obtained with the proposed method does not differ significantly from expert practice (p = 0.0678). CONCLUSION: We propose a method that uses the CTA image acquired before TEVAR to automatically calculate the optimal C-arm angle. This method has the potential to assist surgeons during interventional procedures by providing a shorter procedure time, less radiation exposure, and less contrast injection.


Assuntos
Algoritmos , Aorta Torácica/diagnóstico por imagem , Aneurisma da Aorta Torácica/cirurgia , Dissecção Aórtica/cirurgia , Aortografia/métodos , Angiografia por Tomografia Computadorizada/métodos , Procedimentos Endovasculares , Dissecção Aórtica/diagnóstico por imagem , Aorta Torácica/cirurgia , Aneurisma da Aorta Torácica/diagnóstico por imagem , Procedimentos Endovasculares/métodos , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Stents , Artéria Subclávia/diagnóstico por imagem
10.
BMC Med Imaging ; 20(1): 37, 2020 04 15.
Article in English | MEDLINE | ID: mdl-32293303

ABSTRACT

BACKGROUND: Renal cancer is one of the 10 most common cancers in humans. Laparoscopic partial nephrectomy (LPN) is an effective way to treat renal cancer. Localization and delineation of the renal tumor from pre-operative CT angiography (CTA) is an important step in LPN surgery planning. Recently, with the development of deep learning, deep neural networks can be trained to provide accurate pixel-wise renal tumor segmentation in CTA images. However, constructing a training dataset with a large number of pixel-wise annotations is a time-consuming task for radiologists. Therefore, weakly-supervised approaches have attracted increasing research interest. METHODS: In this paper, we propose a novel weakly-supervised convolutional neural network (CNN) for renal tumor segmentation. A three-stage framework is introduced to train the CNN with weak annotations of renal tumors, i.e., bounding boxes of renal tumors. The framework includes pseudo-mask generation, grouping, and weighted training phases. Clinical abdominal CT angiographic images of 200 patients were used for the evaluation. RESULTS: Extensive experimental results show that the proposed method achieves a higher Dice similarity coefficient (DSC) of 0.826 than two other existing weakly-supervised deep neural networks. Furthermore, the segmentation performance is close to that of a fully supervised deep CNN. CONCLUSIONS: The proposed strategy improves not only the efficiency of network training but also the precision of the segmentation.
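The DSC reported above is the standard overlap measure between a predicted and a ground-truth binary mask; a minimal NumPy version is:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2*|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.count_nonzero(pred & truth)
    return 2.0 * inter / (np.count_nonzero(pred) + np.count_nonzero(truth) + eps)
```

DSC ranges from 0 (no overlap) to 1 (perfect agreement), so 0.826 indicates substantial but imperfect overlap with the annotated tumors.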


Assuntos
Angiografia por Tomografia Computadorizada/métodos , Processamento de Imagem Assistida por Computador/métodos , Neoplasias Renais/diagnóstico por imagem , Competência Clínica , Humanos , Neoplasias Renais/irrigação sanguínea , Redes Neurais de Computação , Período Pré-Operatório , Aprendizado de Máquina Supervisionado
11.
Entropy (Basel) ; 21(2)2019 Feb 18.
Article in English | MEDLINE | ID: mdl-33266904

ABSTRACT

This paper introduces a new nonrigid registration approach for medical images applying an information-theoretic measure based on Arimoto entropy with gradient distributions. A normalized dissimilarity measure based on Arimoto entropy is presented, which is employed to measure the independence between two images. In addition, a regularization term is integrated into the cost function to obtain a smooth elastic deformation. To take the spatial information between voxels into account, a distance between gradient distributions is constructed. The goal of nonrigid alignment is to find the optimal solution of a cost function including the dissimilarity measure, the regularization term, and the distance term between the gradient distributions of the two images to be registered; this cost function reaches a minimum when the two misaligned images are perfectly registered, and it is minimized with the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization scheme. To evaluate the presented algorithm for non-rigid medical image registration, experiments on simulated three-dimensional (3D) brain magnetic resonance (MR) images, real 3D thoracic computed tomography (CT) volumes and 3D cardiac CT volumes were carried out using the elastix package. Comparison studies including mutual information (MI) and the approach without spatial information were conducted. These results demonstrate a slight improvement in the accuracy of non-rigid registration.
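As a point of reference for the comparison study above, here is a minimal joint-histogram estimate of the mutual-information (MI) baseline, not the paper's Arimoto-entropy measure. The bin count is an illustrative choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images from their joint intensity histogram:
    MI = sum over (x, y) of p(x,y) * log( p(x,y) / (p(x)*p(y)) )."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In registration, MI (or here, the Arimoto-based dissimilarity) is evaluated over candidate deformations: the better aligned the images, the higher their statistical dependence.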

12.
BMC Med Imaging ; 18(1): 9, 2018 05 09.
Article in English | MEDLINE | ID: mdl-29739350

ABSTRACT

BACKGROUND: Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of significant importance in clinical applications and neuroscience research. Accurate segmentation is challenging due to tissue heterogeneity, which is caused by noise, bias field and partial volume effects. METHODS: To overcome this limitation, this paper presents a novel algorithm for brain tissue segmentation based on supervoxels and graph filtering. First, an effective supervoxel method is employed to generate supervoxels for the 3D MRI image. Second, the supervoxels are classified into different types of tissue by filtering of graph signals. RESULTS: The performance is evaluated on the BrainWeb 18 dataset and the Internet Brain Segmentation Repository (IBSR) 18 dataset. The proposed method achieves mean Dice similarity coefficients (DSC) of 0.94, 0.92 and 0.90 for the segmentation of white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) on the BrainWeb 18 dataset, and mean DSCs of 0.85, 0.87 and 0.57 for the segmentation of WM, GM and CSF on the IBSR 18 dataset. CONCLUSIONS: The proposed approach can well discriminate different types of brain tissue in brain MRI images, and has high potential for clinical application.


Assuntos
Encéfalo/anatomia & histologia , Imageamento por Ressonância Magnética/métodos , Reconhecimento Automatizado de Padrão/métodos , Algoritmos , Bases de Dados Factuais , Humanos
13.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 35(5): 665-671, 2018 10 25.
Article in Chinese | MEDLINE | ID: mdl-30370703

ABSTRACT

The objective is to characterize effective brain connectivity from epileptic electroencephalogram (EEG) signals recorded with depth electrodes in the cerebral cortex of patients suffering from refractory epilepsy during their epileptic seizures. The Wiener-Granger Causality Index (WGCI) is a well-known measure that can be used to detect causal relations of interdependence in these kinds of EEG signals. It is based on the linear autoregressive model, and the estimation of the model parameters plays an important role in the accuracy and robustness of the WGCI for studying effective brain connectivity. Focusing on this issue, a modified Akaike information criterion algorithm is introduced into the computation of the WGCI to estimate the orders of the underlying models and thereby improve the performance of the WGCI in detecting effective brain connectivity. Experimental results demonstrate the good performance of the proposed algorithm in characterizing the information flow both in a linear stochastic system and in a physiology-based model.
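The WGCI described above compares the residual variance of a restricted autoregressive (AR) model of y (using only y's past) against a full model (using the past of both y and x). A minimal least-squares sketch follows; here the model order is fixed rather than selected by the modified Akaike criterion the paper introduces.

```python
import numpy as np

def ar_residual_var(target, regressors, order):
    """Least-squares AR fit of `target` on the past of each regressor
    series; returns the residual variance."""
    T = len(target)
    rows = []
    for t in range(order, T):
        # lagged values of every regressor series, most recent first
        rows.append(np.concatenate([reg[t - order:t][::-1] for reg in regressors]))
    X = np.array(rows)
    y = target[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

def wgci(x, y, order=2):
    """Wiener-Granger Causality Index from x to y:
    ln( var(restricted AR of y) / var(full AR of y with x's past) )."""
    restricted = ar_residual_var(y, [y], order)
    full = ar_residual_var(y, [y, x], order)
    return np.log(restricted / full)
```

A WGCI near zero means x's past adds no predictive power for y; a large positive value indicates an information flow from x to y.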

14.
J Acoust Soc Am ; 141(1): EL38, 2017 01.
Article in English | MEDLINE | ID: mdl-28147571

ABSTRACT

Multiple-raypath propagation is caused by reflection and refraction at the surface and bottom of the water column. In this study, an active wideband higher-order separation method is proposed, which enables the separation of raypaths corrupted by colored noise (as typically found in ocean environments) in the angle-versus-time domain. A comparative study shows that the proposed algorithm achieves a more accurate separation than other algorithms. Moreover, with the proposed approach, it is not necessary to assume that the number of sensors is larger than the number of sources. Furthermore, numerical results validate the noise suppression property of the proposed method.

15.
J Acoust Soc Am ; 142(4): EL408, 2017 10.
Article in English | MEDLINE | ID: mdl-29092564

ABSTRACT

The application of the root multiple signal classification (root-MUSIC) algorithm to raypath separation was motivated by its dramatic reduction in processing time compared with the multiple signal classification (MUSIC) algorithm. However, the algorithm provides classification only in the direction-of-arrival domain and fails to separate raypaths arriving at the array with similar directions of arrival. Moreover, for many applications in shallow water (such as ocean acoustic tomography and active sonar), the emitted signal is known and can be used as a priori information to improve the resolution. Thus, in this study, a two-dimensional active wideband classification algorithm is developed by examining the roots of the spectrum polynomial in the angle-versus-time domain. A two-step strategy is developed to enable extension to the two-dimensional case. Simulation results confirm that the proposed algorithm achieves almost identical resolution to the existing two-dimensional algorithms while offering a significant reduction in computation time.

16.
Biomed Eng Online ; 15: 5, 2016 Jan 13.
Article in English | MEDLINE | ID: mdl-26758740

ABSTRACT

BACKGROUND: The low quality of diffusion tensor images (DTI) can affect the accuracy of oncology diagnosis. METHODS: We present a novel sparse-representation-based denoising method for three-dimensional DTI that learns an adaptive dictionary from the context redundancy between neighboring slices. In this study, the context redundancy among adjacent slices of the diffusion-weighted imaging volumes is utilized to train sparsifying dictionaries. Therefore, higher redundancy can be achieved for a better description of the image with lower computational complexity. The optimization problem is solved efficiently using an iterative block-coordinate relaxation method. RESULTS: The effectiveness of the proposed method has been assessed on both simulated and real experimental DTI datasets. Qualitative and quantitative evaluations demonstrate the performance of the proposed method on the simulated data. The experiments on real datasets with different b-values also show the effectiveness of the proposed method for noise reduction in DTI. CONCLUSIONS: The proposed approach effectively removes the noise in DTI, and has high potential to be applied in clinical oncology.


Assuntos
Imagem de Tensor de Difusão , Aumento da Imagem/métodos , Aprendizado de Máquina , Razão Sinal-Ruído , Animais , Encéfalo , Haplorrinos , Humanos , Imageamento Tridimensional
17.
Opt Express ; 22(5): 4932-43, 2014 Mar 10.
Article in English | MEDLINE | ID: mdl-24663832

ABSTRACT

This paper describes a novel algorithm to encrypt two color images into a single indistinguishable image in the quaternion gyrator domain. The phase masks used for encryption are obtained with an iterative phase retrieval algorithm. Subsequently, the encrypted image is generated via cascaded quaternion gyrator transforms with different rotation angles. The parameters of the quaternion gyrator transforms and the phases serve as encryption keys. Knowing these keys, the original color images can be fully recovered. Numerical simulations have demonstrated the validity of the proposed encryption system as well as its robustness against loss of data and additive Gaussian noise.

18.
Quant Imaging Med Surg ; 14(3): 2370-2390, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545083

ABSTRACT

Background: Dual-energy computed tomography (CT) can provide a range of image information beyond conventional CT through virtual monoenergetic images (VMIs). The purpose of this study was to investigate the impact of material decomposition in detector-based spectral CT on radiomics features, and the effectiveness of using deep learning-based image synthesis to improve the reproducibility of radiomics features. Methods: In this paper, spectral CT image data from 45 esophageal cancer patients were collected retrospectively. First, we computed the correlation coefficients of radiomics features between conventional kilovoltage peak (kVp) CT images and VMIs. Then, a wavelet loss-enhanced CycleGAN (WLL-CycleGAN) with paired loss terms was developed to synthesize virtual monoenergetic CT images from the corresponding conventional single-energy CT (SECT) images, in order to improve radiomics reproducibility. Finally, radiomic features in 6 different categories, including gray-level co-occurrence matrix (GLCM), gray-level difference matrix (GLDM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighborhood gray-tone difference matrix (NGTDM), and wavelet, were extracted from the gross tumor volumes of the conventional single-energy CT, synthetic virtual monoenergetic CT images, and virtual monoenergetic CT images. Comparison of errors between the VMIs and synthetic VMIs (sVMIs) suggested that the proposed deep learning method improved the radiomic feature accuracy. Results: Material decomposition in dual-layer dual-energy CT (DECT) can substantially influence the reproducibility of radiomic features, and the degree of impact is feature dependent. The average reduction of radiomics errors for the 15 patients in the testing sets was 96.9% for first-order, 12.1% for GLCM, 12.9% for GLDM, 15.7% for GLRLM, 50.3% for GLSZM, 53.4% for NGTDM, and 6% for wavelet features.
Conclusions: The work revealed that material decomposition has a significant effect on the radiomic feature values. The deep learning-based method reduced the influence of material decomposition in VMIs and might improve the robustness and reproducibility of radiomic features in esophageal cancer. Quantitative results demonstrated that our proposed wavelet loss-enhanced paired CycleGAN outperforms the original CycleGAN.
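The abstract describes the wavelet loss only at a high level. A minimal sketch of one plausible form, a single-level Haar decomposition with an L1 penalty on each sub-band, is shown below; the function names and the choice of Haar with equal sub-band weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation band
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_l1_loss(synthetic, target):
    """Mean absolute error summed over the Haar sub-bands of two images."""
    return sum(np.abs(s - t).mean()
               for s, t in zip(haar_dwt2(synthetic), haar_dwt2(target)))

# Identical images give zero loss; any mismatch gives a positive penalty.
x = np.random.rand(64, 64)
assert wavelet_l1_loss(x, x) == 0.0
```

Penalizing sub-bands separately lets a generator be punished for blurring high-frequency texture even when the pixel-space loss is small, which is the usual motivation for adding a wavelet term to an image-synthesis objective.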

19.
Article in English | MEDLINE | ID: mdl-38809720

ABSTRACT

The Segment Anything Model (SAM) is a foundation model that has demonstrated impressive results in natural image segmentation. However, its performance remains suboptimal for medical image segmentation, particularly when delineating lesions with irregular shapes and low contrast. This can be attributed to the significant domain gap between medical images and the natural images on which SAM was originally trained. In this paper, we propose an adaptation of SAM specifically tailored for lesion segmentation, termed LeSAM. LeSAM first learns medical-specific domain knowledge through an efficient adaptation module and integrates it with the general knowledge obtained from the pre-trained SAM. Subsequently, this merged knowledge is used to generate lesion masks with a modified mask decoder implemented as a lightweight U-shaped network, which enables better delineation of lesion boundaries while remaining easy to train. We conduct comprehensive experiments on lesion segmentation tasks spanning several image modalities: CT scans, MRI scans, ultrasound images, dermoscopic images, and endoscopic images. Our proposed method achieves superior performance compared with previous state-of-the-art methods on 8 of the 12 lesion segmentation tasks and competitive performance on the remaining 4. Additionally, ablation studies validate the effectiveness of the proposed adaptation module and modified decoder.
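The abstract does not specify the internal design of the "efficient adaptation module". A common realization of such a module, used here purely as an illustrative assumption and not as LeSAM's actual architecture, is a residual bottleneck adapter (down-project, non-linearity, up-project) attached to frozen backbone features, so that only a small number of weights are trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(x, w_down, w_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.
    Only w_down and w_up would be trained; the backbone stays frozen."""
    h = np.maximum(x @ w_down, 0.0)       # down-projection + ReLU
    return x + h @ w_up                   # residual connection

d_model, d_bottleneck = 256, 16           # bottleneck keeps trainable weights small
w_down = rng.normal(0.0, 0.02, (d_model, d_bottleneck))
w_up = np.zeros((d_bottleneck, d_model))  # zero-init: adapter starts as identity

tokens = rng.normal(size=(10, d_model))   # stand-in for frozen SAM features
out = adapter(tokens, w_down, w_up)
assert np.allclose(out, tokens)           # identity at initialization
```

The zero-initialized up-projection means training starts from the unmodified pre-trained behavior, which is the usual way residual adapters inject domain-specific knowledge without disturbing the general knowledge of the frozen model.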

20.
Cogn Neurodyn ; 18(3): 1215-1225, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826671

ABSTRACT

An epileptic seizure can usually be divided into three stages: interictal, preictal, and ictal. The transition from interictal to ictal activity in the brain, however, involves complex interactions between inhibition and excitation in groups of neurons. To explore this mechanism at the level of a single population, this paper employed a neural mass model, the complete physiology-based model (cPBM), to reconstruct electroencephalographic (EEG) signals and to infer the changes in excitatory/inhibitory connections related to excitation-inhibition (E-I) balance, based on an open dataset recorded from ten epileptic patients. Since epileptic signals display distinctive spectral characteristics, spectral dynamic causal modelling (DCM) was applied to quantify them by maximizing the free energy over the power spectral density (PSD), thereby estimating the cPBM parameters. In addition, to address the local-maximum problem from which DCM may suffer, a hybrid deterministic DCM (H-DCM) approach was proposed, with a deterministic annealing-based scheme applied in two directions: the temperature introduced in the objective function is first gradually decreased to obtain a good initialization, and then gradually increased to search for a better estimate after each maximization.
The results showed that (i) reconstructed EEG signals from the three stages, together with their PSDs, can be reproduced from the estimated cPBM parameters; (ii) compared with DCM, traditional D-DCM, and anti-D-DCM, the proposed H-DCM achieves higher free energies and lower root mean square errors (RMSEs) and provides the best performance across all stages (e.g., the RMSEs between the PSD computed from the reconstructed EEG signal and the sample PSD obtained from the real EEG signal are 0.33 ± 0.08, 0.67 ± 0.37, and 0.78 ± 0.57 in the interictal, preictal, and ictal stages, respectively); and (iii) the transition from interictal to ictal activity can be explained by an increase in the connections from pyramidal cells to excitatory interneurons and from pyramidal cells to fast inhibitory interneurons, together with a decrease in the self-loop connection of the fast inhibitory interneurons in the cPBM. Moreover, the E-I balance, defined as the ratio between the excitatory connection from pyramidal cells to fast inhibitory interneurons and the inhibitory self-loop connection of the fast inhibitory interneurons, also increases significantly during the seizure transition. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09976-6.
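The two-direction annealing scheme is described only verbally in the abstract. Its temperature schedule can be sketched as follows; the function name, the temperature range, and the geometric spacing are illustrative assumptions, not the authors' actual settings:

```python
import numpy as np

def h_dcm_schedule(t_max=8.0, t_min=1.0, n_steps=5):
    """Two-direction temperature schedule in the spirit of H-DCM:
    anneal DOWN (t_max -> t_min) to reach a good initialization of the
    free-energy maximization, then anneal UP (t_min -> t_max) to escape
    the local maximum found after each maximization."""
    down = np.geomspace(t_max, t_min, n_steps)  # smoothed -> sharp objective
    up = np.geomspace(t_min, t_max, n_steps)    # sharp -> smoothed again
    return np.concatenate([down, up])

schedule = h_dcm_schedule()
assert schedule[0] > schedule[4]    # decreasing phase ends at t_min
assert schedule[5] < schedule[-1]   # increasing phase ends at t_max
```

At high temperature the tempered objective is smoother and easier to climb, so running the schedule down and then back up gives the optimizer both a broad search phase and a chance to leave a poor basin, which is the intuition the abstract attributes to H-DCM.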
