Results 1 - 20 of 109
1.
Quant Imaging Med Surg ; 14(3): 2370-2390, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38545083

ABSTRACT

Background: Dual-energy computed tomography (CT) can provide a range of image information beyond conventional CT through virtual monoenergetic images (VMIs). The purpose of this study was to investigate the impact of material decomposition in detector-based spectral CT on radiomics features and the effectiveness of deep learning-based image synthesis for improving the reproducibility of radiomics features. Methods: Spectral CT image data from 45 esophageal cancer patients were collected retrospectively. First, we computed the correlation coefficients of radiomics features between conventional kilovoltage peak (kVp) CT images and VMIs. Then, a wavelet loss-enhanced CycleGAN (WLL-CycleGAN) with paired loss terms was developed to synthesize virtual monoenergetic CT images from the corresponding conventional single-energy CT (SECT) images, in order to improve radiomics reproducibility. Finally, radiomic features in six categories, including gray-level co-occurrence matrix (GLCM), gray-level difference matrix (GLDM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighborhood gray-tone difference matrix (NGTDM), and wavelet, were extracted from the gross tumor volumes in conventional single-energy CT, synthetic virtual monoenergetic CT images, and virtual monoenergetic CT images. Comparison of errors between the VMI and synthetic VMI (sVMI) suggested that our proposed deep learning method improved radiomic feature accuracy. Results: Material decomposition of dual-layer dual-energy CT (DECT) can substantially influence the reproducibility of radiomic features, and the degree of impact is feature dependent. The average reduction of radiomics errors for 15 patients in the testing sets was 96.9% for first-order, 12.1% for GLCM, 12.9% for GLDM, 15.7% for GLRLM, 50.3% for GLSZM, 53.4% for NGTDM, and 6% for wavelet features.
Conclusions: The work revealed that material decomposition has a significant effect on the radiomic feature values. The deep learning-based method reduced the influence of material decomposition in VMIs and might improve the robustness and reproducibility of radiomic features in esophageal cancer. Quantitative results demonstrated that our proposed wavelet loss-enhanced paired CycleGAN outperforms the original CycleGAN.
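The reproducibility analysis described in the Methods can be illustrated with a minimal sketch: for each radiomics feature, correlate its values computed on conventional kVp CT with those computed on VMI across patients. The feature names and values below are hypothetical, not the paper's data.

```python
# A minimal sketch (hypothetical feature names and values) of the
# reproducibility check: per-feature correlation across patients between
# values measured on conventional kVp CT and on VMI.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def feature_reproducibility(kvp_features, vmi_features):
    """Map each feature name to its correlation across patients."""
    return {name: pearson(kvp_features[name], vmi_features[name])
            for name in kvp_features}

kvp = {"glcm_contrast": [1.0, 2.1, 3.0, 4.2], "glszm_zone": [5.0, 4.0, 3.1, 2.0]}
vmi = {"glcm_contrast": [1.1, 2.0, 3.2, 4.1], "glszm_zone": [2.0, 5.0, 3.0, 4.1]}
r = feature_reproducibility(kvp, vmi)
```

A feature whose correlation stays high across the two reconstructions would be considered reproducible; a low correlation flags sensitivity to material decomposition.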

2.
Neuroimage ; 285: 120490, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38103624

ABSTRACT

Identifying the location, spatial extent, and electrical activity of distributed brain sources in the context of epilepsy from ElectroEncephaloGraphy (EEG) recordings is a challenging task because of the highly ill-posed nature of the underlying Electrophysiological Source Imaging (ESI) problem. To guarantee a unique solution, most existing ESI methods address this inverse problem by imposing physiological constraints. This paper proposes an efficient ESI approach based on simulation-driven deep learning. Epileptic high-resolution 256-channel scalp EEG (Hr-EEG) signals are simulated in a realistic manner to train the proposed patient-specific model. More specifically, a computational neural mass model developed in our team is used to generate the temporal dynamics of the activity of each dipole, while the forward problem is solved using a patient-specific three-shell realistic head model and the boundary element method. A Temporal Convolutional Network (TCN) is used in the proposed model to capture local spatial patterns. To enable the model to observe the EEG signals at different scales, a multi-scale strategy is leveraged to capture both overall and fine-grained features by adjusting the convolutional kernel size. Then, a Long Short-Term Memory (LSTM) network is used to extract temporal dependencies among the computed spatial features. The performance of the proposed method is evaluated in three different scenarios of realistic synthetic interictal Hr-EEG data, as well as on real interictal Hr-EEG data acquired from three patients with drug-resistant partial epilepsy during their presurgical evaluation. A performance comparison study is also conducted with two other deep learning-based methods and four classical ESI techniques. The proposed model achieved a Dipole Localization Error (DLE) of 1.39 and a Normalized Hamming Distance (NHD) of 0.28 in the case of one patch with an SNR of 10 dB.
In the case of two uncorrelated patches with an SNR of 10 dB, obtained DLE and NHD were respectively 1.50 and 0.28. Even in the more challenging scenario of two correlated patches with an SNR of 10 dB, the proposed approach still achieved a DLE of 3.74 and an NHD of 0.43. The results obtained on simulated data demonstrate that the proposed method outperforms the existing methods for different signal-to-noise and source configurations. The good behavior of the proposed method is also confirmed on real interictal EEG data. The robustness with respect to noise makes it a promising and alternative tool to localize epileptic brain areas and to reconstruct their electrical activities from EEG signals.
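The two reported metrics can be sketched using their common definitions; the paper's exact matching convention between estimated and true patches may differ, so treat this as a hedged illustration.

```python
# Hedged sketch of the two evaluation metrics: DLE as the mean Euclidean
# distance between matched estimated and true dipoles, and NHD as the
# fraction of source positions whose on/off state disagrees.

def dipole_localization_error(est_pos, true_pos):
    """Mean Euclidean distance between matched estimated and true dipoles."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
             for p, q in zip(est_pos, true_pos)]
    return sum(dists) / len(dists)

def normalized_hamming_distance(est_active, true_active):
    """Hamming distance between binary activation maps, normalized by size."""
    return sum(int(a != b) for a, b in zip(est_active, true_active)) / len(est_active)

dle = dipole_localization_error([(0, 0, 0), (3, 4, 0)], [(0, 0, 1), (3, 0, 0)])
nhd = normalized_hamming_distance([1, 1, 0, 0], [1, 0, 0, 1])
```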


Subjects
Deep Learning , Drug Resistant Epilepsy , Epilepsy , Humans , Brain/diagnostic imaging , Epilepsy/diagnostic imaging , Electroencephalography/methods , Drug Resistant Epilepsy/diagnostic imaging , Brain Mapping/methods
3.
Radiat Oncol ; 18(1): 149, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37697360

ABSTRACT

BACKGROUND: This study aims to validate the effectiveness of linear regression for motion prediction of internal organs or tumors on 2D cine-MR and to present an online gating signal prediction scheme that can improve the accuracy of MR-guided radiotherapy for liver and lung cancer. MATERIALS AND METHODS: We collected 2D cine-MR sequences of 21 liver cancer patients and 10 lung cancer patients to develop a binary gating signal prediction algorithm that forecasts the crossing time of tumor motion traces relative to the target threshold. Both 0.4 s and 0.6 s prediction windows were tested using three linear predictors and three recurrent neural networks (RNNs), given the system delay of 0.5 s. Furthermore, an adaptive linear regression model was evaluated using only the first 30 s as the burn-in period, with the model parameters adapted during the online prediction process. The accuracy of the predicted traces was measured using amplitude metrics (MAE, RMSE, and R2), and in addition, we proposed three temporal metrics, namely crossing error, gating error, and gating accuracy, which are more relevant to the nature of the gating signals. RESULTS: In both the 0.6 s and 0.4 s prediction cases, linear regression outperformed the other methods, demonstrating significantly smaller amplitude errors than the RNNs (P < 0.05). The proposed algorithm with adaptive linear regression had the best performance, with an average gating accuracy of 98.3% and 98.0% and a gating error of 44 ms and 45 ms for liver cancer and lung cancer patients, respectively. CONCLUSION: A functional online gating control scheme was developed with adaptive linear regression that is both more cost-efficient and more accurate than sophisticated RNN-based methods in all studied metrics.
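The gating idea can be sketched with an adaptive affine predictor refitted online over a sliding history of observed (past, future) sample pairs; the horizon plays the role of the 0.4-0.6 s prediction window. The history length, horizon, and threshold below are illustrative, not the paper's settings.

```python
# Illustrative sketch of adaptive-linear-regression gating: an affine
# predictor y(t + h) ~ a*y(t) + b is refitted over a sliding history, and the
# binary gate is 1 when the predicted amplitude is at or below the threshold.

def fit_affine(xs, ys):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return a, my - a * mx

def gating_signal(trace, horizon, history, threshold):
    gate = []
    for t in range(len(trace)):
        # Training pairs (trace[i], trace[i + horizon]) fully observed by time t.
        pairs = [(trace[i], trace[i + horizon])
                 for i in range(max(0, t - horizon - history), t - horizon + 1)]
        if len(pairs) < 2:
            gate.append(0)  # burn-in: beam off until the model can be fitted
            continue
        a, b = fit_affine([p for p, _ in pairs], [q for _, q in pairs])
        gate.append(1 if a * trace[t] + b <= threshold else 0)
    return gate

trace = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
gate = gating_signal(trace, horizon=2, history=5, threshold=6.5)
```

On this linear ramp the fitted predictor becomes exact after the burn-in, so the gate opens only while the predicted future amplitude stays below the threshold.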


Subjects
Liver Neoplasms , Lung Neoplasms , Radiation Oncology , Humans , Movement , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Magnetic Resonance Imaging
4.
Med Phys ; 50(12): 7779-7790, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37387645

ABSTRACT

BACKGROUND: The main application of [18F]FDG-PET (18F-FDG-PET) and CT images in oncology is tumor identification and quantification. Combining PET and CT images to mine pulmonary perfusion information for functional lung avoidance radiation therapy (FLART) is desirable but remains challenging. PURPOSE: To develop a deep-learning-based (DL) method to combine 18F-FDG-PET and CT images for producing pulmonary perfusion images (PPI). METHODS: Pulmonary technetium-99m-labeled macroaggregated albumin SPECT (PPI-SPECT), 18F-FDG-PET, and CT images obtained from 53 patients were enrolled. CT and PPI-SPECT images were rigidly registered, and the registration displacement was subsequently used to align 18F-FDG-PET and PPI-SPECT images. The left/right lung was separated and rigidly registered again to improve the registration accuracy. A DL model based on the 3D U-Net architecture was constructed to directly combine multi-modality 18F-FDG-PET and CT images for producing PPI (PPI-DLM). The 3D U-Net was used as the basic architecture, and the input was expanded from a single channel to a dual channel to combine the multi-modality images. For comparative evaluation, 18F-FDG-PET images were also used alone to generate PPI-DLPET. Sixty-seven samples were randomly selected for training and cross-validation, and 36 were used for testing. The Spearman correlation coefficient (rs) and multi-scale structural similarity index measure (MS-SSIM) between PPI-DLM/PPI-DLPET and PPI-SPECT were computed to assess the statistical and perceptual image similarities. The Dice similarity coefficient (DSC) was calculated to determine the similarity between high-/low-functional lung (HFL/LFL) volumes. RESULTS: The voxel-wise rs and MS-SSIM of PPI-DLM/PPI-DLPET were 0.78 ± 0.04/0.57 ± 0.03 and 0.93 ± 0.01/0.89 ± 0.01 for cross-validation, and 0.78 ± 0.11/0.55 ± 0.18 and 0.93 ± 0.03/0.90 ± 0.04 for testing.
PPI-DLM/PPI-DLPET achieved average DSC values of 0.78 ± 0.03/0.64 ± 0.02 for HFL and 0.83 ± 0.01/0.72 ± 0.03 for LFL in the training dataset, and 0.77 ± 0.11/0.64 ± 0.12 and 0.82 ± 0.05/0.72 ± 0.06 in the testing dataset. PPI-DLM yielded a stronger correlation and higher MS-SSIM with PPI-SPECT than PPI-DLPET (p < 0.001). CONCLUSIONS: The DL-based method integrates lung metabolic and anatomical information for producing PPI and significantly improved accuracy over the method based on metabolic information alone. The generated PPI-DLM can be applied to pulmonary perfusion volume segmentation, which is potentially beneficial for FLART treatment plan optimization.
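Two of the similarity measures used above can be sketched on flattened voxel lists and binary lung masks (all values illustrative); MS-SSIM is omitted, as it requires a multi-scale image pyramid.

```python
# Sketch of the Spearman rank correlation and Dice similarity coefficient
# used in the evaluation; inputs stand in for flattened perfusion volumes
# and thresholded high-functional-lung masks.

def ranks(values):
    """1-based ranks, ties assigned their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    return 2 * inter / (sum(mask_a) + sum(mask_b))

rho = spearman([0.1, 0.4, 0.2, 0.9, 0.7], [0.2, 0.5, 0.1, 0.8, 0.9])
dsc = dice([1, 0, 1, 1], [1, 1, 1, 0])
```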


Subjects
Deep Learning , Fluorodeoxyglucose F18 , Humans , Lung , Perfusion , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods
5.
Comput Methods Programs Biomed ; 238: 107614, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37244233

ABSTRACT

BACKGROUND AND OBJECTIVE: Accurate and efficient segmentation of thyroid nodules on ultrasound images is critical for computer-aided nodule diagnosis and treatment. On ultrasound images, convolutional neural networks (CNNs) and Transformers, which are widely used for natural images, cannot obtain satisfactory segmentation results, because they either fail to capture precise boundaries or fail to segment small objects. METHODS: To address these issues, we propose a novel Boundary-preserving assembly Transformer UNet (BPAT-UNet) for ultrasound thyroid nodule segmentation. In the proposed network, a Boundary point supervision module (BPSM), which adopts two novel self-attention pooling approaches, is designed to enhance boundary features and generate ideal boundary points. Meanwhile, an Adaptive multi-scale feature fusion module (AMFFM) is constructed to fuse features and channel information at different scales. Finally, to fully integrate high-frequency local and low-frequency global characteristics, the Assembled transformer module (ATM) is placed at the bottleneck of the network. Deformable features and features-among computation are introduced into AMFFM and ATM to characterize their correlation. As designed and eventually demonstrated, BPSM and ATM promote the proposed BPAT-UNet to further constrain boundaries, whereas AMFFM assists in detecting small objects. RESULTS: Compared to other classical segmentation networks, the proposed BPAT-UNet displays superior segmentation performance in visualization results and evaluation metrics. Significant improvement of segmentation accuracy was shown on the public thyroid dataset TN3K, with a Dice similarity coefficient (DSC) of 81.64% and a 95th-percentile asymmetric Hausdorff distance (HD95) of 14.06, whereas the results on our private dataset were a DSC of 85.63% and an HD95 of 14.53.
CONCLUSIONS: This paper presents a method for thyroid ultrasound image segmentation, which achieves high accuracy and meets the clinical requirements. Code is available at https://github.com/ccjcv/BPAT-UNet.
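The HD95 metric reported above can be sketched as the 95th percentile (nearest-rank) of symmetric nearest-neighbour distances between two contours; small 2D point sets here stand in for contour voxels, and real implementations vary in percentile convention.

```python
# Sketch of the 95th-percentile Hausdorff distance (HD95) between two
# contours represented as 2D point sets.

def hd95(points_a, points_b, q=0.95):
    def directed(src, dst):
        return [min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for x2, y2 in dst)
                for x1, y1 in src]
    d = sorted(directed(points_a, points_b) + directed(points_b, points_a))
    idx = min(len(d) - 1, int(q * (len(d) - 1) + 0.5))  # nearest-rank index
    return d[idx]

identical = hd95([(0, 0), (1, 0)], [(0, 0), (1, 0)])
shifted = hd95([(0, 0)], [(0, 3)])
```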


Subjects
Thyroid Nodule , Humans , Thyroid Nodule/diagnostic imaging , Ultrasonography , Benchmarking , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted
6.
J Xray Sci Technol ; 31(4): 757-775, 2023.
Article in English | MEDLINE | ID: mdl-37212059

ABSTRACT

BACKGROUND: In view of the health risks posed by X-ray radiation, the main goal of the present research is to achieve high-quality CT images while reducing X-ray radiation. In recent years, convolutional neural networks (CNNs) have shown excellent performance in removing low-dose CT noise. However, previous work mainly focused on deepening CNN architectures and improving feature extraction, without considering the fusion of features from the frequency domain and the image domain. OBJECTIVE: To address this issue, we propose to develop and test a new LDCT image denoising method based on a dual-domain fusion deep convolutional neural network (DFCNN). METHODS: This method deals with two domains, namely the DCT domain and the image domain. In the DCT domain, we design a new residual CBAM network to enhance the internal and external relations of different channels while reducing noise, to promote richer image structure information. For the image domain, we propose a top-down multi-scale codec network as the denoising network to obtain more acceptable edges and textures while capturing multi-scale information. Then, the feature images of the two domains are fused by a combination network. RESULTS: The proposed method was validated on the Mayo dataset and the Piglet dataset. The denoising algorithm is optimal in both subjective and objective evaluation indexes compared with other state-of-the-art methods reported in previous studies. CONCLUSIONS: The study results demonstrate that with the new fusion-model denoising, the results in both the image domain and the DCT domain are better than those of models developed using features extracted in the single image domain.


Subjects
Neural Networks, Computer , Tomography, X-Ray Computed , Animals , Swine , Tomography, X-Ray Computed/methods , Signal-To-Noise Ratio , Algorithms , Image Processing, Computer-Assisted/methods
7.
BMC Oral Health ; 23(1): 191, 2023 04 01.
Article in English | MEDLINE | ID: mdl-37005593

ABSTRACT

BACKGROUND: The purpose of this study was to evaluate the accuracy of automatic cephalometric landmark localization and measurements using cephalometric analysis via artificial intelligence (AI) compared with computer-assisted manual analysis. METHODS: Reconstructed lateral cephalograms (RLCs) from cone-beam computed tomography (CBCT) in 85 patients were selected. Computer-assisted manual analysis (Dolphin Imaging 11.9) and AI automatic analysis (Planmeca Romexis 6.2) were used to locate 19 landmarks and obtain 23 measurements. Mean radial error (MRE) and successful detection rate (SDR) values were calculated to assess the accuracy of automatic landmark digitization. Paired t tests and Bland-Altman plots were used to compare the differences and consistency in cephalometric measurements between the manual and automatic analysis programs. RESULTS: The MRE for the 19 cephalometric landmarks was 2.07 ± 1.35 mm with the automatic program. The average SDRs within 1 mm, 2 mm, 2.5 mm, 3 mm, and 4 mm were 18.82%, 58.58%, 71.70%, 82.04%, and 91.39%, respectively. Soft tissue landmarks (1.54 ± 0.85 mm) were the most consistent, while dental landmarks (2.37 ± 1.55 mm) showed the most variation. In total, 15 out of 23 measurements were within the clinically acceptable level of accuracy, 2 mm or 2°. The rates of consistency within the 95% limits of agreement were above 90% for all measurement parameters. CONCLUSION: Automatic analysis software collects cephalometric measurements almost effectively enough to be acceptable in clinical work. Nevertheless, automatic cephalometry is not yet capable of completely replacing manual tracing. Additional manual supervision and adjustment of automatic programs can increase accuracy and efficiency.
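The two landmark metrics above can be sketched directly: MRE averages the Euclidean distances between predicted and reference landmarks, and SDR is the fraction of landmarks whose radial error falls within a tolerance. The coordinates below are illustrative values in millimetres.

```python
# Sketch of mean radial error (MRE) and successful detection rate (SDR)
# for paired predicted/reference landmark coordinates.

def radial_errors(pred, ref):
    return [sum((a - b) ** 2 for a, b in zip(p, r)) ** 0.5
            for p, r in zip(pred, ref)]

def mre(pred, ref):
    errs = radial_errors(pred, ref)
    return sum(errs) / len(errs)

def sdr(pred, ref, tol_mm):
    errs = radial_errors(pred, ref)
    return sum(1 for e in errs if e <= tol_mm) / len(errs)

pred = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
ref = [(0.0, 1.0), (3.0, 0.0), (1.0, 1.0)]
```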


Subjects
Artificial Intelligence , Software , Cephalometry/methods , Reproducibility of Results , Radiography , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods
8.
J Med Imaging (Bellingham) ; 10(2): 024001, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36875637

ABSTRACT

Purpose: Segmentation of vascular structures in preoperative computed tomography (CT) is a preliminary step for computer-assisted endovascular navigation. It is challenging when contrast medium enhancement is reduced or impossible, as in the case of endovascular abdominal aneurysm repair for patients with severe renal impairment. In non-contrast-enhanced CTs, segmentation is currently hampered by low contrast, similar topological forms, and size imbalance. To tackle these problems, we propose a novel fully automatic approach based on a convolutional neural network. Approach: The proposed method fuses features from different dimensions by three kinds of mechanisms, i.e., channel concatenation, dense connection, and spatial interpolation. The fusion mechanisms serve to enhance features in non-contrast CTs, where the boundary of the aorta is ambiguous. Results: All of the networks are validated by three-fold cross-validation on our dataset of non-contrast CTs, which contains 5749 slices in total from 30 individual patients. Our methods achieve a Dice score of 88.7% as the overall performance, which is better than the results reported in related works. Conclusions: The analysis indicates that our methods yield competitive performance by overcoming the above-mentioned problems in most general cases. Further, experiments on our non-contrast CTs demonstrate the superiority of the proposed methods, especially in low-contrast, similar-shaped, and extreme-sized cases.

9.
J Digit Imaging ; 36(4): 1808-1825, 2023 08.
Article in English | MEDLINE | ID: mdl-36914854

ABSTRACT

Computed tomography (CT) is an imaging technique extensively used in medical treatment, but an excessive radiation dose in a CT scan harms the human body. Decreasing the radiation dose results in increased noise and artifacts in the reconstructed image, blurring internal tissue and edge details. To obtain high-quality CT images, we present a multi-scale feature fusion network (MSFLNet) for low-dose CT (LDCT) denoising. In our MSFLNet, we combine multiple feature extraction modules, effective noise reduction modules, and fusion modules constructed using an attention mechanism into a horizontally connected multi-scale structure as the overall architecture of the network, which is used to construct feature maps at all levels and scales. We define a novel composite loss function for LDCT denoising, composed of a pixel-level loss based on MS-SSIM-L1 and an edge-based edge loss. In short, our approach learns a rich set of features that combine contextual information from multiple scales while maintaining the spatial details of denoised CT images. Our experimental results indicate that, compared with existing methods, the new model achieves a peak signal-to-noise ratio (PSNR) of 33.6490 and a structural similarity (SSIM) of 0.9174 on CT images of the AAPM dataset, and it also achieves good results on the Piglet dataset at different doses. The results also show that the method removes noise and artifacts while effectively preserving the structure and texture information of CT images.


Subjects
Artifacts , Tomography, X-Ray Computed , Animals , Humans , Swine , Radiation Dosage , Tomography, X-Ray Computed/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods , Algorithms
10.
Med Phys ; 50(5): 2816-2834, 2023 May.
Article in English | MEDLINE | ID: mdl-36791315

ABSTRACT

BACKGROUND: With the rapid development of deep learning technology, deep neural networks can effectively enhance the performance of computed tomography (CT) reconstruction. A commonly used way to construct CT reconstruction networks is to unroll conventional iterative reconstruction (IR) methods into convolutional neural networks (CNNs). However, most unrolling methods primarily unroll the fidelity term of IR methods into CNNs, without unrolling the prior terms; the prior terms are typically replaced directly by neural networks. PURPOSE: In conventional IR methods, the prior terms play a vital role in improving the visual quality of reconstructed images. Unrolling hand-crafted prior terms into CNNs may provide a more specialized unrolling approach to further improve the performance of CT reconstruction. In this work, a primal-dual network (PD-Net) is proposed by unrolling both the data fidelity term and the total variation (TV) prior term, which effectively preserves the image edges and textures in the reconstructed images. METHODS: By deriving the Chambolle-Pock (CP) algorithm instance for CT reconstruction, we observed that the TV prior updates the reconstructed images with its divergences in each iteration of the solution process. Based on this observation, CNNs were applied to yield the divergences of the feature maps for the reconstructed image generated in each iteration. Additionally, a loss function was applied to the predicted divergences of the reconstructed image to guarantee that the CNNs' outputs were the divergences of the corresponding feature maps in the iteration. In this manner, the proposed CNNs play the same role in the PD-Net as the TV prior in the IR methods, so the TV prior in the CP algorithm instance can be directly unrolled into CNNs. RESULTS: The datasets from the Low-Dose CT Image and Projection Data and the Piglet dataset were employed to assess the effectiveness of our proposed PD-Net.
Compared with conventional CT reconstruction methods, our proposed method effectively preserves the structural and textural information in reference to ground truth. CONCLUSIONS: The experimental results show that our proposed PD-Net framework is feasible for the implementation of CT reconstruction tasks. Owing to the promising results yielded by our proposed neural network, this study is intended to inspire further development of unrolling approaches by enabling the direct unrolling of hand-crafted prior terms to CNNs.
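The divergence role attributed to the TV prior can be illustrated with the standard forward-difference gradient and its negative-adjoint divergence, the operator pair used in Chambolle-Pock TV solvers. This is a numerical sketch on nested-list "images", not the paper's network code.

```python
# Sketch of the discrete gradient/divergence pair from TV regularization:
# div is the negative adjoint of grad, and div(grad u) is the discrete
# Laplacian that drives the TV update at each iteration.

def grad(u):
    """Forward differences, zero on the far boundary (Neumann-style)."""
    h, w = len(u), len(u[0])
    dx = [[u[i][j + 1] - u[i][j] if j + 1 < w else 0.0 for j in range(w)]
          for i in range(h)]
    dy = [[u[i + 1][j] - u[i][j] if i + 1 < h else 0.0 for j in range(w)]
          for i in range(h)]
    return dx, dy

def div(dx, dy):
    """Discrete divergence, the negative adjoint of grad."""
    h, w = len(dx), len(dx[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] += dx[i][j] - (dx[i][j - 1] if j > 0 else 0.0)
            out[i][j] += dy[i][j] - (dy[i - 1][j] if i > 0 else 0.0)
    return out

u = [[1.0, 2.0], [3.0, 5.0]]
dx, dy = grad(u)
lap = div(dx, dy)  # discrete Laplacian of u under these boundary conditions
```

Note that the entries of `lap` sum to zero, as expected for a discrete divergence with these boundary conditions.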


Subjects
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Animals , Swine , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Algorithms , Phantoms, Imaging
11.
Phys Med Biol ; 68(9)2023 04 26.
Article in English | MEDLINE | ID: mdl-36652722

ABSTRACT

Accurate and robust anatomical landmark localization is a mandatory and crucial step in deformation diagnosis and treatment planning for patients with craniomaxillofacial (CMF) malformations. In this paper, we propose a trainable end-to-end cephalometric landmark localization framework for cone-beam computed tomography (CBCT) scans, referred to as CMF-Net, which combines an appearance branch built on transformers, a geometric constraint branch, and the adaptive wing (AWing) loss. More precisely: (1) we decompose the localization task into two branches: the appearance branch integrates transformers for identifying the exact positions of candidates, while the geometric constraint branch at low resolution allows the implicit spatial relationships to be effectively learned from the reduced training data. (2) We use the AWing loss to measure the difference between the pixel values of the target heatmaps and the automatically predicted heatmaps. We verify our CMF-Net by identifying the 24 most clinically relevant landmarks on 150 dental CBCT scans with complicated scenarios collected from real-world clinics. Comprehensive experiments show that it performs better than state-of-the-art deep learning methods, with an average localization error of 1.108 mm (the clinically acceptable precision range being 1.5 mm) and a correct landmark detection rate of 79.28%. Our CMF-Net is time-efficient and able to locate skull landmarks with high accuracy and significant robustness. This approach could be applied in 3D cephalometric measurement, analysis, and surgical planning.


Subjects
Imaging, Three-Dimensional , Spiral Cone-Beam Computed Tomography , Humans , Imaging, Three-Dimensional/methods , Algorithms , Anatomic Landmarks , Reproducibility of Results , Cone-Beam Computed Tomography/methods
12.
MAGMA ; 36(5): 837-847, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36715885

ABSTRACT

OBJECTIVES: To assess the performance of different algorithms for quantification of the intravoxel incoherent motion (IVIM) parameters D, f, and D* in vertebral bone marrow (VBM). MATERIALS AND METHODS: Five algorithms were studied: four deterministic algorithms based on the least-squares (LSQ) method (the One-Step method and three segmented methods: Two-Step, Three-Step, and Fixed-D*) and a Bayesian probabilistic algorithm. Numerical simulations and in vivo quantification of the IVIM parameters D, f, and D* in vertebral bone marrow were performed on six healthy volunteers. One-way repeated-measures analysis of variance (ANOVA) followed by Bonferroni's multiple comparison test (p = 0.05) was applied. RESULTS: In numerical simulations, the Bayesian algorithm provided the best estimation of D, f, and D* compared with the deterministic algorithms. For in vivo VBM-IVIM, the values of D and f estimated by the Bayesian algorithm were close to those of the One-Step method, in contrast to the three segmented methods. DISCUSSION: The comparison of the five algorithms indicates that the Bayesian algorithm provides the best estimation of VBM-IVIM parameters, for both numerical simulations and in vivo data.
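A segmented ("Two-Step"-style) IVIM fit for the bi-exponential model S(b) = S0[(1 - f)·exp(-bD) + f·exp(-bD*)] can be sketched as follows. The b-value cutoff and the D* search grid are illustrative choices, not the paper's settings.

```python
# Hedged sketch of a segmented IVIM fit: a high-b log-linear fit yields D and
# the perfusion fraction f, then a 1D grid search recovers D* (pseudo-diffusion).
import math

def fit_ivim_segmented(bvals, signal, b_cut=200.0):
    # Step 1: log-linear fit on high b-values, where the fast D* component has
    # decayed, yields D and the extrapolated intercept S0*(1 - f).
    hb = [(b, math.log(s)) for b, s in zip(bvals, signal) if b >= b_cut]
    n = len(hb)
    mb = sum(b for b, _ in hb) / n
    ml = sum(l for _, l in hb) / n
    slope = (sum((b - mb) * (l - ml) for b, l in hb)
             / sum((b - mb) ** 2 for b, _ in hb))
    D = -slope
    f = 1.0 - math.exp(ml - slope * mb) / signal[0]  # assumes bvals[0] == 0
    # Step 2: grid search for D* minimizing the squared error over the curve.
    def sse(dstar):
        return sum((signal[0] * ((1 - f) * math.exp(-b * D)
                                 + f * math.exp(-b * dstar)) - s) ** 2
                   for b, s in zip(bvals, signal))
    dstar = min((k * 1e-3 for k in range(1, 101)), key=sse)
    return D, f, dstar

# Synthetic single-voxel signal with D = 1e-3, f = 0.1, D* = 20e-3.
bvals = [0, 10, 20, 50, 100, 200, 400, 600, 800]
signal = [0.9 * math.exp(-b * 1e-3) + 0.1 * math.exp(-b * 20e-3) for b in bvals]
D, f, dstar = fit_ivim_segmented(bvals, signal)
```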


Subjects
Diffusion Magnetic Resonance Imaging , Image Processing, Computer-Assisted , Humans , Diffusion Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Bone Marrow/diagnostic imaging , Bayes Theorem , Algorithms , Motion
13.
Cancer Med ; 12(2): 1228-1236, 2023 01.
Article in English | MEDLINE | ID: mdl-35766144

ABSTRACT

BACKGROUND: Manual cytological diagnosis for early esophageal squamous cell carcinoma (early ESCC) and high-grade intraepithelial neoplasia (HGIN) is unsatisfactory. Herein, we introduce an artificial intelligence (AI)-assisted cytological diagnosis for such lesions. METHODS: Low-grade squamous intraepithelial lesion or worse was set as the diagnostic threshold for AI-assisted diagnosis. The performance of AI-assisted diagnosis was evaluated and compared with that of manual diagnosis. Feasibility in large-scale screening was also assessed. RESULTS: AI-assisted diagnosis of abnormal cells was superior to manual reading, with higher efficiency for each slide (50.9 ± 0.8 s vs 236.8 ± 3.9 s, p = 1.52 × 10^-76) and better interobserver agreement (93.27% [95% CI, 92.76%-93.74%] vs 65.29% [95% CI, 64.35%-66.22%], p = 1.03 × 10^-84). AI-assisted detection showed higher diagnostic accuracy (96.89% [92.38%-98.57%] vs 72.54% [65.85%-78.35%], p = 1.42 × 10^-14), sensitivity (99.35% [95.92%-99.97%] vs 68.39% [60.36%-75.48%], p = 7.11 × 10^-15), and negative predictive value (NPV) (97.06% [82.95%-99.85%] vs 40.96% [30.46%-52.31%], p = 1.42 × 10^-14). Specificity and positive predictive value (PPV) did not differ significantly. In community-based screening, AI-assisted diagnosis demonstrated a smaller proportion of participants of interest (3.73% [79/2117] vs 12.84% [272/2117], p = 1.59 × 10^-58), higher consistency between cytology and endoscopy (40.51% [32/79] vs 12.13% [33/272], p = 1.54 × 10^-8), higher specificity (97.74% [96.98%-98.32%] vs 88.52% [87.05%-89.84%], p = 3.19 × 10^-58), and higher PPV (40.51% [29.79%-52.15%] vs 12.13% [8.61%-16.75%], p = 1.54 × 10^-8). Sensitivity and NPV did not differ significantly. AI-assisted diagnosis as primary screening significantly reduced the average cost of detecting positive cases. CONCLUSION: Our study provides a novel cytological method for detecting and screening early ESCC and HGIN.
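The screening metrics quoted above all follow from a 2x2 confusion table; the sketch below computes them from illustrative counts, not the study's data.

```python
# Sketch of standard diagnostic metrics from true/false positive and
# negative counts.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=8, fp=2, fn=2, tn=88)
```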


Subjects
Carcinoma in Situ , Esophageal Neoplasms , Esophageal Squamous Cell Carcinoma , Squamous Intraepithelial Lesions , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/pathology , Esophageal Squamous Cell Carcinoma/diagnosis , Esophageal Neoplasms/diagnosis , Artificial Intelligence , Squamous Intraepithelial Lesions/diagnosis
14.
Phys Med Biol ; 67(24)2022 12 15.
Article in English | MEDLINE | ID: mdl-36541494

ABSTRACT

Objective. Plan-of-the-day (PoD) adaptive radiation therapy (ART) is based on a library of treatment plans, from which, at each treatment fraction, the PoD is selected using daily images. However, this strategy is limited by PoD selection uncertainties. This work aimed to propose and evaluate a workflow to automatically and quantitatively identify the PoD for cervix cancer ART based on daily CBCT images. Approach. The quantification was based on the segmentation of the main structures of interest in the CBCT images (clinical target volume [CTV], rectum, bladder, and bowel bag) using a deep learning model. The PoD was then selected from the treatment plan library according to the geometrical coverage of the CTV. For the evaluation, the resulting PoD was compared to the one obtained using reference CBCT delineations. Main results. In experiments on a database of 23 patients with 272 CBCT images, the proposed method obtained agreement between the reference PoD and the automatically identified PoD for 91.5% of treatment fractions (99.6% when considering a 5% margin on CTV coverage). Significance. The proposed workflow automatically selected the PoD for ART using deep-learning methods. The results showed the ability of the proposed process to identify the optimal PoD in a treatment plan library.
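The selection rule described above can be sketched with voxel sets standing in for binary masks: each library plan carries a target volume, and the plan whose target best covers the daily CTV is picked. Plan names and voxel sets below are illustrative.

```python
# Sketch of plan-of-the-day selection by geometrical CTV coverage.

def ctv_coverage(plan_target, ctv):
    """Fraction of CTV voxels inside the plan's target volume."""
    return len(ctv & plan_target) / len(ctv)

def select_pod(plan_library, ctv):
    """Name of the library plan with maximal CTV coverage."""
    return max(plan_library, key=lambda name: ctv_coverage(plan_library[name], ctv))

ctv = {(0, 0), (0, 1), (1, 0), (1, 1)}
library = {
    "small": {(0, 0), (0, 1)},
    "medium": {(0, 0), (0, 1), (1, 0)},
    "large": {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)},
}
pod = select_pod(library, ctv)
```

In practice a tie-breaking rule (e.g. preferring the smallest fully covering plan, to spare organs at risk) would be layered on top of the raw coverage criterion.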


Subjects
Radiotherapy, Intensity-Modulated , Spiral Cone-Beam Computed Tomography , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Urinary Bladder , Radiotherapy, Intensity-Modulated/methods , Radiotherapy Dosage , Cone-Beam Computed Tomography/methods
15.
J Opt Soc Am A Opt Image Sci Vis ; 39(10): 1929-1938, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36215566

ABSTRACT

In low-dose computed tomography (LDCT) denoising tasks, it is often difficult to balance edge/detail preservation and noise/artifact reduction. To solve this problem, we propose a dual convolutional neural network (CNN) based on edge feature extraction (Ed-DuCNN) for LDCT denoising. Ed-DuCNN consists of two branches. One branch is the edge feature extraction subnet (Edge_Net), which fully extracts the edge details in the image. The other branch is the feature fusion subnet (Fusion_Net), which introduces an attention mechanism to fuse edge features and noisy image features. Specifically, shallow edge-specific detail features are first extracted by trainable Sobel convolutional blocks and are then integrated into Edge_Net together with the LDCT images to obtain deep edge detail features. Finally, the input image, shallow edge detail features, and deep edge detail features are fused in Fusion_Net to generate the final denoised image. The experimental results show that the proposed Ed-DuCNN achieves competitive performance in terms of quantitative metrics and visual perceptual quality compared with state-of-the-art methods.


Subjects
Neural Networks, Computer; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods
16.
Front Oncol ; 12: 900340, 2022.
Article in English | MEDLINE | ID: mdl-35965563

ABSTRACT

Prostate cancer diagnosis relies on ultrasound-guided puncture for pathological cell extraction. However, determining the accurate prostate location remains challenging for two reasons: (1) the prostate boundary in ultrasound images is often ambiguous; and (2) radiologists' delineations typically occupy multiple pixels, producing many spurious points around the actual contour. In this paper we propose a boundary structure-preserving U-Net (BSP U-Net) to achieve a precise prostate contour. BSP U-Net incorporates a prostate shape prior into the traditional U-Net. The shape prior is built by a key point selection module, an active shape model-based method, which is then plugged into the traditional U-Net structure to achieve prostate segmentation. Experiments were conducted on two datasets: the PH2 + ISBI 2016 challenge and our private prostate ultrasound dataset. On the PH2 + ISBI 2016 challenge, the method achieved a Dice similarity coefficient (DSC) of 95.94% and a Jaccard coefficient (JC) of 88.58%. On prostate contouring, it achieved a pixel accuracy of 97.05%, a mean intersection over union of 93.65%, a DSC of 92.54%, and a JC of 93.16%. The experimental results show that the proposed BSP U-Net performs well on both the PH2 + ISBI 2016 challenge and prostate ultrasound image segmentation, and outperforms other state-of-the-art methods.
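The two overlap metrics reported here (and in several of the other entries) are standard; a minimal sketch of how DSC and JC are computed from binary masks, including the identity DSC = 2·JC/(1 + JC) that links them:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum()))

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard coefficient (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(inter / np.logical_or(a, b).sum())
```

Because DSC = 2·JC/(1 + JC), the DSC is always at least as large as the JC for the same pair of masks.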

17.
Med Phys ; 49(10): 6527-6537, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35917213

ABSTRACT

BACKGROUND: Radiomics has been considered an imaging marker for capturing quantitative image information (QII). Introducing radiomics into image segmentation is desirable but challenging. PURPOSE: This study aims to develop and validate a radiomics-based framework for image segmentation (RFIS). METHODS: RFIS is designed using features (svfeatures) extracted from volumes (swvolumes) created by a sliding window. The 53 svfeatures are extracted from 11 phantom series. Outliers in the svfeature datasets are detected by isolation forest (iForest) and replaced with the mean value. The percentage coefficient of variation (%COV) is calculated to evaluate the reproducibility of the svfeatures. RFIS is constructed and applied to gross target volume (GTV) segmentation from the peritumoral region (GTV with a 10 mm margin) to assess its feasibility; 127 lung cancer images are enrolled. The test-retest method, a correlation matrix, and the Mann-Whitney U test (p < 0.05) are used to select non-redundant, statistically significant svfeatures from the reproducible svfeatures. The synthetic minority over-sampling technique is used to balance the minority group in the training sets. A support vector machine is employed for RFIS construction, tuned in the training set using 10-fold stratified cross-validation, and then evaluated in the test sets. The swvolumes with consistent classification results are grouped and merged. Mode filtering is performed to remove very small subvolumes and create relatively large regions of completely uniform character. RFIS performance is evaluated by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, and Dice similarity coefficient (DSC). RESULTS: 30,249 phantom and 145,008 patient image swvolumes were analyzed. Forty-nine svfeatures (92.45% of 53) showed excellent reproducibility (%COV < 15).
Forty-five of these (91.84% of 49), spanning five categories, passed test-retest analysis. Thirteen svfeatures (28.89% of 45) were selected for RFIS construction. RFIS showed an average sensitivity of 0.848 (95% CI: 0.844-0.883), a specificity of 0.821 (95% CI: 0.818-0.825), an accuracy of 83.48% (95% CI: 83.27%-83.70%), and an AUC of 0.906 (95% CI: 0.904-0.908) with cross-validation. In the test set, the sensitivity, specificity, accuracy, and AUC were 0.762 (95% CI: 0.754-0.770), 0.840 (95% CI: 0.837-0.844), 82.29% (95% CI: 81.90%-82.60%), and 0.877 (95% CI: 0.873-0.881), respectively. The GTV was segmented by grouping and merging swvolumes with identical classification results. The mean DSC after mode filtering was 0.707 ± 0.093 in the training sets and 0.688 ± 0.072 in the test sets. CONCLUSION: Reproducible svfeatures can capture the differences in QII among swvolumes. RFIS can be applied to swvolume classification, achieving image segmentation by grouping and merging swvolumes with similar QII.
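The %COV screening step used to flag reproducible svfeatures can be sketched as follows. The 15% threshold matches the abstract; the function names and the dict-of-repeated-measurements input format are illustrative assumptions:

```python
import numpy as np

def percent_cov(values) -> float:
    """Percentage coefficient of variation of repeated feature measurements:
    100 * sample standard deviation / |mean|."""
    values = np.asarray(values, dtype=float)
    return float(100.0 * values.std(ddof=1) / abs(values.mean()))

def reproducible_features(feature_table: dict, threshold: float = 15.0) -> list:
    """Keep the names of features whose %COV across repeated
    phantom scans falls below the reproducibility threshold."""
    return [name for name, vals in feature_table.items()
            if percent_cov(vals) < threshold]
```

Features surviving this screen would then go through the test-retest, correlation, and Mann-Whitney steps described above.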


Subjects
Lung Neoplasms; Tomography, X-Ray Computed; Humans; Phantoms, Imaging; Reproducibility of Results; Retrospective Studies; Support Vector Machine; Tomography, X-Ray Computed/methods
18.
Int J Neural Syst ; 32(7): 2250032, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35695914

ABSTRACT

Epilepsy is one of the most common neurological diseases and can seriously affect a patient's psychological well-being and quality of life. An accurate and reliable seizure prediction system can raise an alarm before epileptic seizures, giving patients and their caregivers sufficient time to take appropriate action. This study proposes an efficient deep learning-based seizure prediction system that anticipates seizure onset as early as possible. Handcrafted features extracted from prior knowledge and hidden deep features are complementarily fused in a feature fusion module, and the hybrid features are fed into a multiplicative long short-term memory (MLSTM) network to explore the temporal dependencies in EEG signals. A one-dimensional channel attention mechanism is applied to emphasize the more representative information in the multi-channel output of the MLSTM. Finally, a transfer learning strategy transfers the weights of a base model trained on the EEG data of all patients to the target patient model, which is then further trained using the EEG data of the target patient. The proposed method achieves an average sensitivity of 95.56% and a false positive rate (FPR) of 0.27/h on the SWEC-ETHZ intracranial EEG data. For the more challenging CHB-MIT scalp EEG database, an average sensitivity of 89.47% and an FPR of 0.34/h are obtained. Experimental results demonstrate that the proposed method has good robustness and generalization ability on both intracranial and scalp EEG signals.
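A minimal stand-in for the one-dimensional channel attention step, using a parameter-free softmax squeeze in place of the learned layers a real implementation would have; this is a simplified illustration of the idea (reweight channels so the more informative ones dominate), not the paper's module:

```python
import numpy as np

def channel_attention(features: np.ndarray) -> np.ndarray:
    """features: (channels, time) array, e.g. the MLSTM multi-channel output.
    Squeeze each channel to a scalar summary, convert the summaries into
    softmax weights, and rescale every channel by its weight."""
    squeeze = features.mean(axis=1)                        # (channels,)
    weights = np.exp(squeeze) / np.exp(squeeze).sum()      # softmax over channels
    return features * weights[:, None]                     # reweighted channels
```

In a trained attention module, the mapping from channel summaries to weights would be a small learned network (e.g. two fully connected layers) rather than a raw softmax.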


Subjects
Epilepsy; Quality of Life; Algorithms; Electroencephalography/methods; Epilepsy/diagnosis; Humans; Machine Learning; Neural Networks, Computer; Seizures/diagnosis
19.
Comput Methods Programs Biomed ; 221: 106840, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35550455

ABSTRACT

BACKGROUND AND OBJECTIVE: Recently, spectral Dynamic Causal Modelling (DCM) has been used increasingly to infer effective connectivity from epileptic intracranial electroencephalographic (iEEG) signals. In this context, the Physiology-Based Model (PBM), a neural mass model, is used as the generative model. However, previous studies have pointed out the inability of the PBM to properly describe iEEG signals with specific power spectral densities (PSDs), more precisely, PSDs with multiple peaks around the β and γ rhythms (i.e., the spectral characteristics at seizure onset). METHODS: To cope with this limitation, an alternative neural mass model, the complete PBM (cPBM), is investigated. Spectral DCM and two recent variants are used to evaluate the relevance of the cPBM over the PBM. RESULTS: The study is conducted on both simulated signals and real epileptic iEEG recordings. Our results confirm that, compared to the PBM, the cPBM shows (i) greater ability to model the desired PSDs and (ii) lower numerical complexity, whatever the method. CONCLUSIONS: Thanks to its intrinsic and extrinsic connectivity parameters, as well as the input to the fast inhibitory subpopulation, the cPBM provides a more expressive model of PSDs, leading to a better understanding of epileptic patterns and of DCM-based effective connectivity inference.
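The kind of multi-peaked PSD at issue, with peaks in both the β and γ bands, is easy to reproduce with a plain FFT periodogram; the sketch below builds a synthetic signal with 20 Hz (β) and 40 Hz (γ) components, purely to illustrate the spectral shape the cPBM must capture (it is unrelated to the DCM inversion itself):

```python
import numpy as np

def periodogram(signal: np.ndarray, fs: float):
    """One-sided power spectral density estimate via the FFT."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    return freqs, psd

fs = 256                                # sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)           # 4 s of signal
# β-band (20 Hz) and weaker γ-band (40 Hz) components.
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
freqs, psd = periodogram(x, fs)
```

A PSD like this one, with two distinct peaks, is the regime where the abstract reports the PBM falls short and the cPBM's extra connectivity parameters pay off.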


Subjects
Epilepsy; Nerve Net; Brain; Electroencephalography; Gamma Rhythm; Humans; Models, Neurological; Models, Theoretical; Seizures
20.
IEEE J Biomed Health Inform ; 26(7): 3015-3024, 2022 07.
Article in English | MEDLINE | ID: mdl-35259123

ABSTRACT

Accurate and robust cephalometric image analysis plays an essential role in orthodontic diagnosis, treatment assessment, and surgical planning. This paper proposes a novel landmark localization method for cephalometric analysis using multiscale image patch-based graph convolutional networks. In detail, image patches of the same size are hierarchically sampled from a Gaussian pyramid to preserve multiscale context information. Local appearance and shape information are combined into spatialized features with an attention module to enrich the node representations in the graph. The spatial relationships between landmarks are modeled by three-layer graph convolutional networks, and multiple landmarks are simultaneously updated and moved toward their targets in a cascaded coarse-to-fine process. Quantitative results on publicly available cephalometric X-ray images show superior performance compared with other state-of-the-art methods in terms of mean radial error and successful detection rate within various precision ranges. Our approach performs especially well within the clinically accepted 2 mm range, making it suitable for cephalometric analysis and orthognathic surgery.
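The cascaded coarse-to-fine update can be sketched as a loop in which each stage predicts a residual offset per landmark. The toy halfway-to-target predictor below stands in for the paper's graph convolutional networks, purely to show the update structure (all names illustrative):

```python
import numpy as np

def refine_landmarks(landmarks, predict_offset, n_stages: int = 3) -> np.ndarray:
    """Cascaded coarse-to-fine localization: each stage predicts a residual
    offset for every landmark and moves the estimates toward the targets,
    so early stages make coarse moves and later stages refine them."""
    pts = np.asarray(landmarks, dtype=float)
    for stage in range(n_stages):
        pts = pts + predict_offset(pts, stage)   # all landmarks move jointly
    return pts

# Toy "network": always predict half of the remaining distance to a known target.
target = np.array([[10.0, 10.0]])
halfway = lambda pts, stage: 0.5 * (target - pts)
```

In the actual method, `predict_offset` would be a per-stage graph convolutional network operating on the multiscale patch features, so that each landmark's move also depends on its neighbors in the graph.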


Subjects
Image Processing, Computer-Assisted; Cephalometry/methods; Humans; Image Processing, Computer-Assisted/methods; Radiography