Results 1 - 20 of 72
1.
Eur J Radiol ; 177: 111581, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38925042

ABSTRACT

PURPOSE: To develop and validate an artificial intelligence (AI) application in a clinical setting to decide whether dynamic contrast-enhanced (DCE) sequences are necessary in multiparametric prostate MRI. METHODS: This study was approved by the institutional review board, and the requirement for study-specific informed consent was waived. A mobile app was developed to integrate AI-based image quality analysis into the clinical workflow. An expert radiologist provided reference decisions. Diagnostic performance parameters (sensitivity and specificity) were calculated and inter-reader agreement was evaluated. RESULTS: Fully automated evaluation was possible in 87% of cases, and the application reached a sensitivity of 80% and a specificity of 100% in selecting patients for multiparametric MRI. In 2% of patients, the application falsely decided on omitting DCE. Compared with a technician (sensitivity 29%, specificity 98%) and resident radiologists (sensitivity 29%, specificity 93%), use of the application allowed a significant increase in sensitivity. CONCLUSION: The presented AI application accurately decides on a patient-specific MRI protocol based on image quality analysis, potentially allowing omission of DCE in the diagnostic workup of patients with suspected prostate cancer. This could streamline the workflow and optimize the time utilization of healthcare professionals.
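The sensitivity and specificity figures reported above follow directly from a confusion matrix. A minimal sketch of the definitions, using illustrative counts rather than the study's data:

```python
# Sketch: diagnostic performance from confusion-matrix counts.
# The counts below are hypothetical, chosen only to reproduce 80%/100%.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=8, fn=2, tn=50, fp=0)
print(sens, spec)  # 0.8 1.0
```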

2.
Sci Rep ; 14(1): 12526, 2024 05 31.
Article in English | MEDLINE | ID: mdl-38822074

ABSTRACT

Transcatheter aortic valve replacement (TAVR) is a widely used intervention for patients with severe aortic stenosis. Identifying high-risk patients is crucial due to potential postprocedural complications. Currently, this involves manual clinical assessment and time-consuming radiological assessment of preprocedural computed tomography (CT) images by an expert radiologist. In this study, we introduce a probabilistic model that predicts post-TAVR mortality automatically using unprocessed, preprocedural CT and 25 baseline patient characteristics. The model utilizes CT volumes by automatically localizing and extracting a region of interest around the aortic root and ascending aorta. It then extracts task-specific features with a 3D deep neural network and integrates them with patient characteristics to perform outcome prediction. As missing measurements or even missing CT images are common in TAVR planning, the proposed model is designed with a probabilistic structure to allow for marginalization over such missing information. Our model demonstrates an AUROC of 0.725 for predicting all-cause mortality during postprocedure follow-up on a cohort of 1449 TAVR patients. This performance is on par with what can be achieved with lengthy radiological assessments performed by experts. Thus, these findings underscore the potential of the proposed model in automatically analyzing CT volumes and integrating them with patient characteristics for predicting mortality after TAVR.


Subjects
Aortic Valve Stenosis ; Tomography, X-Ray Computed ; Transcatheter Aortic Valve Replacement ; Humans ; Transcatheter Aortic Valve Replacement/mortality ; Transcatheter Aortic Valve Replacement/methods ; Tomography, X-Ray Computed/methods ; Female ; Male ; Aged, 80 and over ; Aortic Valve Stenosis/surgery ; Aortic Valve Stenosis/mortality ; Aortic Valve Stenosis/diagnostic imaging ; Aged
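The AUROC of 0.725 reported above can be read as the probability that a randomly chosen patient who died receives a higher predicted risk than a randomly chosen survivor. A minimal rank-based sketch of that interpretation, with illustrative scores and labels:

```python
import numpy as np

# Sketch: AUROC as the Mann-Whitney pairwise-comparison statistic.
# Scores and labels are toy values, not the study's predictions.
def auroc(scores, labels):
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count pairwise wins of positives over negatives; ties count as half.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```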
3.
Radiol Med ; 129(6): 901-911, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38700556

ABSTRACT

PURPOSE: High PSMA expression might be correlated with structural characteristics, such as growth patterns on histopathology, that are not recognized by the human eye on MRI. Deep structural image analysis might be able to detect such differences and therefore predict whether a lesion would be PSMA positive. We therefore aimed to train a neural network based on PSMA PET/MRI scans to predict increased prostatic PSMA uptake from the axial T2-weighted sequence alone. MATERIAL AND METHODS: All patients undergoing simultaneous PSMA PET/MRI for PCa staging or biopsy guidance between April 2016 and December 2020 at our institution were selected. To increase the specificity of our model, the prostatic beds on PSMA PET scans were dichotomized into positive and negative regions using an SUV threshold greater than 4 to generate a PSMA PET map. A C-ENet was then trained on the T2 images of the training cohort to generate a predictive prostatic PSMA PET map. RESULTS: One hundred fifty-four PSMA PET/MRI scans were available (133 [68Ga]Ga-PSMA-11 and 21 [18F]PSMA-1007). Significant cancer was present in 127 of them. The whole dataset was divided into a training cohort (n = 124) and a test cohort (n = 30). The C-ENet was able to predict the PSMA PET map with a Dice similarity coefficient of 69.5 ± 15.6%. CONCLUSION: Increased prostatic PSMA uptake on PET might be estimated from T2 MRI alone. Further investigation with larger cohorts and external validation is needed to assess whether PSMA uptake can be predicted accurately enough to help in the interpretation of mpMRI.


Subjects
Deep Learning ; Magnetic Resonance Imaging ; Prostatic Neoplasms ; Humans ; Male ; Prostatic Neoplasms/diagnostic imaging ; Prostatic Neoplasms/pathology ; Aged ; Magnetic Resonance Imaging/methods ; Middle Aged ; Prostate/diagnostic imaging ; Positron-Emission Tomography/methods ; Retrospective Studies ; Glutamate Carboxypeptidase II/metabolism ; Antigens, Surface/metabolism ; Predictive Value of Tests ; Organ Size ; Gallium Radioisotopes ; Radiopharmaceuticals/pharmacokinetics
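The two quantitative steps named in this abstract — dichotomizing the PET volume at SUV > 4 and scoring the prediction with a Dice similarity coefficient — can be sketched on toy arrays as follows (this is an illustration of the definitions, not the authors' pipeline):

```python
import numpy as np

# Sketch: binary PSMA map via the SUV > 4 threshold, then Dice overlap
# between a reference map and a predicted map. Toy 2x2 "volumes".
def psma_map(suv_volume, threshold=4.0):
    return (np.asarray(suv_volume) > threshold).astype(np.uint8)

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

truth = psma_map([[0.5, 6.0], [5.1, 1.2]])
pred = np.array([[0, 1], [1, 1]], np.uint8)
print(dice(truth, pred))  # 0.8
```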
4.
Radiol Artif Intell ; 6(4): e230138, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38568094

ABSTRACT

Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning. Systematic review registration link: osf.io/nxaev. © RSNA, 2024.


Subjects
Deep Learning ; Magnetic Resonance Imaging ; Prostatic Neoplasms ; Humans ; Magnetic Resonance Imaging/methods ; Male ; Prostatic Neoplasms/diagnostic imaging ; Prostate/diagnostic imaging ; Prostate/anatomy & histology ; Image Interpretation, Computer-Assisted/methods
5.
Phys Imaging Radiat Oncol ; 27: 100464, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37497188

ABSTRACT

Background and purpose: The superior tissue contrast of magnetic resonance (MR) compared to computed tomography (CT) has led to increasing interest in MR-only radiotherapy, in which the dose calculation must be performed on a synthetic CT (sCT). Patient-specific quality assurance (PSQA) methods have not yet been established, and this study aimed to assess several software-based solutions. Materials and methods: A retrospective study was performed on 20 patients treated at an MR-Linac, who were selected to evenly cover four subcategories: (i) standard, (ii) air pockets, (iii) lung and (iv) implant cases. The neural network (NN) CycleGAN was adopted to generate a reference sCT, which was then compared to four PSQA methods: (A) water override of the body, (B) five tissue classes with bulk densities, (C) sCT generated by a separate NN (pix2pix) and (D) deformed CT. Results: The evaluation of the dose endpoints demonstrated that while all methods A-D provided statistically equivalent results (p = 0.05) within the 2% level for the standard cases (i), only methods C-D guaranteed this result over the whole cohort. The bulk density override was shown to be a valuable method in the absence of lung tissue within the beam path. Conclusion: The observations of this study suggest that an additional sCT generated by a separate NN is an appropriate tool for PSQA of an sCT in an MR-only workflow at an MR-Linac. The time and dose endpoint requirements, namely within 10 min and 2%, were met.
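PSQA method (B) above assigns a bulk density to each of five tissue classes. A minimal sketch of that idea on a toy label map — the class names and density values here are illustrative assumptions, not the study's calibration:

```python
import numpy as np

# Sketch: bulk-density override of a tissue-class map (method B).
# Density values (g/cm^3) are hypothetical placeholders.
BULK_DENSITY = {
    "air": 0.001,
    "lung": 0.26,
    "fat": 0.92,
    "soft_tissue": 1.02,
    "bone": 1.53,
}
CLASS_IDS = {0: "air", 1: "lung", 2: "fat", 3: "soft_tissue", 4: "bone"}

def bulk_override(class_map):
    """Replace each voxel's tissue-class label with its bulk density."""
    out = np.zeros_like(class_map, dtype=float)
    for cid, name in CLASS_IDS.items():
        out[class_map == cid] = BULK_DENSITY[name]
    return out

dens = bulk_override(np.array([[0, 3], [4, 1]]))  # density map in g/cm^3
```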

6.
Phys Imaging Radiat Oncol ; 27: 100471, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37497191

ABSTRACT

Background and purpose: Synthetic computed tomography (sCT) scans are necessary for dose calculation in magnetic resonance (MR)-only radiotherapy. While deep learning (DL) has shown remarkable performance in generating sCT scans from MR images, research has predominantly focused on high-field MR images. This study presents the first implementation of a DL model for sCT generation in head-and-neck (HN) cancer using low-field MR images; specifically, the use of vision transformers (ViTs) was explored. Materials and methods: The dataset consisted of 31 patients, yielding 196 pairs of deformably registered computed tomography (dCT) and MR scans. The latter were obtained using a balanced steady-state precession sequence on a 0.35 T scanner. Residual ViTs were trained separately on 2D axial, sagittal, and coronal slices, and the final sCTs were generated by averaging the models' outputs. Image similarity metrics, dose-volume histogram (DVH) deviations, and gamma analyses were computed on the test set (n = 6). The overlap between auto-contours on sCT scans and manual contours on MR images was evaluated for different organs-at-risk using the Dice score. Results: The median [range] test mean absolute error was 57 [37-74] HU. DVH deviations were below 1% for all structures. The median gamma passing rates exceeded 94% in the 2%/2 mm analysis (threshold = 90%). The median Dice scores were above 0.7 for all organs-at-risk. Conclusions: The clinical applicability of DL-based sCT generation from low-field MR images in HN cancer was demonstrated. High sCT-dCT similarity and dose metric accuracy were achieved, and the suitability of the sCT for organs-at-risk auto-delineation was shown.
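The headline image-similarity metric above, mean absolute error in HU between the sCT and the registered dCT, is straightforward to define. A minimal sketch on toy arrays, with an optional body mask:

```python
import numpy as np

# Sketch: mean absolute error (HU) between synthetic and reference CT.
# Toy 2x2 arrays stand in for real volumes.
def mae_hu(sct, dct, mask=None):
    sct, dct = np.asarray(sct, float), np.asarray(dct, float)
    diff = np.abs(sct - dct)
    if mask is not None:
        diff = diff[np.asarray(mask, bool)]  # restrict to body voxels
    return diff.mean()

print(mae_hu([[0, 40], [100, -1000]], [[10, 40], [60, -1000]]))  # 12.5
```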

7.
IEEE Trans Neural Netw Learn Syst ; 34(10): 6955-6967, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37027587

ABSTRACT

3-D object recognition has become an appealing research topic in the real world. However, most existing recognition models unreasonably assume that the categories of 3-D objects cannot change over time. This unrealistic assumption may result in significant performance degradation when such models learn new classes of 3-D objects consecutively, due to catastrophic forgetting of previously learned classes. Moreover, these models cannot explore which 3-D geometric characteristics are essential to alleviate catastrophic forgetting of old classes of 3-D objects. To tackle the above challenges, we develop a novel Incremental 3-D Object Recognition Network (InOR-Net), which can recognize new classes of 3-D objects continuously while overcoming catastrophic forgetting of old classes. Specifically, category-guided geometric reasoning is proposed to reason about local geometric structures with distinctive 3-D characteristics of each class by leveraging intrinsic category information. We then propose a novel critic-induced geometric attention mechanism to distinguish which 3-D geometric characteristics within each class are beneficial for overcoming catastrophic forgetting, while preventing the negative influence of useless 3-D characteristics. In addition, a dual adaptive fairness compensation strategy is designed to overcome the forgetting caused by class imbalance, by compensating the biased weights and predictions of the classifier. Comparison experiments verify the state-of-the-art performance of the proposed InOR-Net model on several public point cloud datasets.

8.
Med Image Anal ; 87: 102792, 2023 07.
Article in English | MEDLINE | ID: mdl-37054649

ABSTRACT

Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets for this, and obtaining them is a laborious task that requires clinical expertise. Semi/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local level representations along with global representations to achieve better accuracy. However, the impact of the existing local contrastive loss-based methods remains limited for learning good local representations because similar and dissimilar local regions are defined based on random augmentations and spatial proximity; not based on the semantic label of local regions due to lack of large-scale expert annotations in the semi/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel level features useful for segmentation by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground truth (GT) labels. In particular, we define the proposed contrastive loss to encourage similar representations for the pixels that have the same pseudo-label/GT label while being dissimilar to the representation of pixels with different pseudo-label/GT label in the dataset. We perform pseudo-label based self-training and train the network by jointly optimizing the proposed contrastive loss on both labeled and unlabeled sets and segmentation loss on only the limited labeled set. 
We evaluated the proposed approach on three public medical datasets of cardiac and prostate anatomies, and obtained high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with state-of-the-art semi-supervised and data augmentation methods and concurrent contrastive learning methods demonstrate the substantial improvement achieved by the proposed method. The code is made publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.


Subjects
Heart ; Pelvis ; Male ; Humans ; Prostate ; Semantics ; Supervised Machine Learning ; Image Processing, Computer-Assisted
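The core idea above — pulling together pixel embeddings that share a (pseudo-)label and pushing apart those that do not — can be sketched as a small label-guided InfoNCE-style loss. This is a simplified NumPy illustration in the spirit of the paper, not the authors' implementation:

```python
import numpy as np

# Sketch: label-guided pixel contrastive loss. Each row of `emb` is one
# pixel's feature vector; `labels` are its GT or pseudo-labels.
def pixel_contrastive_loss(emb, labels, tau=0.1):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize
    sim = emb @ emb.T / tau                                 # cosine / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i
        pos = others & (labels == labels[i])
        if not pos.any():
            continue
        # -log softmax of positive similarities over all other pixels
        log_denom = np.log(np.exp(sim[i, others]).sum())
        loss += np.mean(log_denom - sim[i, pos])
        count += 1
    return loss / count

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
# Loss is small here because same-label embeddings are already close.
print(pixel_contrastive_loss(emb, labels))
```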
9.
Med Phys ; 50(9): 5682-5697, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36945890

ABSTRACT

BACKGROUND: To test and validate novel CT techniques, such as texture analysis in radiomics, repeat measurements are required. Current anthropomorphic phantoms lack fine texture and true anatomic representation. 3D printing of iodinated ink on paper is a promising phantom manufacturing technique, as previously acquired or artificially created CT data can be used to generate realistic phantoms. PURPOSE: To present the design process of an anthropomorphic 3D-printed iodine ink phantom, highlighting the advantages and pitfalls of its use, and to analyze the phantom's X-ray attenuation properties and the influence of the printing process on imaging characteristics by comparison with the original input dataset. METHODS: Two patient CT scans and artificially generated test patterns were combined in a single dataset for phantom printing and cropped to a size of 26 × 19 × 30 cm3. This DICOM dataset was printed on paper using iodinated ink. The phantom was CT-scanned and compared to the original image dataset used for printing. The water-equivalent diameter of the phantom was compared to that of a patient cohort (N = 104). Iodine concentrations in the phantom were measured using dual-energy CT. Eighty-six radiomics features were extracted from 10 repeat phantom scans and from the input dataset; features were compared individually using histogram analysis and overall using principal component analysis (PCA). The frequency content was compared using the normalized spectrum modulus. RESULTS: Low-density structures are depicted incorrectly, while soft tissue structures show excellent visual accordance with the input dataset. Maximum deviations of around 30 HU between the original dataset and phantom HU values were observed. The phantom has X-ray attenuation properties comparable to a lightweight adult patient (∼54 kg, BMI 19 kg/m2). Iodine concentrations in the phantom varied between 0 and 50 mg/ml.
PCA of radiomics features shows that different tissue types separate into similar areas of the PCA representation in the phantom scans as in the input dataset. Individual feature analysis revealed a systematic shift of first-order radiomics features compared to the original dataset, while some higher-order radiomics features did not shift. The normalized frequency modulus |f(ω)| of the phantom data agrees well with the original data. However, relative to the maximum of the spectrum modulus, all frequencies occur systematically more often in the phantom than in the original dataset, especially mid-frequencies (e.g., for ω = 0.3942 mm-1, |f(ω)|_original = 0.09 · |f_max|_original and |f(ω)|_phantom = 0.12 · |f_max|_phantom). CONCLUSIONS: 3D iodine ink printing can be used to produce anthropomorphic phantoms with the water-equivalent diameter of a lightweight adult patient. Challenges include small residual air enclosures and the fidelity of HU values. For soft tissue, there is good agreement between the HU values of the phantom and the input dataset. Radiomics texture features of the phantom scans are similar to those of the input dataset, but systematic shifts of first-order radiomics features, due to differences in HU values, need to be considered. The paper substrate influences the spatial frequency distribution of the phantom scans. This phantom type is of very limited use for dual-energy CT analyses.


Subjects
Ink ; Tomography, X-Ray Computed ; Humans ; Tomography, X-Ray Computed/methods ; Phantoms, Imaging ; Printing, Three-Dimensional
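The water-equivalent diameter used above to compare the phantom with a patient cohort has a standard definition (as in AAPM Report 220): the water-equivalent area Aw sums (HU/1000 + 1) over the slice, and Dw is the diameter of a water disc with that area. A minimal sketch on a uniform water slice:

```python
import numpy as np

# Sketch: water-equivalent diameter of an axial CT slice.
# Aw = sum((HU/1000) + 1) * pixel_area;  Dw = 2 * sqrt(Aw / pi).
def water_equivalent_diameter(hu_slice, pixel_area_mm2):
    hu = np.asarray(hu_slice, float)
    aw = ((hu / 1000.0) + 1.0).sum() * pixel_area_mm2  # mm^2
    return 2.0 * np.sqrt(aw / np.pi)                   # mm

# A uniform water disc surrogate: every voxel at 0 HU, 1 mm^2 pixels.
slice_hu = np.zeros((100, 100))
print(round(water_equivalent_diameter(slice_hu, 1.0), 1))  # 112.8
```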
10.
Med Image Anal ; 83: 102599, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36327652

ABSTRACT

Despite recent progress in automatic medical image segmentation techniques, fully automatic results usually fail to meet clinically acceptable accuracy and thus typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network first produces an initial 2D segmentation for the chosen slice. The VMN then propagates the initial segmentation mask bidirectionally to all slices of the entire volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module is introduced to suggest the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN demonstrates two distinctive features: first, the memory-augmented network design offers our model the ability to quickly encode past segmentation information, which can be retrieved later for the segmentation of other slices; second, the quality assessment module enables the model to directly estimate the quality of each segmentation prediction, which allows for an active-learning paradigm in which users preferentially label the lowest-quality slice for multi-round refinement. The proposed network yields a robust interactive segmentation engine, which generalizes well to various types of user annotations (e.g., scribble, bounding box, extreme clicking). Extensive experiments have been conducted on three public medical image segmentation datasets (i.e., MSD, KiTS19, CVC-ClinicDB), and the results clearly confirm the superiority of our approach in comparison with state-of-the-art segmentation models. The code is made publicly available at https://github.com/0liliulei/Mem3D.

11.
Phys Imaging Radiat Oncol ; 24: 173-179, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36478992

ABSTRACT

Background and purpose: The requirement of computed tomography (CT) for radiotherapy planning may be bypassed by synthetic CT (sCT) generated from magnetic resonance (MR), which has recently led to the clinical introduction of MR-only radiotherapy for specific sites. Further developments are required for abdominal sCT, mostly due to the presence of mobile air pockets affecting the dose calculation. In this study we aimed to overcome this limitation for abdominal sCT at a low-field (0.35 T) hybrid MR-Linac. Materials and methods: A retrospective analysis was conducted enrolling 168 patients, corresponding to 215 MR-CT pairs. After applying the exclusion criteria, 152 volumetric images were used to train the cycle-consistent generative adversarial network (CycleGAN) and 34 to test the sCT. Image similarity metrics and dose recalculation analysis were performed. Results: The generated sCT faithfully reproduced the original CT, and the location of the air pockets agreed with the MR scan. The dose calculation did not require manual bulk density overrides, and the mean deviations of the dose-volume histogram dosimetric points were within 1% of the CT, without any outlier above 2%. The mean gamma passing rates were above 99% for the 2%/2 mm analysis, and no cases below 95% were observed. Conclusions: This study presented the implementation of CycleGAN for sCT generation in the abdominal region for a low-field hybrid MR-Linac. The sCT was shown to correctly allocate the electron density of the mobile air pockets, and the dosimetric analysis demonstrated the potential for future implementation of MR-only radiotherapy in the abdomen.

12.
Sci Rep ; 12(1): 4732, 2022 03 18.
Article in English | MEDLINE | ID: mdl-35304508

ABSTRACT

The usefulness of quantitative medical imaging features in clinical studies was once disputed. Nowadays, advancements in analysis techniques, for instance through machine learning, have made quantitative features progressively useful in diagnosis and research. Tissue characterisation is improved via "radiomics" features, whose extraction can be automated. Despite these advances, the stability of quantitative features remains an important open problem. As features can be highly sensitive to variations in acquisition details, it is not trivial to quantify stability and efficiently select stable features. In this work, we develop and validate a Computed Tomography (CT) simulator environment based on the publicly available ASTRA toolbox (www.astra-toolbox.com). We show that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images generated by the simulator are similar to those observed in a tandem phantom study. Additionally, we show that the variability is matched between a multi-center phantom study and simulated results. Consequently, we demonstrate that the simulator can be utilised to assess radiomics features' stability and discriminative power.


Subjects
Machine Learning ; Tomography, X-Ray Computed ; Phantoms, Imaging ; Retrospective Studies ; Tomography, X-Ray Computed/methods
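One simple way to operationalize "feature stability across repeat acquisitions", as studied above, is the coefficient of variation over repeats. A minimal sketch with an illustrative threshold and toy data (the paper's own stability criteria may differ):

```python
import numpy as np

# Sketch: flag radiomics features whose coefficient of variation (CoV)
# across repeat scans stays below a chosen threshold.
def stable_features(repeat_values, cov_threshold=0.1):
    """repeat_values: array (n_repeats, n_features). Returns boolean mask."""
    vals = np.asarray(repeat_values, float)
    cov = vals.std(axis=0) / np.abs(vals.mean(axis=0))
    return cov < cov_threshold

# Feature 0 barely varies across repeats; feature 1 fluctuates wildly.
reps = np.array([[10.0, 5.0], [10.2, 9.0], [9.8, 1.0]])
print(stable_features(reps))  # [ True False]
```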
13.
IEEE Trans Med Imaging ; 41(7): 1885-1896, 2022 07.
Article in English | MEDLINE | ID: mdl-35143393

ABSTRACT

Undersampling the k-space during MR acquisitions saves time but results in an ill-posed inversion problem with an infinite set of images as possible solutions. Traditionally, this is tackled as a reconstruction problem by searching for the single "best" image in this solution set according to some chosen regularization or prior. This approach, however, misses the possibility of other solutions and hence ignores the uncertainty in the inversion process. In this paper, we propose a method that instead returns multiple images which are possible under the acquisition model and the chosen prior, in order to capture the uncertainty in the inversion process. To this end, we introduce a low-dimensional latent space and model the posterior distribution of the latent vectors given the acquisition data in k-space, from which we can sample in the latent space and obtain the corresponding images. We use a variational autoencoder for the latent model and the Metropolis-adjusted Langevin algorithm for the sampling. We evaluate our method on two datasets: images from the Human Connectome Project and in-house measured multi-coil images, and compare to five alternative methods. Results indicate that the proposed method produces images that match the measured k-space data better than the alternatives while showing realistic structural variability. Furthermore, in contrast to the compared methods, the proposed method yields higher uncertainty in the undersampled phase-encoding direction, as expected.


Subjects
Connectome ; Image Processing, Computer-Assisted ; Algorithms ; Humans ; Image Processing, Computer-Assisted/methods ; Magnetic Resonance Imaging/methods
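The sampler named above, the Metropolis-adjusted Langevin algorithm (MALA), proposes a gradient-informed step and corrects it with a Metropolis-Hastings test. A minimal sketch targeting a standard Gaussian instead of the paper's latent posterior:

```python
import numpy as np

# Sketch: one MALA update. `log_p` and `grad_log_p` define the (unnormalized)
# target; here a standard Gaussian stands in for the latent posterior.
def mala_step(x, log_p, grad_log_p, eps, rng):
    prop = x + eps * grad_log_p(x) + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

    def log_q(a, b):  # log density of proposing a from b (asymmetric!)
        return -np.sum((a - b - eps * grad_log_p(b)) ** 2) / (4 * eps)

    log_alpha = log_p(prop) - log_p(x) + log_q(x, prop) - log_q(prop, x)
    return prop if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
log_p = lambda x: -0.5 * np.sum(x ** 2)  # standard Gaussian, unnormalized
grad = lambda x: -x

x = np.zeros(2)
samples = []
for _ in range(2000):
    x = mala_step(x, log_p, grad, eps=0.5, rng=rng)
    samples.append(x.copy())
samples = np.array(samples)
```

The chain's empirical mean and variance should approach 0 and 1, the moments of the target.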
14.
Comput Biol Med ; 142: 105215, 2022 03.
Article in English | MEDLINE | ID: mdl-34999414

ABSTRACT

BACKGROUND: Infection with human papillomavirus (HPV) is one of the most relevant prognostic factors in advanced oropharyngeal cancer (OPC) treatment. In this study we aimed to assess the diagnostic accuracy of a deep learning-based method for HPV status prediction in computed tomography (CT) images of advanced OPC. METHOD: An internal dataset and three public collections were employed (internal: n = 151; HNC1: n = 451; HNC2: n = 80; HNC3: n = 110). The internal and HNC1 datasets were used for training, whereas the HNC2 and HNC3 collections were used as external test cohorts. All CT scans were resampled to a 2 mm3 voxel resolution and a sub-volume of 72x72x72 voxels was cropped from each scan, centered around the tumor. A 2.5D input of size 72x72x3 voxels was then assembled by selecting the 2D slice containing the largest tumor area along the axial, sagittal and coronal planes, respectively. The convolutional neural network employed consisted of the first 5 modules of the Xception model and a small classification network. Ten-fold cross-validation was applied to evaluate training performance. At test time, soft majority voting was used to predict HPV status. RESULTS: A final training mean [range] area under the curve (AUC) of 0.84 [0.76-0.89], accuracy of 0.76 [0.64-0.83] and F1-score of 0.74 [0.62-0.83] were achieved. AUC/accuracy/F1-score values of 0.83/0.75/0.69 and 0.88/0.79/0.68 were achieved on the HNC2 and HNC3 test sets, respectively. CONCLUSION: Deep learning was successfully applied and validated in two external cohorts to predict HPV status in CT images of advanced OPC, proving its potential as a support tool in cancer precision medicine.


Subjects
Alphapapillomavirus ; Oropharyngeal Neoplasms ; Papillomavirus Infections ; Humans ; Neural Networks, Computer ; Oropharyngeal Neoplasms/diagnostic imaging ; Papillomaviridae ; Papillomavirus Infections/diagnostic imaging
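The 2.5D input assembly described above — taking, per anatomical plane, the slice with the largest tumor area and stacking the three slices as channels — can be sketched on a toy sub-volume (this illustrates the idea, not the authors' code):

```python
import numpy as np

# Sketch: build a 3-channel 2.5D input from a cropped volume and its tumor
# mask by picking the largest-tumor-area slice along each of the 3 axes.
def assemble_25d(volume, tumor_mask):
    channels = []
    for axis in range(3):
        # Tumor area per slice along this axis = sum over the other two axes.
        areas = tumor_mask.sum(axis=tuple(a for a in range(3) if a != axis))
        idx = int(np.argmax(areas))
        channels.append(np.take(volume, idx, axis=axis))
    return np.stack(channels, axis=-1)  # (H, W, 3)

vol = np.random.default_rng(1).random((8, 8, 8))
mask = np.zeros((8, 8, 8), int)
mask[3:6, 2:5, 4] = 1  # small toy lesion
inp = assemble_25d(vol, mask)
print(inp.shape)  # (8, 8, 3)
```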
15.
Invest Radiol ; 57(1): 33-43, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34074943

ABSTRACT

OBJECTIVES: To develop, test, and validate a body composition profiling algorithm for automated segmentation of body compartments in whole-body magnetic resonance imaging (wbMRI) and to investigate the influence of different acquisition parameters on performance and robustness. MATERIALS AND METHODS: A segmentation algorithm for subcutaneous and visceral adipose tissue (SCAT and VAT) and total muscle mass (TMM) was designed using a deep learning convolutional neural network with a U-net architecture. Twenty clinical wbMRI scans were manually segmented and used as training, validation, and test datasets. Segmentation performance was then tested on different data, including different magnetic resonance imaging protocols and scanners with and without use of contrast media. Test-retest reliability on 2 consecutive scans each of 16 healthy volunteers, as well as the impact of slice thickness, matrix resolution, and different coil settings, was investigated. The Sorensen-Dice coefficient (DSC) was used to measure the algorithm's performance with manual segmentations as the reference standard. Test-retest reliability and parameter effects were investigated by comparing the respective compartment volumes. Abdominal volumes were compared with published normative values. RESULTS: Algorithm performance measured by DSC was 0.93 (SCAT) to 0.77 (VAT) on the test dataset. Depending on the respective compartment, similar or slightly reduced performance was seen for other scanners and scan protocols (DSC ranging from 0.69-0.72 for VAT to 0.83-0.91 for SCAT). No significant differences in body composition profiling were seen on repeated volunteer scans (P = 0.88-1) or after variation of protocol parameters (P = 0.07-1). CONCLUSIONS: Body composition profiling from wbMRI using a deep learning-based convolutional neural network algorithm for automated segmentation of body compartments is generally possible.
First results indicate that segmentations as accurate as those of a manual expert, robust and reproducible across a range of different acquisition parameters, may be expected.


Subjects
Deep Learning ; Magnetic Resonance Imaging ; Algorithms ; Body Composition ; Humans ; Reproducibility of Results ; Whole Body Imaging
16.
Arthritis Care Res (Hoboken) ; 74(6): 929-936, 2022 06.
Article in English | MEDLINE | ID: mdl-33337584

ABSTRACT

OBJECTIVE: To study the longitudinal performance of fully automated cartilage segmentation in knees with radiographic osteoarthritis (OA), we evaluated the sensitivity to change in progressor knees from the Foundation for the National Institutes of Health OA Biomarkers Consortium between the automated and previously reported manual expert segmentation, and we determined whether differences in progression rates between predefined cohorts can be detected by the fully automated approach. METHODS: The OA Initiative Biomarker Consortium was a nested case-control study. Progressor knees had both medial tibiofemoral radiographic joint space width loss (≥0.7 mm) and a persistent increase in Western Ontario and McMaster Universities Osteoarthritis Index pain scores (≥9 on a 0-100 scale) after 2 years from baseline (n = 194), whereas non-progressor knees met neither criterion (n = 200). Deep-learning automated algorithms trained on radiographic OA knees or on knees of a healthy reference cohort (HRC) were used to automatically segment medial femorotibial compartment (MFTC) and lateral femorotibial cartilage on baseline and 2-year follow-up magnetic resonance imaging. Findings were compared with previously published manual expert segmentation. RESULTS: The mean ± SD MFTC cartilage loss in the progressor cohort was -181 ± 245 µm by manual segmentation (standardized response mean [SRM] -0.74), -144 ± 200 µm by the radiographic OA-based model (SRM -0.72), and -69 ± 231 µm by the HRC-based model segmentation (SRM -0.30). Cohen's d for rates of progression between the progressor and non-progressor cohorts was -0.84 (P < 0.001) for manual, -0.68 (P < 0.001) for the automated radiographic OA model, and -0.14 (P = 0.18) for the automated HRC model segmentation.
CONCLUSION: A fully automated deep-learning segmentation approach not only displays similar sensitivity to change of longitudinal cartilage thickness loss in knee OA as did manual expert segmentation but also effectively differentiates longitudinal rates of loss of cartilage thickness between cohorts with different progression profiles.
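The sensitivity-to-change statistics quoted above (SRM within the progressor cohort, Cohen's d between cohorts) are standard effect sizes computable from per-knee change scores. A minimal sketch, using made-up change values in micrometres rather than the study's data:

```python
import statistics

def standardized_response_mean(changes):
    """SRM: mean change divided by the sample SD of the change scores."""
    return statistics.mean(changes) / statistics.stdev(changes)

def cohens_d(group_a, group_b):
    """Cohen's d between two groups with a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical MFTC cartilage-thickness changes (µm), not the study's values
progressors = [-180.0, -250.0, -120.0, -300.0, -55.0]
non_progressors = [-20.0, 15.0, -40.0, 5.0, -10.0]
srm = standardized_response_mean(progressors)
d = cohens_d(progressors, non_progressors)
```

A negative SRM indicates loss over time; larger |d| means the progressor and non-progressor distributions of change are better separated.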


Subjects
Articular Cartilage, Deep Learning, Knee Osteoarthritis, Algorithms, Biomarkers, Articular Cartilage/diagnostic imaging, Articular Cartilage/pathology, Case-Control Studies, Disease Progression, Humans, Knee Joint/diagnostic imaging, Knee Joint/pathology, Magnetic Resonance Imaging/methods, National Institutes of Health (U.S.), Knee Osteoarthritis/diagnostic imaging, Knee Osteoarthritis/pathology, United States
17.
Nat Commun ; 12(1): 6205, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34707110

ABSTRACT

Accurate 3D representations of lithium-ion battery electrodes, in which the active particles, binder and pore phases are distinguished and labeled, can assist in understanding and ultimately improving battery performance. Here, we demonstrate a methodology for using deep-learning tools to achieve reliable segmentations of volumetric images of electrodes on which standard segmentation approaches fail due to insufficient contrast. We implement the 3D U-Net architecture for segmentation, and, to overcome the limitations of training data obtained experimentally through imaging, we show how synthetic learning data, consisting of realistic artificial electrode structures and their tomographic reconstructions, can be generated and used to enhance network performance. We apply our method to segment x-ray tomographic microscopy images of graphite-silicon composite electrodes and show it is accurate across standard metrics. We then apply it to obtain a statistically meaningful analysis of the microstructural evolution of the carbon-black and binder domain during battery operation.
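The synthetic-training-data idea can be illustrated with a toy generator: spherical "particles" placed in a volume yield a ground-truth label map, from which a low-contrast, noisy grayscale image is derived, mimicking why plain intensity thresholding fails on the real tomograms. A greatly simplified sketch (NumPy assumed; the paper's structure generators and tomographic forward models are far more elaborate):

```python
import numpy as np

def synthetic_electrode_volume(shape=(32, 32, 32), n_particles=8,
                               noise_sd=0.1, seed=None):
    """Toy labelled volume: spherical 'active particles' (label 1) in a
    'pore' background (label 0), plus a low-contrast noisy image."""
    rng = np.random.default_rng(seed)
    labels = np.zeros(shape, dtype=np.uint8)
    zz, yy, xx = np.indices(shape)
    for _ in range(n_particles):
        c = rng.uniform(0, np.array(shape))   # random particle centre
        r = rng.uniform(3, 6)                 # random particle radius
        mask = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= r**2
        labels[mask] = 1
    # Small intensity gap between phases + Gaussian noise: this is what
    # defeats simple thresholding and motivates a learned segmenter.
    image = 0.45 + 0.1 * labels + rng.normal(0, noise_sd, shape)
    return image.astype(np.float32), labels

image, labels = synthetic_electrode_volume(seed=0)
```

Pairs like `(image, labels)` can then augment or replace scarce hand-annotated tomograms when training a 3D U-Net.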

18.
Med Image Anal ; 74: 102208, 2021 12.
Article in English | MEDLINE | ID: mdl-34487984

ABSTRACT

Unsupervised abnormality detection is an appealing approach to identify patterns that are not present in training data without specific annotations for such patterns. In the medical imaging field, methods taking this approach have been proposed to detect lesions. The appeal of this approach stems from the fact that it does not require lesion-specific supervision and can potentially generalize to any sort of abnormal patterns. The principle is to train a generative model on images from healthy individuals to estimate the distribution of images of the normal anatomy, i.e., a normative distribution, and detect lesions as out-of-distribution regions. Restoration-based techniques that modify a given image by taking gradient ascent steps with respect to a posterior distribution composed of a normative distribution and a likelihood term recently yielded state-of-the-art results. However, these methods do not explicitly model ascent directions with respect to the normative distribution, i.e. normative ascent direction, which is essential for successful restoration. In this work, we introduce a novel approach for unsupervised lesion detection by modeling normative ascent directions. We present different modelling options based on the defined ascent directions with local Gaussians. We further extend the proposed method to efficiently utilize 3D information, which has not been explored in most existing works. We experimentally show that the proposed method provides higher accuracy in detection and produces more realistic restored images. The performance of the proposed method is evaluated against baselines on publicly available BRATS and ATLAS stroke lesion datasets; the detection accuracy of the proposed method surpasses the current state-of-the-art results.
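The restoration principle described above can be reduced to a toy example with a per-pixel Gaussian normative model: gradient ascent on the log posterior log p(x) + log p(y|x) pulls the image toward normal anatomy while the likelihood term keeps it close to the observation, and the residual flags out-of-distribution pixels. A sketch under these simplifying assumptions (not the paper's deep generative model):

```python
import numpy as np

def restore(y, mu, sigma=0.2, tau=0.1, steps=200, lr=0.01):
    """Restoration-based detection with a per-pixel normative prior
    N(mu, sigma^2) and likelihood y ~ N(x, tau^2).  Gradient ascent on
    log p(x) + log p(y|x); the residual |y - x| scores anomalies."""
    x = y.copy()
    for _ in range(steps):
        grad_prior = -(x - mu) / sigma**2   # normative ascent direction
        grad_lik = (y - x) / tau**2         # data-fidelity term
        x += lr * (grad_prior + grad_lik)
    return x, np.abs(y - x)

mu = np.zeros(16)                  # "healthy anatomy": all zeros
y = np.zeros(16)
y[5] = 2.0                         # one lesion-like outlier pixel
x_restored, score = restore(y, mu)
```

Normal pixels are left untouched, while the outlier is pulled toward the normative mean, so its restoration residual dominates the anomaly map.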


Subjects
Stroke, Humans
19.
Insights Imaging ; 12(1): 112, 2021 Aug 09.
Article in English | MEDLINE | ID: mdl-34370164

ABSTRACT

OBJECTIVES: To develop and validate an artificial intelligence algorithm to decide on the necessity of dynamic contrast-enhanced sequences (DCE) in prostate MRI. METHODS: This study was approved by the institutional review board and requirement for study-specific informed consent was waived. A convolutional neural network (CNN) was developed on 300 prostate MRI examinations. Consensus of two expert readers on the necessity of DCE acted as reference standard. The CNN was validated in a separate cohort of 100 prostate MRI examinations from the same vendor and 31 examinations from a different vendor. Sensitivity/specificity were calculated using ROC curve analysis and results were compared to decisions made by a radiology technician. RESULTS: The CNN reached a sensitivity of 94.4% and specificity of 68.8% (AUC: 0.88) for the necessity of DCE, correctly assigning 44%/34% of patients to a biparametric/multiparametric protocol. In 2% of all patients, the CNN incorrectly decided on omitting DCE. With a technician reaching a sensitivity of 63.9% and specificity of 89.1%, the use of the CNN would allow for an increase in sensitivity of 30.5%. The CNN achieved an AUC of 0.73 in a set of examinations from a different vendor. CONCLUSIONS: The CNN would have correctly assigned 78% of patients to a biparametric or multiparametric protocol, with only 2% of all patients requiring re-examination to add DCE sequences. Integrating this CNN in clinical routine could render the requirement for on-table monitoring obsolete by performing contrast-enhanced MRI only when needed.
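The reported operating characteristics follow from a confusion matrix at a chosen score threshold. A minimal sketch with hypothetical CNN scores, where label 1 means "DCE necessary" per the expert reference standard (not the study's data):

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the rule: score >= threshold -> positive."""
    tp = sum(s >= threshold for s, l in zip(scores, labels) if l == 1)
    fn = sum(s < threshold for s, l in zip(scores, labels) if l == 1)
    tn = sum(s < threshold for s, l in zip(scores, labels) if l == 0)
    fp = sum(s >= threshold for s, l in zip(scores, labels) if l == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-examination CNN scores and expert reference labels
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.4]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sens, spec = sens_spec(scores, labels, threshold=0.5)
```

Sweeping the threshold over all observed scores and plotting sensitivity against 1 - specificity yields the ROC curve from which the AUC is computed.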

20.
Oper Neurosurg (Hagerstown) ; 21(4): 242-247, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34131753

ABSTRACT

BACKGROUND: Current intraoperative orientation methods either rely on preoperative imaging, are resource-intensive to implement, or difficult to interpret. Real-time, reliable anatomic recognition would constitute another strong pillar on which neurosurgeons could rest for intraoperative orientation. OBJECTIVE: To assess the feasibility of machine vision algorithms to identify anatomic structures using only the endoscopic camera without prior explicit anatomo-topographic knowledge in a proof-of-concept study. METHODS: We developed and validated a deep learning algorithm to detect the nasal septum, the middle turbinate, and the inferior turbinate during endoscopic endonasal approaches based on endoscopy videos from 23 different patients. The model was trained in a weakly supervised manner on 18 and validated on 5 patients. Performance was compared against a baseline consisting of the average positions of the training ground truth labels using a semiquantitative 3-tiered system. RESULTS: We used 367 images extracted from the videos of 18 patients for training, as well as 182 test images extracted from the videos of another 5 patients for testing the fully developed model. The prototype machine vision algorithm was able to identify the 3 endonasal structures qualitatively well. Compared to the baseline model based on location priors, the algorithm demonstrated slightly but statistically significantly (P < .001) improved annotation performance. CONCLUSION: Automated recognition of anatomic structures in endoscopic videos by means of a machine vision model using only the endoscopic camera without prior explicit anatomo-topographic knowledge is feasible. This proof of concept encourages further development of fully automated software for real-time intraoperative anatomic guidance during surgery.
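The location-prior baseline described above (predicting each structure at the average position of its training ground-truth labels) is straightforward to reproduce. A sketch with hypothetical normalized bounding boxes, not the study's annotations:

```python
def location_prior(train_boxes):
    """Constant-prediction baseline: the element-wise mean of the training
    ground-truth boxes, given as (x_min, y_min, x_max, y_max) in [0, 1]."""
    n = len(train_boxes)
    return tuple(sum(b[i] for b in train_boxes) / n for i in range(4))

# Hypothetical nasal-septum boxes from a few training frames
septum_train = [(0.40, 0.10, 0.60, 0.90),
                (0.42, 0.12, 0.58, 0.88),
                (0.38, 0.08, 0.62, 0.92)]
baseline_box = location_prior(septum_train)
```

A learned detector only adds value if it beats this constant prediction on held-out frames, which is why the study tests against it.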


Subjects
Algorithms, Endoscopy, Humans, Pituitary Gland, Software