Results 1 - 20 of 64
1.
Bioengineering (Basel) ; 11(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38790302

ABSTRACT

The progress of incorporating deep learning in the field of medical image interpretation has been greatly hindered by the tremendous cost and time associated with generating ground truth for supervised machine learning, alongside concerns about the inconsistent quality of acquired images. Active learning offers a potential solution to the problem of expanding dataset ground truth by algorithmically choosing the most informative samples for ground truth labeling. Still, this effort incurs human labeling costs, which must be minimized. Furthermore, automatic labeling approaches employing active learning often exhibit overfitting tendencies, selecting samples closely aligned with the training set distribution and excluding out-of-distribution samples that could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%.
Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.
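The sample-selection step described above, choosing the most informative unlabeled samples for ground-truth labeling, is commonly implemented as uncertainty sampling over model predictions. The sketch below is an illustrative stand-in, not the paper's actual acquisition function; the function names and the entropy criterion are assumptions:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(pred_batch, k):
    """Uncertainty sampling: rank unlabeled samples by predictive
    entropy and return the indices of the k most uncertain ones."""
    ranked = sorted(range(len(pred_batch)),
                    key=lambda i: entropy(pred_batch[i]),
                    reverse=True)
    return ranked[:k]
```

Samples whose predicted class distribution is closest to uniform are sent for expert labeling first.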

2.
Neurotoxicol Teratol ; 102: 107336, 2024.
Article in English | MEDLINE | ID: mdl-38402997

ABSTRACT

Microglial cells mediate diverse homeostatic, inflammatory, and immune processes during normal development and in response to cytotoxic challenges. During these functional activities, microglial cells undergo distinct numerical and morphological changes in different tissue volumes in both rodent and human brains. However, it remains unclear how these cytostructural changes in microglia correlate with region-specific neurochemical functions. To better understand these relationships, neuroscientists need accurate, reproducible, and efficient methods for quantifying microglial cell number and morphologies in histological sections. To address this deficit, we developed a novel deep learning (DL)-based classification and stereology approach that links the appearance of Iba1-immunostained microglial cells at low magnification (20×) with the total number of cells in the same brain region, based on unbiased stereology counts as ground truth. Once DL models are trained, total microglial cell numbers in specific regions of interest can be estimated and treatment groups predicted in a high-throughput manner (<1 min) using only low-power images from test cases, without the need for time- and labor-intensive stereology counts or morphology ratings in test cases. Results for this DL-based automatic stereology approach on two datasets (39 mouse brains in total) showed >90% accuracy, 100% repeatability (test-retest) and 60× greater efficiency than manual stereology (<1 min vs. ∼60 min) using the same tissue sections. Ongoing and future work includes use of this DL-based approach to establish clear neurodegeneration profiles in age-related human neurological diseases and related animal models.


Subjects
Deep Learning , Microglia , Animals , Mice , Humans , Brain/pathology , Cell Count/methods
3.
Cancers (Basel) ; 15(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37190264

ABSTRACT

Histopathological classification in prostate cancer remains a challenge with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into tiles (14,509 in total) and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, F1-score of 0.91, and AUC of 0.96 in a baseline test (52 patients), while the cancer grade discrimination of GS3 from GS4 had an accuracy of 68% and AUC of 0.71 (40 patients).
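The accuracy and F1-score reported above follow the standard definitions. For reference, F1 computed from confusion-matrix counts (the counts in the test are hypothetical, not the study's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```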

4.
bioRxiv ; 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36865216

ABSTRACT

Morphology-based classification of cells in the bone marrow aspirate (BMA) is a key step in the diagnosis and management of hematologic malignancies. However, it is time-intensive and must be performed by expert hematopathologists and laboratory professionals. We curated a large, high-quality dataset of 41,595 hematopathologist consensus-annotated single-cell images extracted from BMA whole slide images (WSIs) containing 23 morphologic classes from the clinical archives of the University of California, San Francisco. We trained a convolutional neural network, DeepHeme, to classify images in this dataset, achieving a mean area under the curve (AUC) of 0.99. DeepHeme was then externally validated on WSIs from Memorial Sloan Kettering Cancer Center, with a similar AUC of 0.98, demonstrating robust generalization. When compared to individual hematopathologists from three different top academic medical centers, the algorithm outperformed all three. Finally, DeepHeme reliably identified cell states such as mitosis, paving the way for image-based quantification of mitotic index in a cell-specific manner, which may have important clinical applications.

5.
Article in English | MEDLINE | ID: mdl-36327184

ABSTRACT

The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
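Segmentation methods like those reviewed are typically scored by intersection-over-union (IoU) against ground-truth masks. A minimal sketch over flattened binary masks (the function name is an assumption):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two flattened binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```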

6.
J Chem Neuroanat ; 124: 102134, 2022 10.
Article in English | MEDLINE | ID: mdl-35839940

ABSTRACT

Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can effectively achieve comparable accuracy to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach using a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our deep learning (DL) models to automatically count cells using unbiased stereology methods. This update increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting methods, without a requirement for extra expert time. The second contribution of this work is a Multi-channel Input and Multi-channel Output (MIMO) method using a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as disector stacks). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding false negatives from overlapping cells in EDF images without the shortcomings of 3D and recurrent DL models. The contribution overcomes the issue of under-counting errors with EDF images due to overlapping cells in the z-plane (masking). We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. 
In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
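The optical fractionator underlying the disector-based counts above estimates a total number from the raw counts and the reciprocals of the sampling fractions. A sketch of the standard stereology formula (variable names are assumptions; the paper's exact sampling parameters are not reproduced here):

```python
def fractionator_estimate(total_q, ssf, asf, tsf):
    """Optical fractionator: N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf),
    where ssf, asf, and tsf are the section, area, and thickness
    sampling fractions, and total_q is the number of cells counted
    in the disector samples."""
    return total_q * (1 / ssf) * (1 / asf) * (1 / tsf)
```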


Subjects
Artificial Intelligence , Neocortex , Algorithms , Animals , Cell Count/methods , Humans , Mice , Neurons
7.
Diagnostics (Basel) ; 12(2)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35204436

ABSTRACT

Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) are applied to Magnetic Resonance Imaging (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy for this type of problem, 74%, was achieved on the unseen test set.
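Snapshot ensembling as used above saves multiple checkpoints of a network and averages their predicted probabilities at test time. A minimal sketch of the prediction-averaging step (an illustrative stand-in, not the authors' exact implementation; the class ordering is an assumption):

```python
def ensemble_predict(snapshot_probs):
    """Average class probabilities across snapshot models and return
    the index of the winning class (e.g., 0 = long-term survivor,
    1 = short-term survivor)."""
    n_classes = len(snapshot_probs[0])
    avg = [sum(p[c] for p in snapshot_probs) / len(snapshot_probs)
           for c in range(n_classes)]
    return avg.index(max(avg))
```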

8.
Med Image Comput Comput Assist Interv ; 13433: 749-759, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36939418

ABSTRACT

Artificial Intelligence (AI)-based methods allow for automatic assessment of pain intensity based on continuous monitoring and processing of subtle changes in sensory signals, including facial expression, body movements, and crying frequency. There is a large and growing need to expand current AI-based approaches to the assessment of postoperative pain in the neonatal intensive care unit (NICU). In contrast to acute procedural pain in the clinic, the NICU has neonates emerging from postoperative sedation, usually intubated, and with variable energy reserves for manifesting forceful pain responses. Here, we present a novel multi-modal approach designed, developed, and validated for assessment of neonatal postoperative pain in the challenging NICU setting. Our approach includes a robust network capable of efficient reconstruction of missing modalities (e.g., obscured facial expression due to intubation) using unsupervised spatio-temporal feature learning with a generative model for learning the joint features. Our approach generates the final pain score along with the intensity using an attentional cross-modal feature fusion. Using an experimental dataset from postoperative neonates in the NICU, our pain assessment approach achieves superior performance (AUC 0.906, accuracy 0.820) as compared to the state-of-the-art approaches.
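The attentional cross-modal fusion step can be sketched as relevance-weighted averaging of per-modality feature vectors. This is a minimal stand-in for the paper's fusion network, not its architecture; the softmax weighting and all names are assumptions:

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fusion(modality_feats, relevance):
    """Weight each modality's feature vector by its softmaxed
    relevance score and sum, yielding one fused feature vector."""
    weights = softmax(relevance)
    dim = len(modality_feats[0])
    return [sum(w * feats[d] for w, feats in zip(weights, modality_feats))
            for d in range(dim)]
```

With equal relevance scores this reduces to a plain average; a dominant modality (e.g., the vocal signal when the face is obscured) pulls the fused vector toward its own features.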

9.
Front Pediatr ; 10: 1022751, 2022.
Article in English | MEDLINE | ID: mdl-36819198

ABSTRACT

Background: The assessment and management of neonatal pain is crucial for the development and wellbeing of vulnerable infants. Specifically, neonatal pain is associated with adverse health outcomes but is often under-identified and therefore under-treated. Neonatal stress may be misinterpreted as pain and may therefore be treated inappropriately. The assessment of neonatal pain is complicated by the non-verbal status of patients, age-dependent variation in pain responses, limited education on identifying pain in premature infants, and the limited clinical utility of existing tools. Objective: We review research on neonatal pain assessment scales currently used to assess neonatal pain in the neonatal intensive care unit. Methods: We performed a systematic review of original research using PRISMA guidelines for literature published between 2016 and 2021, using the key words "neonatal pain assessment" in the databases Web of Science, PubMed, and CINAHL. Fifteen articles remained after duplicate, irrelevant, or low-quality articles were eliminated. Results: We found research evaluating 13 neonatal pain scales. Important measurement categories include behavioral parameters, physiological parameters, continuous pain, acute pain, chronic pain, and the ability to distinguish between pain and stress. Provider education, inter-rater reliability, and ease of use are important factors that contribute to an assessment tool's success. Each scale studied had strengths and limitations that aided or hindered its use for measuring neonatal pain in the neonatal intensive care unit, but no scale excelled in all areas identified as important for reliably identifying and measuring pain in this vulnerable population. Conclusion: A more comprehensive neonatal pain assessment tool, and more provider education on differences in pain signals in premature neonates, may be needed to increase the clinical utility of pain scales that address the different aspects of neonatal pain.

10.
IEEE Trans Med Imaging ; 40(12): 3748-3761, 2021 12.
Article in English | MEDLINE | ID: mdl-34264825

ABSTRACT

Lung cancer is by far the leading cause of cancer death in the US. Recent studies have demonstrated the effectiveness of screening using low dose CT (LDCT) in reducing lung cancer related mortality. While lung nodules are detected with a high rate of sensitivity, this exam has a low specificity rate and it is still difficult to separate benign and malignant lesions. The ISBI 2018 Lung Nodule Malignancy Prediction Challenge, developed by a team from the Quantitative Imaging Network of the National Cancer Institute, was focused on the prediction of lung nodule malignancy from two sequential LDCT screening exams using automated (non-manual) algorithms. We curated a cohort of 100 subjects who participated in the National Lung Screening Trial and had established pathological diagnoses. Data from 30 subjects were randomly selected for training and the remainder were used for testing. Participants were evaluated based on the area under the receiver operating characteristic curve (AUC) of nodule-wise malignancy scores generated by their algorithms on the test set. The challenge had 17 participants, with 11 teams submitting reports with method descriptions, as mandated by the challenge rules. Participants used quantitative methods, with reported test AUCs ranging from 0.698 to 0.913. The top five contestants used deep learning approaches, reporting AUCs between 0.87 and 0.91. The top teams' predictors did not differ significantly from each other nor from a volume-change estimate (p = .05 with Bonferroni-Holm correction).
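The nodule-wise AUC used for ranking is equivalent to the probability that a randomly chosen malignant nodule receives a higher score than a randomly chosen benign one (the Mann-Whitney formulation). A small reference implementation:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney AUC: fraction of (positive, negative) score pairs
    ranked correctly, counting ties as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```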


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Algorithms , Humans , Lung , Lung Neoplasms/diagnostic imaging , ROC Curve , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed
11.
IEEE Access ; 9: 72970-72979, 2021.
Article in English | MEDLINE | ID: mdl-34178559

ABSTRACT

A number of recent papers have shown experimental evidence suggesting it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than previously used show that models have high accuracy on seen sources but poor accuracy on unseen sources. The reason for the disparity is that the convolutional neural network model, which learns features, can focus on differences in X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may differ from raw images. Some data sets were of pediatric cases with pneumonia, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we have eliminated many confounding features by working with data as close to raw as possible. Still, deep learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models achieved an AUC of 1.00 on seen data sources but, in the worst case, an AUC of only 0.38 on unseen ones. This indicates that such models need further assessment and development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.

12.
Tomography ; 7(2): 154-168, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33946756

ABSTRACT

Lung cancer causes more deaths globally than any other type of cancer. To determine the best treatment, detecting EGFR and KRAS mutations is of interest. However, non-invasive ways to obtain this information are not available. Furthermore, sufficiently large relevant public datasets are often lacking, so the performance of single classifiers is not outstanding. In this paper, an ensemble approach is applied to increase the performance of EGFR and KRAS mutation prediction using a small dataset. A new voting scheme, Selective Class Average Voting (SCAV), is proposed and its performance is assessed both for machine learning models and CNNs. For the EGFR mutation, the machine learning approach saw an increase in sensitivity from 0.66 to 0.75 and an increase in AUC from 0.68 to 0.70. With the deep learning approach, an AUC of 0.846 was obtained, and with SCAV the accuracy of the model was increased from 0.80 to 0.857. For the KRAS mutation, a significant increase in performance was found both for the machine learning models (AUC 0.65 to 0.71) and the deep learning models (AUC 0.739 to 0.778). The results obtained in this work show how to effectively learn from small image datasets to predict EGFR and KRAS mutations, and that using ensembles with SCAV increases the performance of machine learning classifiers and CNNs. The results provide confidence that, as large datasets become available, tools to augment clinical capabilities can be fielded.
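SCAV's selective class-average rule is specific to the paper, but the conventional ensemble baseline it is compared against, plain majority voting across classifier outputs, can be sketched as:

```python
from collections import Counter

def majority_vote(predictions):
    """Plain majority voting across classifier outputs. This is the
    baseline ensemble rule, not SCAV itself, whose selective
    class-average weighting is described in the paper."""
    return Counter(predictions).most_common(1)[0][0]
```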


Subjects
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Carcinoma, Non-Small-Cell Lung/genetics , ErbB Receptors/genetics , Humans , Lung Neoplasms/genetics , Mutation , Proto-Oncogene Proteins p21(ras)/genetics
13.
Data Brief ; 35: 106796, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33644268

ABSTRACT

This paper presents the first multimodal neonatal pain dataset that contains visual, vocal, and physiological responses following clinically required procedural and postoperative painful procedures. It was collected from 58 neonates (27-41 weeks gestational age) during their hospitalization in the neonatal intensive care unit. The visual and vocal data were recorded using an inexpensive RGB camera while the physiological responses (vital signs and cortical activity) were recorded using portable bedside monitors. The recorded behavioral and physiological responses were scored by expert nurses using two validated pain scales to obtain the ground truth labels. In addition to behavioral and physiological responses, our dataset contains clinical information such as the neonate's age, gender, weight, pharmacological and non-pharmacological interventions, and previous painful procedures. The presented multimodal dataset can be used to develop artificial intelligence systems that monitor, assess, and predict neonatal pain based on the analysis of behavioral and physiological responses. It can also be used to advance the understanding of neonatal pain, which can lead to the development of effective pain prevention and treatment.

14.
J Neurosci Methods ; 354: 109102, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33607171

ABSTRACT

BACKGROUND: Quantifying cells in a defined region of biological tissue is critical for many clinical and preclinical studies, especially in the fields of pathology, toxicology, cancer and behavior. As part of a program to develop accurate, precise and more efficient automatic approaches for quantifying morphometric changes in biological tissue, we have shown that both deep learning-based and hand-crafted algorithms can estimate the total number of histologically stained cells at their maximal profile of focus in Extended Depth of Field (EDF) images. Deep learning-based approaches show accuracy comparable to manual counts on EDF images but significant enhancement in reproducibility and throughput efficiency, with reduced error from human factors. However, most automated counting methods are designed for single-immunostained tissue sections. NEW METHOD: To expand the automatic counting methods to more complex dual-staining protocols, we developed an adaptive method to separate stain color channels on images from tissue sections stained by a primary immunostain with a secondary counterstain. COMPARISON WITH EXISTING METHODS: The proposed method overcomes the limitations of state-of-the-art stain-separation methods, such as requiring a pure stain color basis as a prerequisite or learning the stain color basis on each image. RESULTS: Experimental results are presented for automatic counts using deep learning-based and hand-crafted algorithms for sections immunostained for neurons (NeuN) or microglial cells (Iba-1) with cresyl violet counterstain. CONCLUSION: Our findings show more accurate counts by deep learning methods compared to the hand-crafted method. Thus, stain-separated images can function as input for automatic deep learning-based quantification methods designed for single-stained tissue sections.
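For context, the conventional stain-separation approach that adaptive methods improve on is color deconvolution with known stain vectors: convert RGB transmittance to optical density via the Beer-Lambert relation, then solve a linear system. The sketch below assumes the pure stain optical-density vectors are known in advance, which is exactly the prerequisite the paper's method removes:

```python
import numpy as np

def separate_stains(rgb, stain_od_vectors):
    """Classic color deconvolution: optical density od = -log(I), then
    per-pixel stain concentrations c solve od = c @ M, where M holds
    one pure-stain OD vector per row."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))         # pixels x 3 OD values
    return od @ np.linalg.pinv(stain_od_vectors)  # pixels x n_stains
```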


Subjects
Deep Learning , Algorithms , Coloring Agents , Humans , Image Processing, Computer-Assisted , Reproducibility of Results , Staining and Labeling
15.
Paediatr Neonatal Pain ; 3(3): 134-145, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35547946

ABSTRACT

The advent of increasingly sophisticated medical technology, surgical interventions, and supportive healthcare measures is raising survival probabilities for babies born premature and/or with life-threatening health conditions. In the United States, this trend is associated with greater numbers of neonatal surgeries and higher admission rates into neonatal intensive care units (NICU) for newborns at all birth weights. Following surgery, current pain management in NICU relies primarily on narcotics (opioids) such as morphine and fentanyl (about 100 times more potent than morphine) that lead to a number of complications, including prolonged stays in NICU for opioid withdrawal. In this paper, we review current practices and challenges for pain assessment and treatment in NICU and outline ongoing efforts using Artificial Intelligence (AI) to support pain- and opioid-sparing approaches for newborns in the future. A major focus for these next-generation approaches to NICU-based pain management is proactive pain mitigation (avoidance) aimed at preventing harm to neonates from both postsurgical pain and opioid withdrawal. AI-based frameworks can use single or multiple combinations of continuous objective variables, that is, facial and body movements, crying frequencies, and physiological data (vital signs), to make high-confidence predictions about time-to-pain onset following postsurgical sedation. Such predictions would create a therapeutic window prior to pain onset for mitigation with non-narcotic pharmaceutical and nonpharmaceutical interventions. These emerging AI-based strategies have the potential to minimize or avoid damage to the neonate's body and psyche from postsurgical pain and opioid withdrawal.

16.
Comput Biol Med ; 129: 104150, 2021 02.
Article in English | MEDLINE | ID: mdl-33348218

ABSTRACT

The current practice for assessing neonatal postoperative pain relies on bedside caregivers. This practice is subjective, inconsistent, slow, and discontinuous. To develop a reliable medical interpretation, several automated approaches have been proposed to enhance the current practice. These approaches are unimodal and focus mainly on assessing neonatal procedural (acute) pain. As pain is a multimodal emotion that is often expressed through multiple modalities, the multimodal assessment of pain is necessary especially in case of postoperative (acute prolonged) pain. Additionally, spatio-temporal analysis is more stable over time and has been proven to be highly effective at minimizing misclassification errors. In this paper, we present a novel multimodal spatio-temporal approach that integrates visual and vocal signals and uses them for assessing neonatal postoperative pain. We conduct comprehensive experiments to investigate the effectiveness of the proposed approach. We compare the performance of the multimodal and unimodal postoperative pain assessment, and measure the impact of temporal information integration. The experimental results, on a real-world dataset, show that the proposed multimodal spatio-temporal approach achieves the highest AUC (0.87) and accuracy (79%), which are on average 6.67% and 6.33% higher than unimodal approaches. The results also show that the integration of temporal information markedly improves the performance as compared to the non-temporal approach as it captures changes in the pain dynamic. These results demonstrate that the proposed approach can be used as a viable alternative to manual assessment, which would tread a path toward fully automated pain monitoring in clinical settings, point-of-care testing, and homes.


Subjects
Deep Learning , Emotions , Humans , Infant, Newborn , Pain, Postoperative/diagnosis
17.
Conf Proc IEEE Int Conf Syst Man Cybern ; 2021: 1133-1138, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36936797

ABSTRACT

Spectrograms visualize the frequency components of a given signal, which may be an audio signal or even a time-series signal. Audio signals have a high sampling rate and high frequency variability over time, and spectrograms can capture such variations well. Vital signs, however, are time-series signals with a low sampling frequency and little frequency variability, so spectrograms fail to express their variations and patterns. In this paper, we propose a novel solution that introduces frequency variability by applying frequency modulation to vital signs; we then compute spectrograms of the frequency-modulated signals to capture the patterns. The proposed approach has been evaluated on 4 different medical datasets across both prediction and classification tasks. Significant results were found, showing the efficacy of the approach for vital-sign signals. The results are promising, with an accuracy of 91.55% and 91.67% in prediction and classification tasks, respectively.
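The modulation step can be sketched directly: drive a sinusoid whose instantaneous frequency follows the vital-sign value, so a spectrogram of the output shows the signal's slow variations as frequency shifts. Parameter names are assumptions; the paper's exact modulation settings are not reproduced here:

```python
import math

def frequency_modulate(signal, fs, carrier_hz, deviation_hz):
    """FM a slowly varying signal: the instantaneous frequency is
    carrier_hz + deviation_hz * sample, integrated into a phase."""
    out, phase = [], 0.0
    for x in signal:
        phase += 2.0 * math.pi * (carrier_hz + deviation_hz * x) / fs
        out.append(math.sin(phase))
    return out
```

A larger vital-sign value raises the instantaneous frequency, so the waveform oscillates faster and the spectrogram band moves up.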

18.
Comput Biol Med ; 122: 103882, 2020 07.
Article in English | MEDLINE | ID: mdl-32658721

ABSTRACT

Convolutional Neural Networks (CNNs) have been utilized to distinguish between benign lung nodules and those that will become malignant. The objective of this study was to use an ensemble of CNNs to predict which baseline nodules would be diagnosed as lung cancer in a second follow-up screening after more than one year. Low-dose helical computed tomography images and data were utilized from the National Lung Screening Trial (NLST). The malignant nodules and nodule-positive controls were divided into training and test cohorts. T0 nodules were used to predict lung cancer incidence at T1 or T2. To increase the sample size, image augmentation was performed using rotations, flipping, and elastic deformation. Three CNN architectures were designed for malignancy prediction, and each architecture was trained using seven different seeds to create the initial weights. This introduced variability among the CNN models, which were combined to generate a robust, more accurate ensemble model. Augmenting images using only rotation and flipping, and training with images from T0, yielded the best accuracy to predict lung cancer incidence at T2 from a separate test cohort (accuracy = 90.29%; AUC = 0.96) based on an ensemble of 21 models. Images augmented by rotation and flipping enabled effective learning by increasing the relatively small sample size. Ensemble learning with deep neural networks is a compelling approach that accurately predicted lung cancer incidence at the second screening after the baseline screen, mostly 2 years later.
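The rotation-and-flip augmentation found most effective above is label-preserving and cheap. A sketch over an image stored as a list of rows (the helper names are assumptions, and elastic deformation is omitted):

```python
def augment(image):
    """Return 8 variants of an image: the original, its three
    90-degree rotations, and the horizontal mirror of each of those
    four orientations."""
    def rot90(im):
        # transpose of the vertically flipped image = 90-degree rotation
        return [list(row) for row in zip(*im[::-1])]
    variants = [[list(row) for row in image]]
    for _ in range(3):
        variants.append(rot90(variants[-1]))
    variants += [[row[::-1] for row in v] for v in variants[:4]]
    return variants
```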


Subjects
Lung Neoplasms , Tomography, X-Ray Computed , Cohort Studies , Humans , Lung , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer
19.
Tomography ; 6(2): 65-76, 2020 06.
Article in English | MEDLINE | ID: mdl-32548282

ABSTRACT

Quantitative imaging biomarkers (QIBs) provide medical image-derived intensity, texture, shape, and size features that may help characterize cancerous tumors and predict clinical outcomes. Successful clinical translation of QIBs depends on the robustness of their measurements. Biomarkers derived from positron emission tomography images are prone to measurement errors owing to differences in image processing factors such as the tumor segmentation method used to define volumes of interest over which to calculate QIBs. We illustrate a new Bayesian statistical approach to characterize the robustness of QIBs to different processing factors. Study data consist of 22 QIBs measured on 47 head and neck tumors in 10 positron emission tomography/computed tomography scans segmented manually and with semiautomated methods used by 7 institutional members of the NCI Quantitative Imaging Network. QIB performance is estimated and compared across institutions with respect to measurement errors and power to recover statistical associations with clinical outcomes. Analysis findings summarize the performance impact of different segmentation methods used by Quantitative Imaging Network members. Robustness of some advanced biomarkers was found to be similar to conventional markers, such as maximum standardized uptake value. Such similarities support current pursuits to better characterize disease and predict outcomes by developing QIBs that use more imaging information and are robust to different processing factors. Nevertheless, to ensure reproducibility of QIB measurements and measures of association with clinical outcomes, errors owing to segmentation methods need to be reduced.


Subjects
Fluorodeoxyglucose F18 , Head and Neck Neoplasms , Positron-Emission Tomography , Bayes Theorem , Biomarkers, Tumor , Head and Neck Neoplasms/diagnostic imaging , Humans , Reproducibility of Results , Tomography, X-Ray Computed
20.
Tomography ; 6(2): 209-215, 2020 06.
Article in English | MEDLINE | ID: mdl-32548298

ABSTRACT

Noninvasive diagnosis of lung cancer in early stages is one task where radiomics helps. Clinical practice shows that the size of a nodule has high predictive power for malignancy. In the literature, convolutional neural networks (CNNs) have become widely used in medical image analysis. We study the ability of a CNN to capture nodule size in computed tomography images after images are resized for CNN input. For our experiments, we used the National Lung Screening Trial data set. Nodules were labeled into 2 categories (small/large) based on the original size of the nodule. After all extracted patches were re-sampled into 100-by-100-pixel images, a CNN was able to successfully classify test nodules into small- and large-size groups with high accuracy. To show the generality of our discovery, we repeated the size classification experiments using the Common Objects in Context (COCO) data set. From the data set, we selected 3 categories of images (bears, cats, and dogs) and performed 5×2-fold cross-validation to classify them into small and large classes. The average area under the receiver operating characteristic curve is 0.954, 0.952, and 0.979 for the bear, cat, and dog categories, respectively. Thus, camera image rescaling also enables a CNN to discover the size of an object. The source code for the experiments with the COCO data set is publicly available on GitHub (https://github.com/VisionAI-USF/COCO_Size_Decoding/).


Assuntos
Neoplasias Pulmonares , Nódulos Pulmonares Múltiplos , Animais , Gatos , Cães , Humanos , Neoplasias Pulmonares/diagnóstico por imagem , Nódulos Pulmonares Múltiplos/diagnóstico por imagem , Redes Neurais de Computação , Ensaios Clínicos Controlados Aleatórios como Assunto , Tomografia Computadorizada por Raios X , Ursidae