Results 1 - 20 of 66
1.
J Magn Reson Imaging ; 46(1): 115-123, 2017 07.
Article in English | MEDLINE | ID: mdl-27678245

ABSTRACT

PURPOSE: Glioblastoma multiforme (GBM) is the most common malignant brain tumor in adults. Most GBMs exhibit extensive regional heterogeneity at the tissue, cellular, and molecular scales, but the clinical relevance of the observed spatial imaging characteristics remains unknown. We investigated pretreatment magnetic resonance imaging (MRI) scans of GBMs to identify tumor subregions and quantify the image-based spatial characteristics associated with survival time. MATERIALS AND METHODS: We quantified tumor subregions (termed habitats) in GBMs, which are hypothesized to capture intratumoral characteristics, using multiple MRI sequences. As a proof of concept, we developed a computational framework that used intratumoral grouping and spatial mapping to identify GBM tumor subregions and yield habitat-based features. Using a feature selector and three classifiers, we report experimental results for survival group prediction on two datasets: Dataset1, with 32 GBM patients (594 tumor slices), and Dataset2, with 22 GBM patients who did not undergo resection (261 tumor slices). RESULTS: In both datasets, habitat-based features achieved accuracies of 87.50% and 86.36%, respectively, for survival group prediction using leave-one-out cross-validation. Experimental results revealed that spatially correlated features between signal-enhanced subregions were effective for predicting survival groups (P < 0.05 for all three machine-learning classifiers). CONCLUSION: The quantitative spatially correlated features derived from MRI-defined tumor subregions in GBM could be effectively used to predict the survival time of patients. LEVEL OF EVIDENCE: 2 J. MAGN. RESON. IMAGING 2017;46:115-123.
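The leave-one-out evaluation described above can be sketched with a toy stand-in for the paper's pipeline; the nearest-centroid classifier and synthetic feature matrix below are illustrative assumptions, not the authors' actual feature selector or classifiers:

```python
import numpy as np

def loocv_accuracy(features, labels):
    """Leave-one-out cross-validation with a nearest-centroid classifier.

    features: (n_patients, n_features) habitat-derived feature matrix
    labels:   (n_patients,) survival group (0 = short-term, 1 = long-term)
    """
    n = len(labels)
    correct = 0
    for i in range(n):
        train = np.ones(n, dtype=bool)
        train[i] = False  # hold out one patient
        X, y = features[train], labels[train]
        # class centroids computed from the training fold only
        c0 = X[y == 0].mean(axis=0)
        c1 = X[y == 1].mean(axis=0)
        pred = 0 if np.linalg.norm(features[i] - c0) <= np.linalg.norm(features[i] - c1) else 1
        correct += (pred == labels[i])
    return correct / n

# Synthetic example: two well-separated survival groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (16, 5)), rng.normal(3, 1, (16, 5))])
y = np.array([0] * 16 + [1] * 16)
print(loocv_accuracy(X, y))
```

With each patient held out in turn, the centroids never see the test case, which avoids the optimistic bias of in-sample evaluation on such a small cohort.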


Subjects
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/mortality , Glioblastoma/diagnostic imaging , Glioblastoma/mortality , Pattern Recognition, Automated/methods , Spatio-Temporal Analysis , Survival Analysis , Adolescent , Adult , Aged , Aged, 80 and over , Biomarkers , Brain Neoplasms/pathology , Female , Glioblastoma/pathology , Humans , Image Interpretation, Computer-Assisted/methods , Incidence , Machine Learning , Male , Middle Aged , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , United States/epidemiology , Young Adult
2.
J Digit Imaging ; 29(4): 476-87, 2016 08.
Article in English | MEDLINE | ID: mdl-26847203

ABSTRACT

Tumor volume estimation, as well as accurate and reproducible border segmentation in medical images, is important in the diagnosis, staging, and assessment of response to cancer therapy. The goal of this study was to demonstrate the feasibility of a multi-institutional effort to assess the repeatability and reproducibility of nodule borders and volume estimate bias of computerized segmentation algorithms in CT images of lung cancer, and to provide results from such a study. The dataset used for this evaluation consisted of 52 tumors in 41 CT volumes (40 patient datasets and 1 dataset containing scans of 12 phantom nodules of known volume) from five collections available in The Cancer Imaging Archive. Three academic institutions developing lung nodule segmentation algorithms submitted results for three repeat runs for each of the nodules. We compared the performance of lung nodule segmentation algorithms by assessing several measurements of spatial overlap and volume measurement. Nodule sizes varied from 29 µl to 66 ml and demonstrated a diversity of shapes. Agreement in spatial overlap of segmentations was significantly higher for multiple runs of the same algorithm than between segmentations generated by different algorithms (p < 0.05) and was significantly higher on the phantom dataset compared to the other datasets (p < 0.05). Algorithms differed significantly in the bias of the measured volumes of the phantom nodules (p < 0.05), underscoring the need for assessing performance on clinical data in addition to phantoms. Algorithms that most accurately estimated nodule volumes were not the most repeatable, emphasizing the need to evaluate both their accuracy and precision. There were considerable differences between algorithms, especially in a subset of heterogeneous nodules, underscoring the recommendation that the same software be used at all time points in longitudinal studies.
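The overlap and volume-bias measurements used in such comparisons can be illustrated on toy binary masks; the Dice coefficient below is one common spatial-overlap metric, and the specific metrics used by the study may differ:

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Spatial overlap between two binary segmentation masks (Dice coefficient)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_bias(seg, true_volume_voxels):
    """Relative bias of a measured volume against a known (phantom) volume."""
    return (seg.astype(bool).sum() - true_volume_voxels) / true_volume_voxels

# Toy 3D masks: one algorithm slightly over-segments a phantom nodule of 64 voxels
a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True   # 64 voxels
b = np.zeros((10, 10, 10), dtype=bool); b[2:6, 2:6, 2:7] = True   # 80 voxels
print(round(dice_overlap(a, b), 3))   # 2*64/(64+80) ≈ 0.889
print(volume_bias(b, 64))             # +0.25, i.e. 25% volume over-estimation
```

Repeat-run agreement can be scored with the same Dice function applied to successive outputs of one algorithm, which is how repeatability and inter-algorithm reproducibility become directly comparable.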


Subjects
Algorithms , Lung Neoplasms/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed , Humans , Lung Neoplasms/pathology , Phantoms, Imaging , Reproducibility of Results , Solitary Pulmonary Nodule/pathology , Tumor Burden
3.
J Magn Reson Imaging ; 42(5): 1421-30, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25884277

ABSTRACT

PURPOSE: To evaluate heterogeneity within tumor subregions or "habitats" via textural kinetic analysis on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for the classification of two clinical prognostic features: 1) estrogen receptor (ER)-positive from ER-negative tumors, and 2) tumors with four or more viable lymph node metastases after neoadjuvant chemotherapy from tumors without nodal metastases. MATERIALS AND METHODS: Two separate volumetric DCE-MRI datasets were obtained at 1.5T, comprised of bilateral axial dynamic 3D T1-weighted fat-suppressed gradient recalled echo-pulse sequences obtained before and after gadolinium-based contrast administration. Representative image slices of breast tumors from 38 and 34 patients were used for ER status and lymph node classification, respectively. Four tumor habitats were defined based on their kinetic contrast enhancement characteristics. The heterogeneity within each habitat was quantified using textural kinetic features, which were evaluated using two feature selectors and three classifiers. RESULTS: Textural kinetic features from the habitat with rapid delayed washout yielded classification accuracies of 84.44% (area under the curve [AUC] 0.83) for ER and 88.89% (AUC 0.88) for lymph node status. The texture feature most often chosen in cross-validations, information measure of correlation, measures heterogeneity and provides accuracy approximately the same as with the best feature set. CONCLUSION: Heterogeneity within habitats with rapid washout is highly predictive of molecular tumor characteristics and clinical behavior.


Subjects
Breast Neoplasms/metabolism , Breast Neoplasms/pathology , Gadolinium , Image Enhancement , Magnetic Resonance Imaging , Receptors, Estrogen/metabolism , Adult , Aged , Area Under Curve , Breast/metabolism , Breast/pathology , Contrast Media , Female , Humans , Lymph Nodes/pathology , Lymphatic Metastasis , Middle Aged , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity
4.
J Digit Imaging ; 27(6): 805-23, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24990346

ABSTRACT

Quantitative size, shape, and texture features derived from computed tomographic (CT) images may be useful as predictive, prognostic, or response biomarkers in non-small cell lung cancer (NSCLC). However, to be useful, such features must be reproducible, non-redundant, and have a large dynamic range. We developed a set of quantitative three-dimensional (3D) features to describe segmented tumors and evaluated their reproducibility to select features with high potential prognostic utility. Thirty-two patients with NSCLC were subjected to unenhanced thoracic CT scans acquired within 15 min of each other under an approved protocol. Primary lung cancer lesions were segmented using semi-automatic 3D region-growing algorithms. Following segmentation, 219 quantitative 3D features were extracted from each lesion, corresponding to size, shape, and texture, including features in transformed spaces (laws, wavelets). The most informative features were selected using the concordance correlation coefficient across test-retest, the biological range, and a feature-independence measure. There were 66 (30.14%) features with a concordance correlation coefficient ≥ 0.90 across test-retest and an acceptable dynamic range. Of these, 42 features were non-redundant after grouping features with R²Bet ≥ 0.95. These reproducible features were found to be predictive of radiological prognosis. The area under the curve (AUC) was 91% for a size-based feature and 92% for the texture features (runlength, laws). We tested the ability of image features to predict a radiological prognostic score on an independent NSCLC cohort (39 adenocarcinomas); the AUC for texture features (runlength emphasis, energy) was 0.84, while that for the conventional size-based features (volume, longest diameter) was 0.80. Test-retest and correlation analyses identified non-redundant CT image features with both high intra-patient reproducibility and inter-patient biological range, making the case that quantitative image features are informative and prognostic biomarkers for NSCLC.
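The test-retest feature screening step relies on Lin's concordance correlation coefficient; a minimal sketch (with hypothetical feature vectors) is:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population variances
    sxy = ((x - mx) * (y - my)).mean()   # covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# A perfectly reproducible feature scores CCC = 1
f = np.array([1.0, 2.0, 3.0, 4.0])
print(concordance_ccc(f, f))

# A constant retest offset lowers concordance even with perfect correlation
print(round(concordance_ccc(f, f + 1.0), 3))
```

Unlike Pearson's r, the denominator penalizes both scale and location shifts between scans, which is why it suits test-retest screening: a feature that drifts between repeat scans cannot score near 1 merely by being linearly related.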


Subjects
Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Adult , Aged , Aged, 80 and over , Algorithms , Area Under Curve , Female , Humans , Imaging, Three-Dimensional/methods , Lung/diagnostic imaging , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
5.
Article in English | MEDLINE | ID: mdl-38993353

ABSTRACT

Among patients with early-stage non-small cell lung cancer (NSCLC) undergoing surgical resection, identifying who is at high risk of recurrence can inform clinical guidelines with respect to more aggressive follow-up and/or adjuvant therapy. While predicting recurrence based on pre-surgical resection data is ideal, clinically important pathological features are only evaluated postoperatively. Therefore, we developed two supervised classification models to assess the importance of pre- and post-surgical features for predicting 5-year recurrence. An integrated dataset was generated by combining clinical covariates and radiomic features calculated from pre-surgical computed tomography images. After removing correlated radiomic features, the SHapley Additive exPlanations (SHAP) method was used to measure feature importance and select relevant features. Binary classification was performed using a Support Vector Machine, followed by a feature ablation study assessing the impact of radiomic and clinical features. We demonstrate that the post-surgical model significantly outperforms the pre-surgical model in predicting lung cancer recurrence, with tumor pathological features and peritumoral radiomic features contributing significantly to the model's performance.

6.
IEEE Access ; 12: 49122-49133, 2024.
Article in English | MEDLINE | ID: mdl-38994038

ABSTRACT

There is a tendency for object detection systems using off-the-shelf algorithms to fail when deployed in complex scenes. The present work describes a case for detecting facial expression in post-surgical neonates (newborns) as a modality for predicting and classifying severe pain in the Neonatal Intensive Care Unit (NICU). Our initial testing showed that both an off-the-shelf face detector and a machine learning algorithm trained on adult faces failed to detect facial expression of neonates in the NICU. We improved accuracy in this complex scene by training a state-of-the-art "You-Only-Look-Once" (YOLO) face detection model using the USF-MNPAD-I dataset of neonate faces. At run-time our trained YOLO model showed a difference of 8.6% mean Average Precision (mAP) and 21.2% Area under the ROC Curve (AUC) for automatic classification of neonatal pain compared with manual pain scoring by NICU nurses. Given the challenges, time and effort associated with collecting ground truth from the faces of post-surgical neonates, here we share the weights from training our YOLO model with these facial expression data. These weights can facilitate the further development of accurate strategies for detecting facial expression, which can be used to predict the time to pain onset in combination with other sensory modalities (body movements, crying frequency, vital signs). Reliable predictions of time to pain onset in turn create a therapeutic window of time wherein NICU nurses and providers can implement safe and effective strategies to mitigate severe pain in this vulnerable patient population.

7.
Bioengineering (Basel) ; 11(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38790302

ABSTRACT

The progress of incorporating deep learning into medical image interpretation has been greatly hindered by the tremendous cost and time associated with generating ground truth for supervised machine learning, alongside concerns about the inconsistent quality of acquired images. Active learning offers a potential solution to the problem of expanding dataset ground truth by algorithmically choosing the most informative samples for ground truth labeling. Still, this effort incurs the cost of human labeling, which needs to be minimized. Furthermore, automatic labeling approaches employing active learning often exhibit overfitting tendencies, selecting samples closely aligned with the training set distribution and excluding out-of-distribution samples that could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%. Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.

8.
Neurotoxicol Teratol ; 102: 107336, 2024.
Article in English | MEDLINE | ID: mdl-38402997

ABSTRACT

Microglial cells mediate diverse homeostatic, inflammatory, and immune processes during normal development and in response to cytotoxic challenges. During these functional activities, microglial cells undergo distinct numerical and morphological changes in different tissue volumes in both rodent and human brains. However, it remains unclear how these cytostructural changes in microglia correlate with region-specific neurochemical functions. To better understand these relationships, neuroscientists need accurate, reproducible, and efficient methods for quantifying microglial cell number and morphologies in histological sections. To address this deficit, we developed a novel deep learning (DL)-based classification and stereology approach that links the appearance of Iba1-immunostained microglial cells at low magnification (20×) with the total number of cells in the same brain region based on unbiased stereology counts as ground truth. Once DL models are trained, total microglial cell numbers in specific regions of interest can be estimated and treatment groups predicted in a high-throughput manner (<1 min) using only low-power images from test cases, without the need for time- and labor-intensive stereology counts or morphology ratings in test cases. Results for this DL-based automatic stereology approach on two datasets (total 39 mouse brains) showed >90% accuracy, 100% repeatability (Test-Retest) and 60× greater efficiency than manual stereology (<1 min vs. ∼ 60 min) using the same tissue sections. Ongoing and future work includes use of this DL-based approach to establish clear neurodegeneration profiles in age-related human neurological diseases and related animal models.


Subjects
Deep Learning , Microglia , Animals , Mice , Humans , Brain/pathology , Cell Count/methods
9.
Pattern Recognit ; 46(3): 692-702, 2013 Mar 01.
Article in English | MEDLINE | ID: mdl-23459617

ABSTRACT

A single-click ensemble segmentation (SCES) approach based on an existing "Click&Grow" algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed, which facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI was above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm, and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77%, and 63.76%, respectively. We conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate, and automated.
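A single-seed region-growing step of the kind "Click&Grow" builds on can be sketched as follows; the 4-connected neighborhood and running-mean acceptance rule here are illustrative assumptions, not the published algorithm:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Grow a region from one seed pixel, accepting 4-connected neighbors
    whose intensity lies within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc]); count += 1
                    frontier.append((nr, nc))
    return mask

# Toy image: a bright 'lesion' on a dark background, one click at (3, 3)
img = np.zeros((8, 8)); img[2:5, 2:5] = 100.0
print(region_grow(img, (3, 3)).sum())  # 9 pixels in the grown region
```

An ensemble variant could perturb the seed location (the paper's 20 start seeds) and combine the resulting masks, which is one way a single click can yield a stable segmentation.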

10.
Cancers (Basel) ; 15(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37190264

ABSTRACT

Histopathological classification in prostate cancer remains a challenge with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into tiles (14,509) and are curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while the cancer grade discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).

11.
bioRxiv ; 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36865216

ABSTRACT

Morphology-based classification of cells in the bone marrow aspirate (BMA) is a key step in the diagnosis and management of hematologic malignancies. However, it is time-intensive and must be performed by expert hematopathologists and laboratory professionals. We curated a large, high-quality dataset of 41,595 hematopathologist consensus-annotated single-cell images extracted from BMA whole slide images (WSIs) containing 23 morphologic classes from the clinical archives of the University of California, San Francisco. We trained a convolutional neural network, DeepHeme, to classify images in this dataset, achieving a mean area under the curve (AUC) of 0.99. DeepHeme was then externally validated on WSIs from Memorial Sloan Kettering Cancer Center, with a similar AUC of 0.98, demonstrating robust generalization. When compared to individual hematopathologists from three different top academic medical centers, the algorithm outperformed all three. Finally, DeepHeme reliably identified cell states such as mitosis, paving the way for image-based quantification of mitotic index in a cell-specific manner, which may have important clinical applications.

12.
Diagnostics (Basel) ; 12(2)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35204436

ABSTRACT

Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) are applied to Magnetic Resonance Imaging (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. A prediction accuracy of 74%, the best known for this type of problem, was achieved on the unseen test set.

13.
Article in English | MEDLINE | ID: mdl-36327184

ABSTRACT

The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.

14.
Front Pediatr ; 10: 1022751, 2022.
Article in English | MEDLINE | ID: mdl-36819198

ABSTRACT

Background: The assessment and management of neonatal pain is crucial for the development and wellbeing of vulnerable infants. Specifically, neonatal pain is associated with adverse health outcomes but is often under-identified and therefore under-treated. Neonatal stress may be misinterpreted as pain and may therefore be treated inappropriately. The assessment of neonatal pain is complicated by the non-verbal status of patients, age-dependent variation in pain responses, limited education on identifying pain in premature infants, and the clinical utility of existing tools. Objective: We review research on the pain assessment scales currently used to assess neonatal pain in the neonatal intensive care unit. Methods: We performed a systematic review of original research using PRISMA guidelines for literature published between 2016 and 2021 using the key words "neonatal pain assessment" in the databases Web of Science, PubMed, and CINAHL. Fifteen articles remained after duplicate, irrelevant, or low-quality articles were eliminated. Results: We found research evaluating 13 neonatal pain scales. Important measurement categories include behavioral parameters, physiological parameters, continuous pain, acute pain, chronic pain, and the ability to distinguish between pain and stress. Provider education, inter-rater reliability and ease of use are important factors that contribute to an assessment tool's success. Each scale studied had strengths and limitations that aided or hindered its use for measuring neonatal pain in the neonatal intensive care unit, but no scale excelled in all areas identified as important for reliably identifying and measuring pain in this vulnerable population. Conclusion: A more comprehensive neonatal pain assessment tool and more provider education on differences in pain signals in premature neonates may be needed to increase the clinical utility of pain scales that address the different aspects of neonatal pain.

15.
J Chem Neuroanat ; 124: 102134, 2022 10.
Article in English | MEDLINE | ID: mdl-35839940

ABSTRACT

Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can effectively achieve comparable accuracy to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach using a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our deep learning (DL) models to automatically count cells using unbiased stereology methods. This update increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting methods, without a requirement for extra expert time. The second contribution of this work is a Multi-channel Input and Multi-channel Output (MIMO) method using a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as disector stacks). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding false negatives from overlapping cells in EDF images without the shortcomings of 3D and recurrent DL models. The contribution overcomes the issue of under-counting errors with EDF images due to overlapping cells in the z-plane (masking). We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.


Subjects
Artificial Intelligence , Neocortex , Algorithms , Animals , Cell Count/methods , Humans , Mice , Neurons
16.
Med Image Comput Comput Assist Interv ; 13433: 749-759, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36939418

ABSTRACT

Artificial Intelligence (AI)-based methods allow for automatic assessment of pain intensity based on continuous monitoring and processing of subtle changes in sensory signals, including facial expression, body movements, and crying frequency. Currently, there is a large and growing need for expanding current AI-based approaches to the assessment of postoperative pain in the neonatal intensive care unit (NICU). In contrast to acute procedural pain in the clinic, the NICU has neonates emerging from postoperative sedation, usually intubated, and with variable energy reserves for manifesting forceful pain responses. Here, we present a novel multi-modal approach designed, developed, and validated for assessment of neonatal postoperative pain in the challenging NICU setting. Our approach includes a robust network capable of efficient reconstruction of missing modalities (e.g., obscured facial expression due to intubation) using unsupervised spatio-temporal feature learning with a generative model for learning the joint features. Our approach generates the final pain score along with the intensity using an attentional cross-modal feature fusion. Using an experimental dataset from postoperative neonates in the NICU, our pain assessment approach achieves superior performance (AUC 0.906, accuracy 0.820) compared with state-of-the-art approaches.

17.
Tomography ; 7(2): 154-168, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33946756

ABSTRACT

Lung cancer causes more deaths globally than any other type of cancer. To determine the best treatment, detecting EGFR and KRAS mutations is of interest. However, non-invasive ways to obtain this information are not available. Furthermore, relevant public datasets are often too small, so the performance of single classifiers is not outstanding. In this paper, an ensemble approach is applied to increase the performance of EGFR and KRAS mutation prediction using a small dataset. A new voting scheme, Selective Class Average Voting (SCAV), is proposed, and its performance is assessed for both machine learning models and CNNs. For the EGFR mutation, the machine learning approach increased sensitivity from 0.66 to 0.75 and AUC from 0.68 to 0.70. With the deep learning approach, an AUC of 0.846 was obtained, and with SCAV, the accuracy of the model increased from 0.80 to 0.857. For the KRAS mutation, a significant increase in performance was found for both the machine learning models (AUC 0.65 to 0.71) and the deep learning models (AUC 0.739 to 0.778). The results obtained in this work show how to effectively learn from small image datasets to predict EGFR and KRAS mutations, and that using ensembles with SCAV increases the performance of machine learning classifiers and CNNs. The results provide confidence that, as large datasets become available, tools to augment clinical capabilities can be fielded.
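Plain class-average (soft) voting, which a selective scheme like SCAV refines, can be sketched as follows; the probability values are hypothetical and the selective weighting of the actual SCAV rule is not reproduced here:

```python
import numpy as np

def class_average_vote(prob_stack):
    """Soft voting: average each class's predicted probability across
    ensemble members, then pick the class with the highest mean.

    prob_stack: (n_models, n_samples, n_classes) predicted probabilities
    """
    mean_probs = np.asarray(prob_stack).mean(axis=0)
    return mean_probs.argmax(axis=1)

# Three hypothetical classifiers scoring two samples for wild-type (0) vs. mutant (1)
probs = np.array([
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.7, 0.3], [0.4, 0.6]],
    [[0.3, 0.7], [0.2, 0.8]],   # one model dissents on the first sample
])
print(class_average_vote(probs))  # -> [0 1]
```

Averaging probabilities rather than taking a hard majority lets confident models outweigh uncertain ones, which matters most on small datasets where individual classifiers are noisy.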


Subjects
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Carcinoma, Non-Small-Cell Lung/genetics , ErbB Receptors/genetics , Humans , Lung Neoplasms/genetics , Mutation , Proto-Oncogene Proteins p21(ras)/genetics
18.
IEEE Access ; 9: 72970-72979, 2021.
Article in English | MEDLINE | ID: mdl-34178559

ABSTRACT

A number of recent papers have shown experimental evidence that suggests it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than have previously been used show that models have high accuracy on seen sources but poor accuracy on unseen sources. The reason for the disparity is that the convolutional neural network model, which learns features, can focus on differences in X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may be different from raw images. Some data sets were of pediatric cases with pneumonia, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we have eliminated many confounding features by working with as close to raw data as possible. Still, deep learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models have achieved an AUC of 1.00 on seen data sources but in the worst case only scored an AUC of 0.38 on unseen ones. This indicates that such models need further assessment/development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.

19.
Conf Proc IEEE Int Conf Syst Man Cybern ; 2021: 1133-1138, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36936797

ABSTRACT

Spectrograms visualize the frequency components of a given signal, which may be an audio signal or even a time-series signal. Audio signals have a high sampling rate and high variability of frequency over time, and spectrograms can capture such variations well. Vital signs, however, are time-series signals with a low sampling frequency and low frequency variability, so spectrograms fail to express their variations and patterns. In this paper, we propose a novel solution that introduces frequency variability by applying frequency modulation to vital signs. We then apply spectrograms to the frequency-modulated signals to capture the patterns. The proposed approach has been evaluated on 4 different medical datasets across both prediction and classification tasks. The results show the efficacy of the approach for vital-sign signals, with accuracies of 91.55% and 91.67% in the prediction and classification tasks, respectively.
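The core idea of frequency-modulating a slowly varying vital sign before computing its spectrogram can be sketched as follows; the carrier and deviation frequencies, and the simple Hann-window STFT, are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def fm_modulate(signal, fs, carrier_hz=50.0, deviation_hz=20.0):
    """Frequency-modulate a slowly varying vital-sign signal onto a carrier
    so its variations appear as visible frequency shifts in a spectrogram."""
    s = (signal - signal.mean()) / (np.abs(signal).max() + 1e-12)  # normalize
    phase = 2 * np.pi * np.cumsum(carrier_hz + deviation_hz * s) / fs
    return np.cos(phase)

def spectrogram(x, win=128, hop=64):
    """Magnitude STFT with a Hann window (rows: time frames, cols: freq bins)."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

fs = 500.0
t = np.arange(0, 10, 1 / fs)
heart_rate = 70 + 5 * np.sin(2 * np.pi * 0.1 * t)  # hypothetical slow vital sign
spec = spectrogram(fm_modulate(heart_rate, fs))
print(spec.shape)  # (frames, win // 2 + 1 frequency bins)
```

Without the modulation step, the raw heart-rate series occupies only the lowest frequency bins and its slow drift is nearly invisible; after FM, that drift becomes a moving ridge around the carrier frequency that image-style classifiers can pick up.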

20.
J Neurosci Methods ; 354: 109102, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33607171

ABSTRACT

BACKGROUND: Quantifying cells in a defined region of biological tissue is critical for many clinical and preclinical studies, especially in the fields of pathology, toxicology, cancer and behavior. As part of a program to develop accurate, precise and more efficient automatic approaches for quantifying morphometric changes in biological tissue, we have shown that both deep learning-based and hand-crafted algorithms can estimate the total number of histologically stained cells at their maximal profile of focus in Extended Depth of Field (EDF) images. Deep learning-based approaches show accuracy comparable to manual counts on EDF images but significant enhancement in reproducibility, throughput efficiency and reduced error from human factors. However, most automated counting methods are designed for single-immunostained tissue sections. NEW METHOD: To expand the automatic counting methods to more complex dual-staining protocols, we developed an adaptive method to separate stain color channels on images from tissue sections stained by a primary immunostain with secondary counterstain. COMPARISON WITH EXISTING METHODS: The proposed method overcomes the limitations of the state-of-the-art stain-separation methods, such as requiring a pure stain color basis as a prerequisite or learning the stain color basis on each image. RESULTS: Experimental results are presented for automatic counts using deep learning-based and hand-crafted algorithms for sections immunostained for neurons (Neu-N) or microglial cells (Iba-1) with cresyl violet counterstain. CONCLUSION: Our findings show more accurate counts by deep learning methods compared to the handcrafted method. Thus, stain-separated images can function as input for automatic deep learning-based quantification methods designed for single-stained tissue sections.
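The Beer-Lambert color-deconvolution idea underlying stain separation can be sketched in a few lines; the stain absorption vectors below are hypothetical stand-ins (in practice they are estimated from the tissue or taken from published bases), and this is not the paper's adaptive method:

```python
import numpy as np

def separate_stains(rgb, stain_matrix):
    """Color deconvolution: convert RGB to optical density (Beer-Lambert),
    then unmix with per-stain absorption vectors to get stain concentrations.

    rgb: (h, w, 3) float image with values in (0, 1]
    stain_matrix: (n_stains, 3) rows are unit absorption vectors per stain
    """
    od = -np.log(np.clip(rgb, 1e-6, 1.0))   # optical density per channel
    pinv = np.linalg.pinv(stain_matrix)     # (3, n_stains) unmixing matrix
    return od.reshape(-1, 3) @ pinv         # (pixels, n_stains) concentrations

# Hypothetical absorption vectors for an immunostain and a counterstain
stains = np.array([
    [0.65, 0.70, 0.29],   # stain 1 (assumed basis)
    [0.27, 0.57, 0.78],   # stain 2 (assumed basis)
])
stains /= np.linalg.norm(stains, axis=1, keepdims=True)

# A pixel synthesized from pure stain 1 should unmix to channel 0 only
pixel = np.exp(-1.5 * stains[0]).reshape(1, 1, 3)
conc = separate_stains(pixel, stains)
print(np.round(conc, 3))
```

Each output channel can then be thresholded and counted independently, which is how a stain-separated image can feed counting pipelines built for single-stained sections.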


Subjects
Deep Learning , Algorithms , Coloring Agents , Humans , Image Processing, Computer-Assisted , Reproducibility of Results , Staining and Labeling