Results 1 - 20 of 79
1.
medRxiv ; 2024 May 20.
Article in English | MEDLINE | ID: mdl-38826461

ABSTRACT

Rationale: Genetic variants and gene expression predict risk of chronic obstructive pulmonary disease (COPD), but their effect on COPD heterogeneity is unclear. Objectives: To define high-risk COPD subtypes using both genetics (polygenic risk score, PRS) and blood gene expression (transcriptional risk score, TRS) and assess differences in clinical and molecular characteristics. Methods: We defined high-risk groups based on PRS and TRS quantiles by maximizing differences in protein biomarkers in a COPDGene training set and identified these groups in COPDGene and ECLIPSE test sets. We tested multivariable associations of subgroups with clinical outcomes and compared protein-protein interaction networks and drug repurposing analyses between high-risk groups. Measurements and Main Results: We examined two high-risk omics-defined groups in non-overlapping test sets (n=1,133 non-Hispanic white (NHW) COPDGene, n=299 African American (AA) COPDGene, n=468 ECLIPSE). We defined "high activity" (low PRS/high TRS) and "severe risk" (high PRS/high TRS) subgroups. Participants in both subgroups had lower body mass index (BMI), lower lung function, and alterations in metabolic, growth, and immune signaling processes compared to a low-risk (low PRS/low TRS) reference subgroup. "High activity" but not "severe risk" participants had greater prospective FEV1 decline (COPDGene: -51 mL/year; ECLIPSE: -40 mL/year), and their proteomic profiles were enriched in gene sets perturbed by treatment with 5-lipoxygenase inhibitors and angiotensin-converting enzyme (ACE) inhibitors. Conclusions: Concomitant use of polygenic and transcriptional risk scores identified clinical and molecular heterogeneity amongst high-risk individuals. Proteomic and drug repurposing analyses identified subtype-specific enrichment for therapies and suggest that prior drug repurposing failures may be explained by patient selection.
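Both scores above are weighted sums, and the subgroups are cut at score quantiles. A minimal Python sketch of that structure (the variant names, weights, cutoffs, and the leftover group label are invented for illustration, not taken from the study):

```python
def risk_score(values, weights):
    """Weighted sum over shared keys: a PRS weights risk-allele dosages,
    a TRS weights blood gene-expression levels the same way."""
    return sum(values[k] * weights[k] for k in weights if k in values)

def subgroup(prs, trs, prs_cut, trs_cut):
    """Quantile-style grouping mirroring the abstract's definitions;
    the cutoffs and the leftover label are illustrative only."""
    if trs >= trs_cut:
        return "severe risk" if prs >= prs_cut else "high activity"
    return "low risk" if prs < prs_cut else "high PRS only"

weights = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.30}  # made-up variants
dosages = {"rs_a": 2, "rs_b": 1, "rs_c": 0}            # risk-allele counts
prs = risk_score(dosages, weights)                     # 0.24 - 0.05 = 0.19
subgroup(prs, trs=1.2, prs_cut=0.5, trs_cut=1.0)       # "high activity"
```

In the study the cutoffs were chosen by maximizing protein-biomarker differences in the training set rather than fixed a priori.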

2.
Sci Rep ; 14(1): 5294, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438405

ABSTRACT

Monte Carlo simulations of physics processes at particle colliders like the Large Hadron Collider at CERN take up a major fraction of the computational budget. For some simulations, a single data point takes seconds, minutes, or even hours to compute from first principles. Since the necessary number of data points per simulation is on the order of 10⁹-10¹², machine learning regressors can be used in place of physics simulators to significantly reduce this computational burden. However, this task requires high-precision regressors that can deliver data with relative errors of less than 1% or even 0.1% over the entire domain of the function. In this paper, we develop optimal training strategies and tune various machine learning regressors to satisfy the high-precision requirement. We leverage symmetry arguments from particle physics to optimize the performance of the regressors. Inspired by ResNets, we design a Deep Neural Network with skip connections that outperforms fully connected Deep Neural Networks. We find that at lower dimensions, boosted decision trees far outperform neural networks while at higher dimensions neural networks perform significantly better. We show that these regressors can speed up simulations by a factor of 10³-10⁶ over the first-principles computations currently used in Monte Carlo simulations. Additionally, using symmetry arguments derived from particle physics, we reduce the number of regressors necessary for each simulation by an order of magnitude. Our work can significantly reduce the training and storage burden of Monte Carlo simulations at current and future collider experiments.
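The ResNet-style skip connection mentioned above can be sketched as a toy forward pass: the block's output is the input plus a learned correction, which is what makes very deep regressors trainable. The weights and sizes below are invented; a real regressor would stack many such blocks and train them by gradient descent:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    """y = x + W2 @ relu(W1 @ x): the skip connection feeds the input
    forward unchanged, so the layer only learns a correction to it."""
    h = matvec(W2, relu(matvec(W1, x)))
    return [a + b for a, b in zip(x, h)]

# Tiny made-up weights, just to show the shape of the computation.
x = [1.0, -2.0]
W1 = [[0.5, 0.0], [0.0, 0.5]]
W2 = [[1.0, 0.0], [0.0, 1.0]]
residual_block(x, W1, W2)  # [1.5, -2.0]
```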

3.
Am J Respir Crit Care Med ; 209(3): 273-287, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37917913

ABSTRACT

Rationale: Emphysema is a chronic obstructive pulmonary disease phenotype with important prognostic implications. Identifying blood-based biomarkers of emphysema will facilitate early diagnosis and development of targeted therapies. Objectives: To discover blood omics biomarkers for chest computed tomography-quantified emphysema and develop predictive biomarker panels. Methods: Emphysema blood biomarker discovery was performed using differential gene expression, alternative splicing, and protein association analyses in a training sample of 2,370 COPDGene participants with available blood RNA sequencing, plasma proteomics, and clinical data. Internal validation was conducted in a COPDGene testing sample (n = 1,016), and external validation was done in the ECLIPSE study (n = 526). Because low body mass index (BMI) and emphysema often co-occur, we performed a mediation analysis to quantify the effect of BMI on gene and protein associations with emphysema. Elastic net models with bootstrapping were also developed in the training sample sequentially using clinical, blood cell proportions, RNA-sequencing, and proteomic biomarkers to predict quantitative emphysema. Model accuracy was assessed by the area under the receiver operating characteristic curves for subjects stratified into tertiles of emphysema severity. Measurements and Main Results: Totals of 3,829 genes, 942 isoforms, 260 exons, and 714 proteins were significantly associated with emphysema (false discovery rate, 5%) and yielded 11 biological pathways. Seventy-four percent of these genes and 62% of these proteins showed mediation by BMI. Our prediction models demonstrated reasonable predictive performance in both COPDGene and ECLIPSE. The highest-performing model used clinical, blood cell, and protein data (area under the receiver operating characteristic curve in COPDGene testing, 0.90; 95% confidence interval, 0.85-0.90). 
Conclusions: Blood transcriptome and proteome-wide analyses revealed key biological pathways of emphysema and enhanced the prediction of emphysema.


Subject(s)
Emphysema , Pulmonary Disease, Chronic Obstructive , Pulmonary Emphysema , Humans , Transcriptome , Proteomics , Pulmonary Emphysema/genetics , Pulmonary Emphysema/complications , Biomarkers , Gene Expression Profiling
4.
medRxiv ; 2023 Apr 29.
Article in English | MEDLINE | ID: mdl-37162978

ABSTRACT

Background: Spirometry measures lung function by selecting the best of multiple efforts meeting pre-specified quality control (QC), and reporting two key metrics: forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC). We hypothesize that discarded submaximal and QC-failing data meaningfully contribute to the prediction of airflow obstruction and all-cause mortality. Methods: We evaluated volume-time spirometry data from the UK Biobank. We identified "best" spirometry efforts as those passing QC with the maximum FVC. "Discarded" efforts were either submaximal or failed QC. To create a combined representation of lung function, we implemented a contrastive learning approach, Spirogram-based Contrastive Learning Framework (Spiro-CLF), which utilized all recorded volume-time curves per participant and applied different transformations (e.g. flow-volume, flow-time). In a held-out 20% testing subset we applied the Spiro-CLF representation of a participant's overall lung function to 1) binary predictions of FEV1/FVC < 0.7 and FEV1 Percent Predicted (FEV1PP) < 80%, indicative of airflow obstruction, and 2) Cox regression for all-cause mortality. Findings: We included 940,705 volume-time curves from 352,684 UK Biobank participants with 2-3 spirometry efforts per individual (66.7% with 3 efforts) and at least one QC-passing spirometry effort. Of all spirometry efforts, 24.1% failed QC and 37.5% were submaximal. Spiro-CLF prediction of FEV1/FVC < 0.7 utilizing discarded spirometry efforts had an area under the receiver operating characteristic curve (AUROC) of 0.981 (0.863 for FEV1PP prediction). Incorporating discarded spirometry efforts in all-cause mortality prediction was associated with a concordance index (c-index) of 0.654, which exceeded the c-indices from FEV1 (0.590), FVC (0.559), or FEV1/FVC (0.599) from each participant's single best effort.
Interpretation: A contrastive learning model using raw spirometry curves can accurately predict lung function using submaximal and QC-failing efforts. This model also has superior prediction of all-cause mortality compared to standard lung function measurements. Funding: MHC is supported by NIH R01HL137927, R01HL135142, HL147148, and HL089856. BDH is supported by NIH K08HL136928, U01 HL089856, and an Alpha-1 Foundation Research Grant. DH is supported by NIH 2T32HL007427-41. EKS is supported by NIH R01 HL152728, R01 HL147148, U01 HL089856, R01 HL133135, P01 HL132825, and P01 HL114501. PJC is supported by NIH R01HL124233 and R01HL147326. SPB is supported by NIH R01HL151421 and UH3HL155806. TY, FH, and CYM are employees of Google LLC.
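The mortality comparison above is scored with the concordance index. A minimal sketch of Harrell's c-index for right-censored data, on toy inputs (not UK Biobank data):

```python
def concordance_index(times, events, scores):
    """Fraction of comparable pairs in which the higher risk score goes
    with the shorter observed time. A pair (i, j) with times[i] < times[j]
    is comparable only if subject i had an observed event (events[i] == 1);
    score ties count as half-concordant."""
    num, den = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i] == 1:
                den += 1
                if scores[i] > scores[j]:
                    num += 1.0
                elif scores[i] == scores[j]:
                    num += 0.5
    return num / den

# Toy data: higher score = higher predicted risk.
times = [2, 4, 6, 8]       # follow-up times
events = [1, 1, 0, 1]      # 1 = death observed, 0 = censored
scores = [0.9, 0.7, 0.4, 0.1]
cindex = concordance_index(times, events, scores)  # perfectly concordant: 1.0
```

A c-index of 0.5 corresponds to random ordering, which is why 0.654 versus ~0.56-0.60 is a meaningful gap.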

5.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 9149-9168, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021920

ABSTRACT

K-means is a fundamental clustering algorithm widely used in both academic and industrial applications. Its popularity can be attributed to its simplicity and efficiency. Studies show the equivalence of K-means to principal component analysis, non-negative matrix factorization, and spectral clustering. However, these studies focus on standard K-means with squared Euclidean distance. In this review paper, we unify the available approaches to generalizing K-means to solve challenging and complex problems. We show that these generalizations can be seen from four aspects: data representation, distance measure, label assignment, and centroid updating. As concrete applications of transforming problems into modified K-means formulation, we review the following applications: iterative subspace projection and clustering, consensus clustering, constrained clustering, domain adaptation, and outlier detection.
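As a reference point for those generalizations, standard Lloyd's K-means with squared Euclidean distance fits in a few lines; the comments mark where the review's four aspects plug in:

```python
def kmeans(points, centroids, iters=10):
    """Minimal Lloyd's K-means. The four generalizable aspects map to:
    the input `points` (data representation), `dist2` (distance measure),
    the `labels` step (label assignment), and the mean step (centroid
    updating)."""
    def dist2(a, b):  # distance measure
        return sum((x - y) ** 2 for x, y in zip(a, b))
    labels = []
    for _ in range(iters):
        # Label assignment: each point joins its nearest centroid.
        labels = [min(range(len(centroids)),
                      key=lambda k: dist2(p, centroids[k]))
                  for p in points]
        # Centroid updating: each centroid moves to the mean of its members.
        for k in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return labels, centroids

points = [[0.0], [1.0], [9.0], [10.0]]          # two well-separated groups
labels, centers = kmeans(points, [[0.5], [8.0]])
# labels -> [0, 0, 1, 1]; centers -> [[0.5], [9.5]]
```

Each generalization the review surveys swaps out one or more of these four pieces while keeping the alternating-optimization skeleton.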

6.
J Med Imaging (Bellingham) ; 10(2): 024005, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36992871

ABSTRACT

Purpose: Deep learning has demonstrated excellent performance enhancing noisy or degraded biomedical images. However, many of these models require access to a noise-free version of the images to provide supervision during training, which limits their utility. Here, we develop an algorithm (noise2Nyquist) that leverages the fact that Nyquist sampling provides guarantees about the maximum difference between adjacent slices in a volumetric image, which allows denoising to be performed without access to clean images. We aim to show that our method is more broadly applicable and more effective than other self-supervised denoising algorithms on real biomedical images, and provides comparable performance to algorithms that need clean images during training. Approach: We first provide a theoretical analysis of noise2Nyquist and an upper bound for denoising error based on sampling rate. We go on to demonstrate its effectiveness in denoising in a simulated example as well as real fluorescence confocal microscopy, computed tomography, and optical coherence tomography images. Results: We find that our method has better denoising performance than existing self-supervised methods and is applicable to datasets where clean versions are not available. Our method resulted in peak signal-to-noise ratio (PSNR) within 1 dB and structural similarity (SSIM) index within 0.02 of supervised methods. On medical images, it outperforms existing self-supervised methods by an average of 3 dB in PSNR and 0.1 in SSIM. Conclusion: noise2Nyquist can be used to denoise any volumetric dataset sampled at at least the Nyquist rate, making it useful for a wide variety of existing datasets.
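The key assumption, that adjacent slices of a Nyquist-sampled volume are nearly identical, can be illustrated without any network. Below, a zero-parameter stand-in for the learned denoiser estimates a slice from its noisy neighbours only, so the estimate never sees that slice's own noise (toy 1-D "slices" of my own construction, not the paper's model or data):

```python
import random, math

random.seed(0)
n_slices, n_pix = 20, 500
# Clean "volume": the signal varies slowly along depth z, standing in for
# Nyquist-rate sampling, so adjacent slices are nearly equal.
clean = [[math.sin(0.1 * z + 0.01 * i) for i in range(n_pix)]
         for z in range(n_slices)]
noisy = [[v + random.gauss(0, 0.3) for v in row] for row in clean]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Estimate slice z from its noisy neighbours only: independent noise
# averages down, while the slowly varying signal is nearly preserved.
z = 10
estimate = [(a + b) / 2 for a, b in zip(noisy[z - 1], noisy[z + 1])]
mse_noisy = mse(noisy[z], clean[z])  # roughly the noise variance (0.3**2)
mse_est = mse(estimate, clean[z])    # clearly smaller
```

noise2Nyquist itself trains a network with the neighbouring slice as the target; this sketch only demonstrates why that target is informative.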

7.
Nat Commun ; 14(1): 339, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36670105

ABSTRACT

The El Niño Southern Oscillation (ENSO) is a semi-periodic fluctuation in sea surface temperature (SST) over the tropical central and eastern Pacific Ocean that influences interannual variability in regional hydrology across the world through long-range dependence or teleconnections. Recent research has demonstrated the value of Deep Learning (DL) methods for improving ENSO prediction as well as Complex Networks (CN) for understanding teleconnections. However, gaps in predictive understanding of ENSO-driven river flows include the black box nature of DL, the use of simple ENSO indices to describe a complex phenomenon and translating DL-based ENSO predictions to river flow predictions. Here we show that eXplainable DL (XDL) methods, based on saliency maps, can extract interpretable predictive information contained in global SST and discover SST information regions and dependence structures relevant for river flows which, in tandem with climate network constructions, enable improved predictive understanding. Our results reveal additional information content in global SST beyond ENSO indices, develop understanding of how SSTs influence river flows, and generate improved river flow prediction, including uncertainty estimation. Observations, reanalysis data, and earth system model simulations are used to demonstrate the value of the XDL-CN based methods for future interannual and decadal scale climate projections.


Subject(s)
Deep Learning , El Nino-Southern Oscillation , Rivers , Temperature , Pacific Ocean
8.
Front Med (Lausanne) ; 9: 981074, 2022.
Article in English | MEDLINE | ID: mdl-36388913

ABSTRACT

Tertiary lymphoid structures (TLS) are specialized lymphoid formations that serve as local repertoires of T and B cells at sites of chronic inflammation, autoimmunity, and cancer. While the presence of TLS has been associated with improved response to immune checkpoint blockade therapies and overall outcomes in several cancers, its prognostic value in basal cell carcinoma (BCC) has not been investigated. Herein, we determined the prognostic impact of TLS by relating its prevalence and maturation with outcome measures of anti-tumor immunity, namely tumor-infiltrating lymphocytes (TILs) and tumor killing. In 30 distinct BCCs, we show that the presence of TLS was significantly enriched in tumors harboring a nodular component and that more mature primary TLS was associated with TIL counts. Moreover, assessment of the fibrillary matrix surrounding tumors showed discrete morphologies significantly associated with higher TIL counts, critically accounting for heterogeneity in TIL count distribution within TLS maturation stages. Specifically, increased length of fibers and lacunarity of the matrix, with concomitant reduction in density and alignment of fibers, were present surrounding tumors displaying high TIL counts. Given the interest in inducing TLS formation as a therapeutic intervention as well as its documented prognostic value, elucidating potential impediments to the ability of TLS to drive anti-tumor immunity within the tumor microenvironment warrants further investigation. These results highlight the need to integrate stromal features that may hinder TLS formation and/or its effective function as a mediator of immunotherapy response.

9.
Ophthalmol Sci ; 2(2): 100122, 2022 Jun.
Article in English | MEDLINE | ID: mdl-36249702

ABSTRACT

Purpose: To compare the efficacy and efficiency of training neural networks for medical image classification using comparison labels indicating relative disease severity versus diagnostic class labels from a retinopathy of prematurity (ROP) image dataset. Design: Evaluation of diagnostic test or technology. Participants: Deep learning neural networks trained on expert-labeled wide-angle retinal images obtained from patients undergoing diagnostic ROP examinations obtained as part of the Imaging and Informatics in ROP (i-ROP) cohort study. Methods: Neural networks were trained with either class or comparison labels indicating plus disease severity in ROP retinal fundus images from 2 datasets. After training and validation, all networks underwent evaluation using a separate test dataset in 1 of 2 binary classification tasks: normal versus abnormal or plus versus nonplus. Main Outcome Measures: Area under the receiver operating characteristic curve (AUC) values were measured to assess network performance. Results: Given the same number of labels, neural networks trained with comparison labels learned more efficiently, generating significantly higher AUCs in both classification tasks across both datasets. Similarly, given the same number of images, comparison learning developed networks with significantly higher AUCs across both classification tasks in 1 of 2 datasets. The difference in efficiency and accuracy between models trained on either label type decreased as the size of the training set increased. Conclusions: Comparison labels individually are more informative and more abundant per sample than class labels. These findings indicate a potential means of overcoming the common obstacle of data variability and scarcity when training neural networks for medical image classification tasks.

10.
Environ Sci Technol ; 56(18): 13473-13484, 2022 09 20.
Article in English | MEDLINE | ID: mdl-36048618

ABSTRACT

Rapid progress in various advanced analytical methods, such as single-cell technologies, enables an unprecedented and deeper understanding of microbial ecology beyond the resolution of conventional approaches. A major application challenge lies in determining a sufficient sample size without sufficient prior knowledge of the community complexity, and in balancing statistical power against limited time or resources. This hinders the desired standardization and wider application of these technologies. Here, we proposed, tested, and validated a computational sample-size assessment protocol based on a metric named kernel divergence. This metric has two advantages. First, it directly compares data set-wise distributional differences with no requirement for human intervention or prior-knowledge-based preclassification. Second, minimal assumptions about distribution and sample space are made in data processing, broadening its application domain; this enables test-verified, appropriate handling of data sets with both linear and nonlinear relationships. The model was then validated in a case study with Single-cell Raman Spectroscopy (SCRS) phenotyping data sets from eight different enhanced biological phosphorus removal (EBPR) activated sludge communities located across North America. The model allows the determination of a sufficient sampling size for any targeted or customized information capture capacity or resolution level. Given its flexibility and minimal restrictions on input data types, the proposed method is expected to be a standardized approach for sampling size optimization, enabling more comparable and reproducible experiments and analyses on complex environmental samples. Finally, these advantages enable extension of the capability to other single-cell technologies or environmental applications with data sets exhibiting continuous features.
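The abstract does not define its kernel divergence precisely, but a kernel two-sample statistic in the same spirit is the squared maximum mean discrepancy (MMD), which likewise compares whole data sets without binning or preclassification. A sketch for 1-D data (my assumption of an RBF kernel and illustrative samples):

```python
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: a data
    set-wise distributional comparison requiring no human intervention."""
    def avg(a, b):
        return sum(rbf(x, y, gamma) for x in a for y in b) / (len(a) * len(b))
    return avg(xs, xs) + avg(ys, ys) - 2 * avg(xs, ys)

sample_a = [0.0, 0.1, 0.2, 0.3]
sample_b = [3.0, 3.1, 3.2, 3.3]
mmd2(sample_a, sample_a)  # 0.0 for identical samples
mmd2(sample_a, sample_b)  # large for well-separated samples
```

A sample-size protocol in this style would grow a subsample until its divergence from the full data set (or between replicate subsamples) falls below a tolerance tied to the desired resolution level.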


Subject(s)
Biological Products , Phosphorus , Humans , Machine Learning , Phosphorus/chemistry , Polyphosphates , Sewage , Spectrum Analysis, Raman
11.
Chronic Obstr Pulm Dis ; 9(3): 349-365, 2022 Jul 29.
Article in English | MEDLINE | ID: mdl-35649102

ABSTRACT

Background: The heterogeneous nature of chronic obstructive pulmonary disease (COPD) complicates the identification of predictors of disease progression. We aimed to improve the prediction of disease progression in COPD by using machine learning and incorporating a rich dataset of phenotypic features. Methods: We included 4496 smokers with available data from their enrollment and 5-year follow-up visits in the COPD Genetic Epidemiology (COPDGene®) study. We constructed linear regression (LR) and supervised random forest models to predict 5-year progression in forced expiratory volume in 1 second (FEV1) from 46 baseline features. Using cross-validation, we randomly partitioned participants into training and testing samples. We also validated the results in the COPDGene 10-year follow-up visit. Results: Predicting the change in FEV1 over time is more challenging than simply predicting the future absolute FEV1 level. For random forest, R-squared was 0.15 and the area under the receiver operating characteristic (ROC) curve for identifying participants in the top quartile of observed progression was 0.71 in the testing sample; the corresponding values in the validation sample were 0.10 and 0.70. Random forest provided slightly better performance than LR. Accuracy was best for Global Initiative for Chronic Obstructive Lung Disease (GOLD) grade 1-2 participants, and accurate prediction was harder to achieve in advanced stages of the disease. Predictive variables and their relative importance differed across GOLD grades. Conclusion: Random forest, along with deep phenotyping, predicts FEV1 progression with reasonable accuracy. There is significant room for improvement in future models. This prediction model facilitates the identification of smokers at increased risk for rapid disease progression. Such findings may be useful in the selection of patient populations for targeted clinical trials.

12.
Neuroinformatics ; 20(4): 965-979, 2022 10.
Article in English | MEDLINE | ID: mdl-35349109

ABSTRACT

Degeneracy in biological systems refers to a many-to-one mapping between physical structures and their functional (including psychological) outcomes. Despite the ubiquity of the phenomenon, traditional analytical tools for modeling degeneracy in neuroscience are extremely limited. In this study, we generated synthetic datasets describing three situations of degeneracy in fMRI data to demonstrate the limitations of the current univariate approach. We describe a novel computational approach for this analysis, referred to as neural topographic factor analysis (NTFA). NTFA is designed to capture variations in neural activity across task conditions and participants. The advantage of this discovery-oriented approach is to reveal whether and how experimental trials and participants cluster into task conditions and participant groups. We applied NTFA to the simulated data, recovering the appropriate degeneracy assumption in all three situations and demonstrating NTFA's utility in uncovering degeneracy. Lastly, we discuss the importance of testing for degeneracy in fMRI data and the implications of applying NTFA to do so.


Subject(s)
Brain Mapping , Magnetic Resonance Imaging , Humans
13.
Neural Netw ; 149: 95-106, 2022 May.
Article in English | MEDLINE | ID: mdl-35219032

ABSTRACT

Lifelong Learning (LL) refers to the ability to continually learn and solve new problems with incrementally available information over time while retaining previous knowledge. Much attention has been given lately to Supervised Lifelong Learning (SLL) with a stream of labelled data. In contrast, we focus on resolving challenges in Unsupervised Lifelong Learning (ULL) with streaming unlabelled data when the data distribution and the unknown class labels evolve over time. A Bayesian framework is a natural way to incorporate past knowledge and sequentially update the belief with new data. We develop a fully Bayesian inference framework for ULL with a novel end-to-end Deep Bayesian Unsupervised Lifelong Learning (DBULL) algorithm, which can progressively discover new clusters from unlabelled data without forgetting the past while learning latent representations. To efficiently maintain past knowledge, we develop a novel knowledge preservation mechanism via sufficient statistics of the latent representation of raw data. To detect potential new clusters on the fly, we develop an automatic cluster discovery and redundancy removal strategy in our inference, inspired by nonparametric Bayesian statistics. We demonstrate the effectiveness of our approach using image and text corpora benchmark datasets in both LL and batch settings.
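Knowledge preservation via sufficient statistics can be illustrated in its simplest form: a Gaussian cluster summarized by count, mean, and sum of squared deviations, updated online with Welford's algorithm. This is an illustrative reduction of the idea, not DBULL's actual mechanism:

```python
class GaussianCluster:
    """A cluster summarized only by sufficient statistics (count, mean,
    sum of squared deviations): it can absorb a stream of points forever
    without storing raw data, so past knowledge is never discarded."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's numerically stable online update.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

cluster = GaussianCluster()
for x in [2.0, 4.0, 6.0]:
    cluster.update(x)
# cluster.mean == 4.0, cluster.variance == 8/3
```

Because the statistics are additive, clusters for past data can keep being refined, or merged, as the stream evolves, which is the property that guards against forgetting.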


Subject(s)
Algorithms , Education, Continuing , Bayes Theorem
14.
J Hazard Mater ; 423(Pt B): 127141, 2022 02 05.
Article in English | MEDLINE | ID: mdl-34560480

ABSTRACT

One of the major challenges in realizing the Tox21 vision is the urgent need to establish a quantitative link between in-vitro assay molecular endpoints and in-vivo regulatory-relevant phenotypic toxicity endpoints. Current toxicomics approaches still mostly rely on large numbers of redundant markers without pre-selection or ranking; selecting relevant biomarkers with minimal redundancy would therefore reduce the number of markers to be monitored and the cost, time, and complexity of toxicity screening and risk monitoring. Here, we demonstrated that, using a time-series toxicomics in-vitro assay along with a machine learning-based feature selection method (maximum relevance and minimum redundancy, MRMR) and classification method (support vector machine, SVM), an "optimal" number of biomarkers with minimum redundancy can be identified for prediction of phenotypic toxicity endpoints with good accuracy. We included two case studies, for in-vivo carcinogenicity and Ames genotoxicity prediction, using 20 selected chemicals including model genotoxic chemicals and negative controls. The results suggested that, employing the adverse outcome pathway (AOP) concept, molecular endpoints based on a relatively small, properly selected biomarker ensemble involved in the DNA-damage and repair pathways conserved among eukaryotes were able to predict both Ames genotoxicity endpoints and in-vivo carcinogenicity in rats. A prediction accuracy of 76% with AUC = 0.81 was achieved when predicting in-vivo carcinogenicity with the top-ranked five biomarkers. For Ames genotoxicity prediction, the top-ranked five biomarkers achieved a prediction accuracy of 70% with AUC = 0.75. However, the specific biomarkers identified as the top-ranked five differ between the two phenotypic genotoxicity assays.
The top-ranked biomarkers for in-vivo carcinogenicity prediction mainly concern double-strand break repair and DNA recombination, whereas the top-ranked biomarkers selected for Ames genotoxicity prediction are associated with base- and nucleotide-excision repair. The method developed in this study will help to fill the knowledge gap in phenotypic anchoring and predictive toxicology, and contribute to progress in implementing the Tox21 vision for environmental and health applications.
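A minimal greedy MRMR loop can be sketched with absolute Pearson correlation standing in for the mutual-information criteria MRMR implementations typically use; the feature names and values below are invented toy data, not the study's biomarkers:

```python
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def mrmr(features, target, k):
    """Greedily pick the feature maximizing
    (relevance to target) - (mean redundancy with already-picked)."""
    picked = []
    while len(picked) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            if not picked:
                return rel
            red = sum(abs(pearson(features[name], features[p]))
                      for p in picked) / len(picked)
            return rel - red
        candidates = [f for f in features if f not in picked]
        picked.append(max(candidates, key=score))
    return picked

# Toy markers: f2 duplicates f1 exactly, f3 is weaker but complementary,
# so the redundancy penalty makes MRMR skip the duplicate.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
features = {"f1": [1.0, 2.0, 3.0, 4.0, 6.0],
            "f2": [1.0, 2.0, 3.0, 4.0, 6.0],
            "f3": [2.0, 1.0, 4.0, 5.0, 3.0]}
picked = mrmr(features, target, 2)  # ["f1", "f3"], not the duplicate "f2"
```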


Subject(s)
DNA Damage , Toxicogenetics , Animals , Biological Assay , Biomarkers , Machine Learning , Rats
15.
PLoS Comput Biol ; 17(10): e1009433, 2021 10.
Article in English | MEDLINE | ID: mdl-34634029

ABSTRACT

Most predictive models based on gene expression data do not leverage information related to gene splicing, despite the fact that splicing is a fundamental feature of eukaryotic gene expression. Cigarette smoking is an important environmental risk factor for many diseases, and it has profound effects on gene expression. Using smoking status as a prediction target, we developed deep neural network predictive models using gene, exon, and isoform level quantifications from RNA sequencing data in 2,557 subjects in the COPDGene Study. We observed that models using exon and isoform quantifications clearly outperformed gene-level models when using data from 5 genes from a previously published prediction model. Whereas the test set performance of the previously published model was 0.82 in the original publication, our exon-based models including an exon-to-isoform mapping layer achieved a test set AUC (area under the receiver operating characteristic) of 0.88, which improved to an AUC of 0.94 using exon quantifications from a larger set of genes. Isoform variability is an important source of latent information in RNA-seq data that can be used to improve clinical prediction models.


Subject(s)
Deep Learning , Models, Statistical , RNA-Seq/methods , Smoking , Aged , Computational Biology , Exons/genetics , Female , Gene Expression Profiling , Humans , Male , Middle Aged , Protein Isoforms/genetics , ROC Curve , Smoking/epidemiology , Smoking/genetics
16.
Sci Rep ; 11(1): 12576, 2021 06 15.
Article in English | MEDLINE | ID: mdl-34131165

ABSTRACT

Reflectance confocal microscopy (RCM) is an effective non-invasive tool for cancer diagnosis. However, acquiring and reading RCM images requires extensive training and experience, and novice clinicians exhibit high discordance in diagnostic accuracy. Quantitative tools to standardize image acquisition could reduce both the required training and the diagnostic variability. To perform diagnostic analysis, clinicians collect a set of RCM mosaics (RCM images concatenated in a raster fashion to extend the field of view) at 4-5 specific layers in skin, all localized at the junction between the epidermal and dermal layers (dermal-epidermal junction, DEJ), necessitating locating that junction before mosaic acquisition. In this study, we automate DEJ localization using deep recurrent convolutional neural networks to delineate skin strata in stacks of RCM images collected at consecutive depths. Success will enable automated, quantitative mosaic acquisition, thus reducing inter-operator variability and bringing standardization to imaging. Testing our model against an expert-labeled dataset of 504 RCM stacks, we achieved [Formula: see text] classification accuracy and a nine-fold reduction in the number of anatomically impossible errors compared to the previous state of the art.


Subject(s)
Early Detection of Cancer , Microscopy, Confocal/methods , Skin Neoplasms/diagnosis , Epidermis/diagnostic imaging , Epidermis/pathology , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
18.
Psychophysiology ; 58(6): e13818, 2021 06.
Article in English | MEDLINE | ID: mdl-33768687

ABSTRACT

Emotional granularity describes the ability to create emotional experiences that are precise and context-specific. Despite growing evidence of a link between emotional granularity and mental health, the physiological correlates of granularity have been under-investigated. This study explored the relationship between granularity and cardiorespiratory physiological activity in everyday life, with particular reference to the role of respiratory sinus arrhythmia (RSA), an estimate of vagal influence on the heart often associated with positive mental and physical health outcomes. Participants completed a physiologically triggered experience-sampling protocol including ambulatory recording of electrocardiogram, impedance cardiogram, movement, and posture. At each prompt, participants generated emotion labels to describe their current experience. In an end-of-day survey, participants elaborated on each prompt by rating the intensity of their experience on a standard set of emotion adjectives. Consistent with our hypotheses, individuals with higher granularity exhibited a larger number of distinct patterns of physiological activity during seated rest, and more situationally precise patterns of activity during emotional events: granularity was positively correlated with the number of clusters of cardiorespiratory physiological activity discovered in seated rest data, as well as with the performance of classifiers trained on event-related changes in physiological activity. Granularity was also positively associated with RSA during seated rest periods, although this relationship did not reach significance in this sample. These findings are consistent with constructionist accounts of emotion that propose concepts as a key mechanism underlying individual differences in emotional experience, physiological regulation, and physical health.


Subject(s)
Cardiorespiratory Fitness/physiology , Emotions/physiology , Heart Rate/physiology , Respiratory Sinus Arrhythmia/physiology , Adult , Electrocardiography , Female , Humans , Male , Posture , Surveys and Questionnaires , Young Adult
19.
Sci Rep ; 11(1): 3679, 2021 02 11.
Article in English | MEDLINE | ID: mdl-33574486

ABSTRACT

Reflectance confocal microscopy (RCM) is a non-invasive imaging tool that reduces the need for invasive histopathology in skin cancer diagnosis by providing high-resolution mosaics showing the architectural patterns of skin, which are used to identify malignancies in-vivo. RCM mosaics are similar to dermatopathology sections, both requiring extensive training to interpret. However, these modalities differ in orientation, as RCM mosaics are horizontal (parallel to the skin surface) while histopathology sections are vertical, and in contrast mechanism: RCM relies on a single (reflectance) mechanism yielding grayscale images, whereas histopathology uses multi-factor color-stained contrast. Image analysis and machine learning methods can potentially provide a diagnostic aid for clinicians interpreting RCM mosaics, easing adoption and helping to utilize RCM more efficiently in routine clinical practice. However, standard supervised machine learning may require a prohibitive volume of hand-labeled training data. In this paper, we present a weakly supervised machine learning model to perform semantic segmentation of architectural patterns encountered in RCM mosaics. Unlike the more widely used fully supervised segmentation models that require pixel-level annotations, which are labor-intensive and error-prone to obtain, here we focus on training models using only patch-level labels (e.g. a single field of view within an entire mosaic). We segment RCM mosaics into "benign" and "aspecific (nonspecific)" regions, where aspecific regions represent the loss of regular architecture due to injury and/or inflammation, pre-malignancy, or malignancy. We adopt EfficientNet, a deep neural network (DNN) proven to accomplish classification tasks accurately, to generate class activation maps, and use a Gaussian weighting kernel to stitch smaller images back into larger fields of view.
The trained DNN achieved an average area under the curve of 0.969 and a Dice coefficient of 0.778, showing the feasibility of spatially localizing aspecific regions in RCM images and making the diagnostic decision model more interpretable to clinicians.


Subject(s)
Image Processing, Computer-Assisted , Microscopy, Confocal , Skin Neoplasms/diagnosis , Skin/ultrastructure , Humans , Machine Learning , Neural Networks, Computer , Semantics , Skin/diagnostic imaging , Skin/pathology , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology