Results 1 - 20 of 47
1.
BMC Med Res Methodol; 22(1): 336, 2022 Dec 28.
Article in English | MEDLINE | ID: mdl-36577938

ABSTRACT

BACKGROUND: Many metagenomic studies have linked imbalances in microbial abundance profiles to a wide range of diseases and suggest using these profiles as potential markers for metagenome-associated conditions. Given the central role of biomarkers in understanding disease progression and developing possible therapies, various computational tools have been proposed for metagenomic biomarker detection. However, most existing tools require prior scripting knowledge and lack user-friendly interfaces, so installing, configuring, and running them takes considerable time and effort. Moreover, there is no all-in-one solution for running and comparing various metagenomic biomarker detection tools simultaneously. In addition, most of these tools simply present the suggested biomarkers without any statistical evaluation of their quality. RESULTS: To overcome these limitations, this work presents MetaAnalyst, a software package with a simple graphical user interface (GUI) that (i) automates the installation and configuration of 28 state-of-the-art tools, (ii) supports flexible study designs to enable studying a dataset under different scenarios smoothly, (iii) runs and evaluates several algorithms simultaneously, (iv) supports different input formats and provides the user with several preprocessing capabilities, (v) provides a variety of metrics to evaluate the quality of the suggested markers, and (vi) presents the outcomes as publication-quality plots with various formatting capabilities as well as Excel sheets. CONCLUSIONS: The utility of this tool has been verified by studying a metagenomic dataset under four scenarios. The executable file for MetaAnalyst, along with its user manual, is available at https://github.com/mshawaqfeh/MetaAnalyst
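
A sketch of one marker-quality metric of the kind listed under (v): scoring a candidate marker by cross-validated AUC. This is not MetaAnalyst code, the data are synthetic stand-ins, and the tool's actual metric set may differ.

    # Score a candidate microbial marker by cross-validated AUC (illustrative).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    abundance = rng.lognormal(size=(100, 1))              # candidate microbe, 100 samples
    phenotype = (np.log(abundance[:, 0]) + rng.normal(size=100) > 0).astype(int)

    auc = cross_val_score(LogisticRegression(), np.log1p(abundance), phenotype,
                          cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC of the candidate marker: {auc:.2f}")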


Subjects
Algorithms; Software; Humans; Metagenomics; Biomarkers; Phenotype
2.
BMC Bioinformatics; 19(Suppl 3): 72, 2018 Mar 21.
Article in English | MEDLINE | ID: mdl-29589560

ABSTRACT

BACKGROUND: Analyzing variance heterogeneity in genome-wide association studies (vGWAS) is an emerging approach for detecting genetic loci involved in gene-gene and gene-environment interactions. vGWAS analysis detects variability in phenotype values across genotypes, as opposed to typical GWAS analysis, which detects variations in the mean phenotype value. RESULTS: A handful of vGWAS analysis methods have recently been introduced in the literature, but very little work has been done to evaluate them. To enable the development of better vGWAS analysis methods, this work presents the first quantitative vGWAS simulation procedure. To that end, we describe the mathematical framework and algorithm for generating quantitative vGWAS phenotype data from genotype profiles. Our simulation model accounts for both haploid and diploid genotypes under different modes of dominance, and it can simulate any number of genetic loci causing mean and variance heterogeneity. CONCLUSIONS: We demonstrate the utility of our simulation procedure by generating a variety of genetic locus types to evaluate common GWAS and vGWAS analysis methods. The results of this evaluation highlight the challenges current tools face in detecting GWAS and vGWAS loci.
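
A minimal sketch of the core idea, not the paper's simulation model (the effect size and variance link are assumptions): a locus that alters the phenotype variance but not its mean goes undetected by a mean test yet is flagged by a variance test.

    # Simulate a diploid variance-heterogeneity locus and test it two ways.
    import numpy as np
    from scipy.stats import levene, f_oneway

    rng = np.random.default_rng(1)
    genotype = rng.binomial(2, 0.3, size=2000)            # 0/1/2 minor-allele copies
    sd = 1.0 + 0.4 * genotype                             # variance effect, no mean effect
    phenotype = rng.normal(loc=0.0, scale=sd)

    groups = [phenotype[genotype == g] for g in (0, 1, 2)]
    print("mean test (ANOVA) p =", f_oneway(*groups).pvalue)      # expected non-significant
    print("variance test (Levene) p =", levene(*groups).pvalue)   # expected significant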


Subjects
Computer Simulation; Genome-Wide Association Study; Algorithms; Diploidy; Genetic Loci; Genotype; Humans; Linkage Disequilibrium/genetics; Phenotype; Polymorphism, Single Nucleotide/genetics
3.
BMC Bioinformatics; 18(1): 328, 2017 Jul 10.
Article in English | MEDLINE | ID: mdl-28693478

ABSTRACT

BACKGROUND: Biomarker detection presents itself as a major means of translating biological data into clinical applications. Due to recent advances in high-throughput sequencing technologies, an increasing number of metagenomics studies have suggested dysbiosis in microbial communities as a potential biomarker for certain diseases. The reproducibility of results drawn from metagenomic data is crucial for clinical applications and for preventing incorrect biological conclusions. Variability in the sample size and in the subjects participating in the experiments induces diversity, which may drastically change the outcome of biomarker detection algorithms. Therefore, a robust biomarker detection algorithm that ensures the consistency of the results, irrespective of the natural diversity present in the samples, is needed. RESULTS: Toward this end, this paper proposes a novel Regularized Low Rank-Sparse Decomposition (RegLRSD) algorithm. RegLRSD models the bacterial abundance data as a superposition of a sparse matrix and a low-rank matrix, which account for the differentially and non-differentially abundant microbes, respectively. Hence, the biomarker detection problem is cast as a matrix decomposition problem. To yield more consistent and solid biological conclusions, RegLRSD incorporates the prior knowledge that irrelevant microbes do not exhibit significant variation between samples belonging to different phenotypes. Moreover, an efficient algorithm to extract the sparse matrix is proposed. Comprehensive comparisons of RegLRSD with state-of-the-art algorithms on three realistic datasets are presented. The obtained results demonstrate that RegLRSD consistently outperforms the other algorithms in terms of reproducibility and provides a marker list with high classification accuracy. CONCLUSIONS: The proposed RegLRSD algorithm provides high reproducibility and classification accuracy regardless of the dataset complexity and the number of selected biomarkers. This renders RegLRSD a reliable and powerful tool for identifying potential metagenomic biomarkers.
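
The low-rank-plus-sparse idea can be sketched with a generic robust-PCA-style split via an inexact augmented Lagrangian scheme. This is an illustration of the modeling idea only, not the RegLRSD algorithm; its regularization and solver differ, and the data below are synthetic.

    # Generic low-rank + sparse decomposition (robust-PCA-style, illustrative).
    import numpy as np

    def lr_sparse_split(M, lam=None, iters=200):
        m, n = M.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        mu = m * n / (4.0 * np.abs(M).sum())
        L, S, Y = (np.zeros_like(M) for _ in range(3))
        for _ in range(iters):
            U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt          # singular-value threshold
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)  # soft threshold
            Y += mu * (M - L - S)
        return L, S

    rng = np.random.default_rng(2)
    background = rng.normal(size=(30, 5)) @ rng.normal(size=(5, 40))  # non-differential part
    differential = np.zeros((30, 40))
    differential[3, 20:] = 6.0          # microbe 3 elevated in one phenotype group
    L, S = lr_sparse_split(background + differential)
    print("microbe flagged as differential:", int(np.abs(S).mean(axis=1).argmax()))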


Subjects
Algorithms; Biomarkers/analysis; Metagenomics/methods; Animals; Biomarkers/metabolism; Colitis, Ulcerative/diagnosis; Colitis, Ulcerative/metabolism; Dogs; Exocrine Pancreatic Insufficiency/diagnosis; Exocrine Pancreatic Insufficiency/metabolism; High-Throughput Nucleotide Sequencing; Inflammatory Bowel Diseases/diagnosis; Inflammatory Bowel Diseases/metabolism; Mice; Reproducibility of Results
4.
BMC Genomics; 18(Suppl 3): 228, 2017 Mar 27.
Article in English | MEDLINE | ID: mdl-28361680

ABSTRACT

BACKGROUND: Inferring microbial interaction networks (MINs) and modeling their dynamics are critical to understanding the mechanisms of bacterial ecosystems and designing antibiotic and/or probiotic therapies. Recently, several approaches were proposed to infer MINs using the generalized Lotka-Volterra (gLV) model. A main drawback of these models is that they consider only measurement noise, without accounting for uncertainties in the underlying dynamics. Furthermore, MIN inference is characterized by a limited number of observations and by nonlinearity in the regulatory mechanisms, so novel estimation techniques are needed to address these challenges. RESULTS: This work proposes SgLV-EKF: a stochastic gLV model that adopts the extended Kalman filter (EKF) algorithm to model MIN dynamics. In particular, SgLV-EKF employs a stochastic model of the MIN, adding a noise term to the dynamical model to compensate for modeling uncertainties. This stochastic model is more realistic than the conventional gLV model, which assumes that the MIN dynamics are perfectly governed by the gLV equations. After specifying the stochastic model structure, we propose the EKF to estimate the MIN. SgLV-EKF was compared with two similarity-based algorithms, one algorithm from the integral-based family, and two regression-based algorithms, in terms of the performance achieved on two synthetic datasets and two real datasets. The first synthetic dataset models randomness in the measurement data, whereas the second incorporates uncertainties in the underlying dynamics. The real datasets come from a recent study of antibiotic-mediated Clostridium difficile infection. The experimental results demonstrate that SgLV-EKF outperforms the alternative methods in terms of robustness to measurement noise, modeling errors, and tracking the dynamics of the MIN. CONCLUSIONS: Performance analysis demonstrates that the proposed SgLV-EKF algorithm is a powerful and reliable tool for inferring MINs and tracking their dynamics.
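
A minimal sketch of the stochastic gLV state model under an Euler-Maruyama discretization: dx_i/dt = x_i(mu_i + sum_j a_ij x_j) + w_i. The growth rates, interaction matrix, and noise level below are illustrative assumptions, and the EKF estimation step is omitted.

    # Simulate a 2-species stochastic gLV system with additive process noise.
    import numpy as np

    rng = np.random.default_rng(3)
    mu = np.array([0.8, 0.6])              # intrinsic growth rates
    A = np.array([[-1.0, -0.3],            # interaction matrix (self-limiting)
                  [-0.2, -0.8]])
    x = np.array([0.3, 0.4])
    dt, sigma_w = 0.01, 0.01
    traj = []
    for _ in range(2000):
        drift = x * (mu + A @ x)
        x = np.clip(x + dt * drift + np.sqrt(dt) * sigma_w * rng.normal(size=2), 0, None)
        traj.append(x.copy())
    print("steady-state abundances ~", traj[-1])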


Subjects
Algorithms; Metagenomics/methods; Microbial Interactions; Models, Theoretical
5.
BMC Genomics; 17 Suppl 7: 549, 2016 Aug 22.
Article in English | MEDLINE | ID: mdl-27556419

ABSTRACT

BACKGROUND: We considered the prediction of cancer classes (e.g., subtypes) using patient gene expression profiles that contain both systematic and condition-specific biases when compared with the training reference dataset. Conventional normalization-based approaches cannot guarantee that the gene signatures in the reference and prediction datasets always have the same distribution under all conditions, because the class-specific gene signatures change with the condition. Therefore, a trained classifier may work well under one condition but not under another. METHODS: To address this shortcoming of current normalization approaches, we propose a novel algorithm called CrossLink (CL). CL recognizes that there is no universal, condition-independent normalization mapping of signatures. Instead, it exploits the fact that a signature is unique to its associated class under any condition, and thus employs an unsupervised clustering algorithm to discover this unique signature. RESULTS: We assessed the performance of CL for cross-condition prediction of PAM50 breast cancer subtypes using a simulated dataset modeled after TCGA BRCA tumor samples with a cross-validation scheme, as well as datasets with known and unknown PAM50 classifications. CL achieved a prediction accuracy >73%, the highest among the methods we evaluated. We also applied the algorithm to a set of breast cancer tumors derived from an Arab population to assign a PAM50 classification to each tumor based on its gene expression profile. CONCLUSIONS: We proposed CrossLink, a novel algorithm for cross-condition prediction of cancer classes. In all test datasets, CL showed robust and consistent improvement in prediction performance over other state-of-the-art normalization and classification algorithms.
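
A toy sketch of the cluster-then-match idea (a generic stand-in, not CrossLink itself; dimensions and noise levels are assumptions): cluster the prediction cohort without labels, then assign each cluster to the reference class whose signature it correlates with best. Correlation-based matching is insensitive to the additive condition bias simulated below.

    # Unsupervised clustering + signature matching across a condition shift.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    ref_signatures = rng.normal(size=(5, 50))             # e.g., 5 PAM50-like centroids
    labels_true = rng.integers(0, 5, size=200)
    X = ref_signatures[labels_true] + rng.normal(scale=1.0, size=(200, 50))
    X += 1.0                                               # condition-specific additive bias

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    for k, c in enumerate(km.cluster_centers_):
        # Pearson correlation is invariant to the additive bias above.
        r = [np.corrcoef(c, s)[0, 1] for s in ref_signatures]
        print(f"cluster {k} -> class {int(np.argmax(r))}")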


Subjects
Breast Neoplasms/genetics; Gene Expression Regulation, Neoplastic/genetics; Transcriptome/genetics; Algorithms; Breast Neoplasms/classification; Breast Neoplasms/pathology; Cluster Analysis; Female; Humans
6.
Epilepsy Behav; 58: 48-60, 2016 May.
Article in English | MEDLINE | ID: mdl-27057745

ABSTRACT

This paper presents a novel method for seizure onset detection using fused information extracted from multichannel electroencephalogram (EEG) and single-channel electrocardiogram (ECG). In existing seizure detectors, the analysis of the nonlinear and nonstationary ECG signal is limited to the time domain or frequency domain. In this work, heart rate variability (HRV) extracted from the ECG is analyzed using a Matching Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithm in order to effectively extract meaningful HRV features representative of seizure and nonseizure states. The EEG analysis relies on a common spatial pattern (CSP)-based feature enhancement stage that enables better discrimination between seizure and nonseizure features. The EEG-based detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. Two fusion systems are adopted. In the first system, EEG-based and ECG-based decisions are directly fused to obtain a final decision. The second fusion system adds an override option that allows the EEG-based decision to override the fusion-based decision when the detector observes a string of EEG-based seizure decisions. The proposed detectors exhibit improved performance, with respect to sensitivity and detection latency, compared with state-of-the-art detectors. Experimental results demonstrate that the second detector achieves a sensitivity of 100%, a detection latency of 2.6 s, and a specificity of 99.91% for the MAJ fusion case.
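
A toy sketch of the decision-fusion logic over binary per-window decisions. The AND-style direct fusion and the override run length are assumptions for illustration, not the paper's exact rules.

    # Majority pooling across EEG bands, EEG/ECG fusion, and an EEG override.
    import numpy as np

    def fuse(eeg_band_votes, ecg_votes, override_run=3):
        # eeg_band_votes: (n_windows, n_bands) 0/1; ecg_votes: (n_windows,) 0/1
        eeg = (eeg_band_votes.mean(axis=1) > 0.5).astype(int)   # MAJ pooling across bands
        fused = eeg & ecg_votes                                 # direct EEG/ECG fusion
        run = 0
        for t, v in enumerate(eeg):
            run = run + 1 if v else 0
            if run >= override_run:
                fused[t] = 1            # a string of EEG detections overrides the fusion
        return fused

    band_votes = np.array([[0, 0, 1], [1, 1, 0], [1, 1, 1], [1, 1, 1], [1, 0, 1]])
    print(fuse(band_votes, np.array([0, 0, 0, 1, 0])))          # -> [0 0 0 1 1]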


Subjects
Brain/physiopathology; Electrocardiography/methods; Electroencephalography/methods; Heart Rate/physiology; Seizures/diagnosis; Adult; Aged; Algorithms; Female; Humans; Male; Middle Aged; Seizures/physiopathology; Sensitivity and Specificity; Signal Processing, Computer-Assisted
7.
Epilepsy Behav; 50: 77-87, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26149062

ABSTRACT

This paper presents two novel epileptic seizure onset detectors. The detectors rely on a common spatial pattern (CSP)-based feature enhancement stage that increases the variance between seizure and nonseizure scalp electroencephalography (EEG), enabling better discrimination between seizure and nonseizure features. The first detector adopts a conventional classification stage that feeds the energy features extracted from different subbands to a support vector machine (SVM) for seizure onset detection. The second detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. The proposed detectors exhibit improved performance, with respect to sensitivity and detection latency, compared with state-of-the-art detectors. Experimental results demonstrate that the first detector achieves a sensitivity of 95.2%, a detection latency of 6.43 s, and a false alarm rate of 0.59 per hour. The second detector achieves a sensitivity of 100%, a detection latency of 7.28 s, and a false alarm rate of 1.2 per hour for the MAJORITY fusion method.
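
The CSP feature-enhancement stage is a standard construction and can be sketched as a generalized eigendecomposition of the two class covariance matrices (the epoch dimensions and data below are synthetic stand-ins, not the paper's pipeline).

    # Standard CSP: spatial filters maximizing seizure vs. nonseizure variance.
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(X_seiz, X_non, n_filters=2):
        # X_*: (n_epochs, n_channels, n_samples)
        C1 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X_seiz], axis=0)
        C2 = np.mean([x @ x.T / np.trace(x @ x.T) for x in X_non], axis=0)
        vals, vecs = eigh(C1, C1 + C2)           # generalized eigenproblem
        order = np.argsort(vals)[::-1]
        return vecs[:, order[:n_filters]].T      # top filters favor seizure variance

    rng = np.random.default_rng(5)
    seiz = rng.normal(size=(20, 8, 256)) * np.linspace(1, 3, 8)[None, :, None]
    non = rng.normal(size=(20, 8, 256))
    print("CSP filter matrix shape:", csp_filters(seiz, non).shape)   # (2, 8)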


Subjects
Electroencephalography/methods; Electroencephalography/standards; Seizures/diagnosis; Seizures/physiopathology; Algorithms; Humans; Time Factors
8.
Bioinformatics; 29(19): 2410-8, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-23940252

ABSTRACT

MOTIVATION: Network component analysis (NCA) is an efficient method for reconstructing transcription factor activity (TFA) that makes use of gene expression data and prior information available about transcription factor (TF)-gene regulation. Most contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. RESULTS: We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed-form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation, and/or the amount of outliers in the case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable to that of FastNCA and is hundreds of times faster than NI-NCA. AVAILABILITY: The ROBNCA software is available at http://people.tamu.edu/~amina/ROBNCA
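
A much-simplified alternating sketch of the modeling idea, Y ≈ AS + O with the support of A fixed by prior TF-gene knowledge and a sparse outlier matrix O. This is a generic stand-in, not ROBNCA: the paper's closed-form connectivity update and regularization are not reproduced, and all dimensions are illustrative.

    # NCA-style alternating updates with a soft-thresholded outlier term.
    import numpy as np

    rng = np.random.default_rng(6)
    n_genes, n_tfs, n_samples = 50, 3, 20
    support = rng.random((n_genes, n_tfs)) < 0.3            # known TF-gene topology
    A_true = support * rng.normal(size=(n_genes, n_tfs))
    S_true = rng.normal(size=(n_tfs, n_samples))            # TF activity profiles
    Y = A_true @ S_true
    Y[rng.random(Y.shape) < 0.02] += 8.0                    # sparse outliers

    A, O = support.astype(float), np.zeros_like(Y)
    for _ in range(50):
        S = np.linalg.lstsq(A, Y - O, rcond=None)[0]        # update TF activities
        A = ((Y - O) @ np.linalg.pinv(S)) * support         # update A on its support
        R = Y - A @ S
        O = np.sign(R) * np.maximum(np.abs(R) - 1.0, 0.0)   # soft-threshold outliers
    print("fit residual after removing outlier estimate:",
          round(float(np.linalg.norm(Y - A @ S - O)), 3))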


Subjects
Algorithms; Transcription Factors/analysis; Cell Cycle; Escherichia coli/chemistry; Escherichia coli/genetics; Escherichia coli/metabolism; Gene Expression; Neural Networks, Computer; Nonlinear Dynamics; Reproducibility of Results; Saccharomyces cerevisiae/cytology; Saccharomyces cerevisiae/genetics; Saccharomyces cerevisiae/metabolism; Transcription Factors/genetics; Transcription Factors/metabolism
9.
Comput Biol Med; 173: 108303, 2024 May.
Article in English | MEDLINE | ID: mdl-38547653

ABSTRACT

The rising occurrence and notable public health consequences of skin cancer, especially its most challenging form, melanoma, have created an urgent demand for more advanced approaches to disease management. The integration of modern computer vision methods into clinical procedures offers the potential to enhance skin cancer detection. The UNet model has gained prominence as a valuable tool for this objective, continuously evolving to tackle the difficulties associated with the inherent diversity of dermatological images. These challenges stem from diverse medical origins and are further complicated by variations in lighting, patient characteristics, and hair density. In this work, we present an innovative end-to-end trainable network crafted for skin cancer segmentation. This network comprises an encoder-decoder architecture, a novel feature extraction block, and a densely connected multi-rate atrous convolution block. We evaluated the performance of the proposed lightweight skin cancer segmentation network (LSCS-Net) on three widely used benchmark datasets for skin lesion segmentation: ISIC 2016, ISIC 2017, and ISIC 2018. The generalization capability of LSCS-Net is attested by its excellent performance on breast cancer and thyroid nodule segmentation datasets. The empirical findings confirm that LSCS-Net attains state-of-the-art results, as demonstrated by a significantly elevated Jaccard index.
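
A generic densely connected multi-rate atrous block in PyTorch, sketching the kind of block described; LSCS-Net's actual design (rates, widths, normalization) is not reproduced, and all hyperparameters here are assumptions.

    # Dense multi-rate atrous convolution block (illustrative construction).
    import torch
    import torch.nn as nn

    class DenseAtrousBlock(nn.Module):
        def __init__(self, channels, rates=(1, 2, 4, 8)):
            super().__init__()
            self.convs = nn.ModuleList()
            for i, r in enumerate(rates):
                in_ch = channels * (i + 1)        # dense connectivity: concat all inputs
                self.convs.append(nn.Sequential(
                    nn.Conv2d(in_ch, channels, 3, padding=r, dilation=r, bias=False),
                    nn.BatchNorm2d(channels), nn.ReLU(inplace=True)))
            self.project = nn.Conv2d(channels * (len(rates) + 1), channels, 1)

        def forward(self, x):
            feats = [x]
            for conv in self.convs:
                feats.append(conv(torch.cat(feats, dim=1)))
            return self.project(torch.cat(feats, dim=1))

    block = DenseAtrousBlock(32)
    print(block(torch.randn(1, 32, 64, 64)).shape)    # torch.Size([1, 32, 64, 64])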


Subjects
Breast Neoplasms; Melanoma; Skin Neoplasms; Humans; Female; Skin Neoplasms/diagnostic imaging; Melanoma/diagnostic imaging; Benchmarking; Hair; Image Processing, Computer-Assisted
10.
Artif Intell Med; 150: 102818, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38553158

ABSTRACT

Cardiac arrhythmia is one of the prime causes of death globally. Early diagnosis of heart arrhythmia is crucial to providing timely medical treatment. Heart arrhythmias are diagnosed by analyzing the electrocardiogram (ECG) of patients, but manual ECG analysis is time-consuming and challenging. Hence, effective automated detection of heart arrhythmias is important for producing reliable results. Different deep-learning techniques for detecting heart arrhythmias, such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Transformer, and hybrid CNN-LSTM models, have been proposed. However, these techniques, when used individually, are not sufficient to effectively learn multiple features from the ECG signal. The fusion of CNN and LSTM overcomes the limitations of CNN in the existing studies, as CNN-LSTM hybrids can extract spatiotemporal features. However, LSTMs suffer from long-range dependency issues, due to which certain features may be ignored. Hence, to compensate for the drawbacks of the existing models, this paper proposes a more comprehensive feature fusion technique that merges CNN, LSTM, and Transformer models. The fusion of these models facilitates learning spatial, temporal, and long-range dependency features, helping to capture different attributes of the ECG signal. These features are subsequently passed to a majority voting classifier equipped with three traditional base learners, which are enriched with deep features instead of handcrafted features. Experiments are performed on the MIT-BIH arrhythmia database, and the model's performance is compared with that of state-of-the-art models. Results reveal that the proposed model performs better than the existing models, yielding an accuracy of 99.56%.
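
The final fusion-and-voting stage might look as follows. Feature dimensions, base learners, and hyperparameters are assumptions, and the deep features here are random stand-ins for the CNN, LSTM, and Transformer branch outputs.

    # Concatenate branch features, then hard-vote across three base learners.
    import numpy as np
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    cnn_f, lstm_f, trans_f = (rng.normal(size=(300, d)) for d in (64, 32, 32))
    X = np.hstack([cnn_f, lstm_f, trans_f])       # fused feature vector per beat
    y = rng.integers(0, 5, size=300)              # e.g., 5 arrhythmia classes

    clf = VotingClassifier([("lr", LogisticRegression(max_iter=500)),
                            ("rf", RandomForestClassifier(n_estimators=100)),
                            ("svm", SVC())], voting="hard").fit(X, y)
    print("train accuracy:", clf.score(X, y))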


Subjects
Arrhythmias, Cardiac; Signal Processing, Computer-Assisted; Humans; Arrhythmias, Cardiac/diagnosis; Neural Networks, Computer; Electrocardiography/methods; Machine Learning; Algorithms
11.
Front Cardiovasc Med; 11: 1424585, 2024.
Article in English | MEDLINE | ID: mdl-39027006

ABSTRACT

The electrocardiogram (ECG) is a non-invasive means of capturing the overall electrical activity produced by the contraction and relaxation of the cardiac muscles. It has been established in the literature that the difference between ECG-derived age and chronological age represents a general measure of cardiovascular health: elevated ECG-derived age strongly correlates with cardiovascular conditions (e.g., atherosclerotic cardiovascular disease). However, neural networks for ECG age estimation have yet to be thoroughly evaluated from the perspective of ECG acquisition parameters. Additionally, deep learning systems for ECG analysis encounter challenges in generalizing across the diverse ECG morphologies of different ethnic groups and are susceptible to errors with signals that exhibit random or systematic distortions. To address these challenges, we perform a comprehensive empirical study to determine thresholds for the sampling rate and duration of ECG signals while considering their impact on the computational cost of the neural networks. To tackle the concern of ECG waveform variability across populations, we evaluate the feasibility of utilizing pre-trained and fine-tuned networks to estimate ECG age in different ethnic groups. Additionally, we empirically demonstrate that fine-tuning is an environmentally sustainable way to train neural networks, significantly decreasing the number of ECG instances required (by more than 100×) to attain performance similar to that of networks trained from random weight initialization on a complete dataset. Finally, we systematically evaluate augmentation schemes for ECG signals in the context of age estimation and introduce a random cropping scheme that provides best-in-class performance while using shorter-duration ECG signals. The results also show that random cropping enables the networks to perform well under systematic and random ECG signal corruptions.
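
The random-cropping augmentation admits a one-function sketch; the crop length and sampling rate below are illustrative assumptions, not the paper's chosen values.

    # Randomly crop a fixed-length window from a longer multi-lead ECG record.
    import numpy as np

    def random_crop(ecg, crop_seconds=8, fs=400, rng=None):
        """Return a random fixed-length window from a longer ECG record."""
        rng = rng or np.random.default_rng()
        crop = int(crop_seconds * fs)
        start = rng.integers(0, ecg.shape[-1] - crop + 1)
        return ecg[..., start:start + crop]

    record = np.random.default_rng(8).normal(size=(12, 4000))   # 12-lead, 10 s at 400 Hz
    print(random_crop(record).shape)                            # (12, 3200)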

12.
Front Physiol; 14: 1246746, 2023.
Article in English | MEDLINE | ID: mdl-37791347

ABSTRACT

Cardiovascular diseases are a leading cause of mortality globally, and electrocardiography (ECG) still represents the benchmark approach for identifying cardiac irregularities. Automatic detection of abnormalities from the ECG can aid in the early detection, diagnosis, and prevention of cardiovascular diseases. Deep Learning (DL) architectures have been successfully employed for arrhythmia detection and classification, offering superior performance to traditional shallow Machine Learning (ML) approaches. This survey categorizes and compares the DL architectures used in ECG arrhythmia detection from 2017 to 2023 that have exhibited superior performance. Different DL models, such as Convolutional Neural Networks (CNNs), Multilayer Perceptrons (MLPs), Transformers, and Recurrent Neural Networks (RNNs), are reviewed, and a summary of their effectiveness is provided. The survey offers a comprehensive roadmap to expedite the acclimation process for emerging researchers willing to develop efficient algorithms for detecting ECG anomalies using DL models. Our tailored guidelines bridge the knowledge gap, allowing newcomers to align smoothly with the prevailing research trends in ECG arrhythmia detection. We shed light on potential areas for future research and refinement in model development and optimization, intending to stimulate advancement in ECG arrhythmia detection and classification.

13.
PLoS One; 18(8): e0288228, 2023.
Article in English | MEDLINE | ID: mdl-37535557

ABSTRACT

A novel machine learning framework that is able to consistently detect, localize, and measure the severity of human congenital cleft lip anomalies is introduced. The ultimate goal is to fill an important clinical void: to provide an objective and clinically feasible method of gauging baseline facial deformity and the change obtained through reconstructive surgical intervention. The proposed method first employs the StyleGAN2 generative adversarial network with model adaptation to produce a normalized transformation of 125 faces, and then uses a pixel-wise subtraction approach to assess the difference between all baseline images and their normalized counterparts (a proxy for severity of deformity). The pipeline of the proposed framework consists of the following steps: image preprocessing, face normalization, color transformation, heat-map generation, morphological erosion, and abnormality scoring. Heatmaps that finely discern anatomic anomalies visually corroborate the generated scores. The proposed framework is validated through computer simulations as well as by comparison of machine-generated versus human ratings of facial images. The anomaly scores yielded by the proposed computer model correlate closely with human ratings, with a calculated Pearson's r score of 0.89. The proposed pixel-wise measurement technique is shown to more closely mirror human ratings of cleft faces than two other existing, state-of-the-art image quality metrics (Learned Perceptual Image Patch Similarity and Structural Similarity Index). The proposed model may represent a new standard for objective, automated, and real-time clinical measurement of faces affected by congenital cleft deformity.
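
The post-normalization scoring stages lend themselves to a compact sketch. The erosion kernel size and the scalar scoring rule are assumptions, and the StyleGAN2 normalization step itself is out of scope here; the images are synthetic stand-ins.

    # Pixel-wise subtraction, heat-map, morphological erosion, abnormality score.
    import numpy as np
    from scipy.ndimage import grey_erosion

    rng = np.random.default_rng(9)
    baseline = rng.random((128, 128))
    normalized = baseline.copy()
    normalized[40:60, 50:70] += 0.5                  # simulated localized deformity

    heatmap = np.abs(baseline - normalized)          # pixel-wise difference
    heatmap = grey_erosion(heatmap, size=(3, 3))     # suppress isolated noisy pixels
    score = heatmap.mean()                           # scalar abnormality-score proxy
    print(f"abnormality score: {score:.4f}")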


Subjects
Cleft Lip; Cleft Palate; Musculoskeletal Diseases; Humans; Cleft Lip/surgery; Cleft Palate/diagnostic imaging; Cleft Palate/surgery; Computer Simulation; Machine Learning; Image Processing, Computer-Assisted/methods
14.
Front Oncol; 13: 1282536, 2023.
Article in English | MEDLINE | ID: mdl-38125949

ABSTRACT

Elastography ultrasound provides elasticity information about tissues, which is crucial for understanding density and texture and thus for diagnosing medical conditions such as fibrosis and cancer. In the current medical imaging scenario, elastograms for B-mode ultrasound are restricted to well-equipped hospitals, making the modality unavailable for pocket ultrasound. To highlight recent progress in elastogram synthesis, this article critically reviews generative adversarial network (GAN) methodology for elastogram generation from B-mode ultrasound images. Along with a brief overview of cutting-edge medical image synthesis, the article highlights the contribution of the GAN framework in light of its impact and thoroughly analyzes the results to validate whether the existing challenges have been effectively addressed. Specifically, this article highlights that GANs can successfully generate accurate elastograms for deep-seated breast tumors (without artifacts) and improve diagnostic effectiveness for pocket ultrasound. Furthermore, the results of the GAN framework are thoroughly analyzed with respect to quantitative metrics, visual evaluations, and cancer diagnostic accuracy. Finally, essential unaddressed challenges that lie at the intersection of elastography and GANs are presented, and a few future directions for elastogram synthesis research are shared.

15.
Artif Intell Med; 146: 102690, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38042607

ABSTRACT

Twelve-lead electrocardiogram signals capture unique fingerprints of the body's biological processes and the electrical activity of the heart muscles. Machine learning and deep learning-based models can learn the patterns embedded in the electrocardiogram to estimate complex metrics, such as age and gender, that depend on multiple aspects of human physiology. ECG-estimated age relative to chronological age reflects the overall well-being of the cardiovascular system, with significant positive deviations indicating an aged cardiovascular system and a higher likelihood of cardiovascular mortality. Several conventional, machine learning, and deep learning-based methods have been proposed to estimate age from electronic health records, health surveys, and ECG data. This manuscript comprehensively reviews the methodologies proposed for ECG-based age and gender estimation over the last decade. Specifically, the review highlights that elevated ECG age is associated with atherosclerotic cardiovascular disease, abnormal peripheral endothelial dysfunction, and high mortality, among many other cardiovascular disorders. Furthermore, the survey presents overarching observations and insights across methods for age and gender estimation. This paper also presents several essential methodological improvements and clinical applications of ECG-estimated age and gender to encourage further improvement of the state-of-the-art methodologies.


Subjects
Electrocardiography; Signal Processing, Computer-Assisted; Humans; Aged; Electrocardiography/methods; Machine Learning; Heart Rate/physiology; Probability
16.
BMC Genomics; 13 Suppl 6: S13, 2012.
Article in English | MEDLINE | ID: mdl-23134756

ABSTRACT

BACKGROUND: Despite an initial response to adjuvant chemotherapy, ovarian cancer patients treated with the combination of paclitaxel and carboplatin frequently suffer recurrence after a few cycles of treatment, and the underlying mechanisms causing this chemoresistance remain unclear. Recently, The Cancer Genome Atlas (TCGA) research network completed an ovarian cancer study and released the dataset to the public. The TCGA dataset possesses a large sample size, comprehensive molecular profiles, and clinical outcome information; however, because of the unknown molecular subtypes in ovarian cancer and the great diversity of adjuvant treatments that TCGA patients underwent, studying chemotherapeutic response using the TCGA data is difficult. Additionally, factors such as sample batches, patient ages, and tumor stages further confound or suppress the identification of relevant genes, and thus of the biological functions and disease mechanisms. RESULTS: To address these issues, we propose an analysis procedure designed to reduce the suppression effect by focusing on a specific chemotherapeutic treatment, and to remove confounding effects such as batch effects, patient age, and tumor stage. The proposed procedure starts with a batch effect adjustment, followed by a rigorous sample selection process. Then, the gene expression, copy number, and methylation profiles from the TCGA ovarian cancer dataset are analyzed using a semi-supervised clustering method combined with a novel scoring function. As a result, two molecular classifications, one with poor copy number profiles and one with poor methylation profiles, enriched with unfavorable scores, are identified. Compared with the samples enriched with favorable scores, these two classifications exhibit poor progression-free survival (PFS) and might be associated with poor response specifically to the combination of paclitaxel and carboplatin. Significant genes and biological processes are subsequently detected using classical statistical approaches and enrichment analysis. CONCLUSIONS: The proposed procedure for reducing confounding and suppression effects and the semi-supervised clustering method are essential steps in identifying genes associated with chemotherapeutic response.
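
The batch-adjustment step can be illustrated with a deliberately simple scheme: median-centering expression values within each batch. The study's actual adjustment was likely more involved, and all names and values below are hypothetical.

    # Remove a simulated batch shift by per-batch median-centering.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(10)
    expr = pd.DataFrame(rng.normal(size=(12, 4)), columns=list("ABCD"))
    batch = pd.Series(["b1"] * 6 + ["b2"] * 6)
    expr.loc[batch == "b2"] += 2.0                   # simulated batch effect

    adjusted = expr.groupby(batch).transform(lambda g: g - g.median())
    print(adjusted.groupby(batch).median().round(2))  # per-batch medians now ~0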


Subjects
Databases, Factual; Ovarian Neoplasms/metabolism; Adult; Aged; Aged, 80 and over; Antineoplastic Agents/therapeutic use; Carboplatin/therapeutic use; Cluster Analysis; DNA Copy Number Variations; DNA Methylation; Disease-Free Survival; Drug Therapy, Combination; Female; Gene Expression Regulation, Neoplastic; Humans; Middle Aged; Neoplasm Staging; Ovarian Neoplasms/drug therapy; Ovarian Neoplasms/genetics; Paclitaxel/therapeutic use
17.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1448-1451, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086585

ABSTRACT

The overriding clinical and academic challenge that inspires this work is the lack of a universally accepted, objective, and feasible method of measuring facial deformity; and, by extension, the lack of a reliable means of assessing the benefits and shortcomings of craniofacial surgical interventions. We propose a machine learning-based method to create a scale of facial deformity by producing numerical scores that reflect the level of deformity. An object detector that is constructed using a cascade function of Haar features has been trained with a rich dataset of normal faces in addition to a collection of images that does not contain faces. After that, the confidence score of the face detector was used as a gauge of facial abnormality. The scores were compared with a benchmark that is based on human appraisals obtained using a survey of a range of facial deformities. Interestingly, the overall Pearson's correlation coefficient of the machine scores with respect to the average human score exceeded 0.96.
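
A hedged sketch of the confidence-score idea using OpenCV's stock Haar cascade rather than the authors' custom-trained one; the input file, parameters, and score interpretation are assumptions.

    # Use Haar-cascade detection confidence as a proxy for facial normality.
    import cv2

    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
    if img is None:
        raise SystemExit("face.jpg not found")
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces, reject_levels, level_weights = cascade.detectMultiScale3(
        img, scaleFactor=1.1, minNeighbors=5, outputRejectLevels=True)
    if len(level_weights):
        confidence = float(max(level_weights))
        print(f"detector confidence (higher = more 'normal'): {confidence:.2f}")
    else:
        print("no face detected")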

18.
Plast Reconstr Surg Glob Open; 10(1): e4034, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35070595

ABSTRACT

A sensitive, objective, and universally accepted method of measuring facial deformity does not currently exist. Two distinct machine learning methods are described here that produce numerical scores reflecting the level of deformity of a wide variety of facial conditions. METHODS: The first proposed technique utilizes an object detector based on a cascade function of Haar features. The model was trained using a dataset of 200,000 normal faces, as well as a collection of images devoid of faces. With the model trained to detect normal faces, the face detector confidence score was shown to function as a reliable gauge of facial abnormality. The second technique developed is based on a deep learning architecture of a convolutional autoencoder trained with the same rich dataset of normal faces. Because the convolutional autoencoder regenerates images disposed toward their training dataset (ie, normal faces), we utilized its reconstruction error as an indicator of facial abnormality. Scores generated by both methods were compared with human ratings obtained using a survey of 80 subjects evaluating 60 images depicting a range of facial deformities [rating from 1 (abnormal) to 7 (normal)]. RESULTS: The machine scores were highly correlated to the average human score, with overall Pearson's correlation coefficient exceeding 0.96 (P < 0.00001). Both methods were computationally efficient, reporting results within 3 seconds. CONCLUSIONS: These models show promise for adaptation into a clinically accessible handheld tool. It is anticipated that ongoing development of this technology will facilitate multicenter collaboration and comparison of outcomes between conditions, techniques, operators, and institutions.
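
The second method's scoring idea can be sketched with a minimal convolutional autoencoder in PyTorch; the published architecture and training data are not reproduced, and the model below is untrained. After training on normal faces only, the per-image reconstruction error would serve as the abnormality indicator.

    # Reconstruction error of a tiny convolutional autoencoder as an anomaly score.
    import torch
    import torch.nn as nn

    class TinyConvAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1))

        def forward(self, x):
            return self.dec(self.enc(x))

    model = TinyConvAE()                       # assume trained on normal faces only
    face = torch.randn(1, 1, 64, 64)           # stand-in input image
    error = nn.functional.mse_loss(model(face), face)   # abnormality-score proxy
    print(f"reconstruction error: {error.item():.4f}")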

19.
Sci Rep; 10(1): 21375, 2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33288815

ABSTRACT

What is a normal face? A fundamental task for the facial reconstructive surgeon is to answer that question as it pertains to any given individual. Accordingly, it would be important to be able to place the facial appearance of a patient with congenital or acquired deformity numerically along their own continuum of normality, and to measure any surgical changes against such a personalized benchmark. This has not previously been possible. We have solved this problem by designing a computerized model that produces realistic, normalized versions of any given facial image, and objectively measures the perceptual distance between the raw and normalized facial image pair. The model is able to faithfully predict human scoring of facial normality. We believe this work represents a paradigm shift in the assessment of the human face, holding great promise for development as an objective tool for surgical planning, patient education, and as a means for clinical outcome measurement.

20.
IEEE/ACM Trans Comput Biol Bioinform; 17(3): 1056-1067, 2020.
Article in English | MEDLINE | ID: mdl-30387737

ABSTRACT

The study of recurrent copy number variations (CNVs) plays an important role in understanding the onset and evolution of complex diseases such as cancer. Array-based comparative genomic hybridization (aCGH) is a widely used microarray-based technology for identifying CNVs. However, due to high noise levels and inter-sample variability, detecting recurrent CNVs from aCGH data remains challenging. This paper proposes a novel method for identifying recurrent CNVs. In the proposed method, the noisy aCGH data are modeled as the superposition of three matrices: a full-rank matrix of weighted piece-wise generating signals accounting for the clean aCGH data, a Gaussian noise matrix modeling the inherent experimentation errors and other sources of error, and a sparse matrix capturing the sparse inter-sample (sample-specific) variations. We demonstrate the ability of our method to accurately separate recurrent CNVs from sample-specific variations and noise in both simulated (artificial) data and real data. The proposed method produced more accurate results than current state-of-the-art methods for recurrent CNV detection and exhibited robustness to noise and sample-specific variations.
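
The three-matrix signal model can be written out directly as a simulation; the dimensions, amplitudes, and detection threshold below are illustrative assumptions, and the decomposition algorithm itself is not reproduced.

    # Generate aCGH-like data: weighted recurrent profile + noise + sparse term.
    import numpy as np

    rng = np.random.default_rng(11)
    n_samples, n_probes = 20, 200
    recurrent = np.zeros(n_probes)
    recurrent[80:120] = 1.0                                    # shared (recurrent) CNV
    weights = rng.uniform(0.5, 1.5, size=(n_samples, 1))       # per-sample amplitude
    clean = weights * recurrent                                # weighted generating signals
    noise = rng.normal(scale=0.3, size=(n_samples, n_probes))  # measurement noise
    sparse = np.zeros((n_samples, n_probes))
    sparse[4, 150:160] = 2.0                                   # sample-specific aberration
    Y = clean + noise + sparse

    hits = np.where(np.abs(Y.mean(axis=0)) > 0.4)[0]
    print("recurrent region spans probes", hits[0], "to", hits[-1])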


Subjects
Computational Biology/methods; DNA Copy Number Variations/genetics; Comparative Genomic Hybridization; Databases, Genetic; Humans; Models, Genetic