Results 1 - 20 of 50
1.
Graefes Arch Clin Exp Ophthalmol; 262(7): 2227-2235, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38334809

ABSTRACT

PURPOSE: Tracking functional changes in visual fields (VFs) through standard automated perimetry remains a clinical standard for glaucoma diagnosis. This study aims to develop and evaluate a deep learning (DL) model to predict regional VF progression, which has not been explored in prior studies. METHODS: The study included 2430 eyes of 1283 patients with four or more consecutive VF examinations from the baseline. A multi-label transformer-based network (MTN) using longitudinal VF data was developed to predict progression in six VF regions mapped to the optic disc. Progression was defined using the mean deviation (MD) slope and calculated for all six VF regions, referred to as clusters. Separate MTN models, trained for focal progression detection and forecasting on various numbers of VFs as model input, were tested on a held-out test set. RESULTS: The MTNs overall demonstrated excellent macro-average AUCs above 0.884 in detecting focal VF progression given five or more VFs. With a minimum of 6 VFs, the model demonstrated superior and more stable overall and per-cluster performance, compared to 5 VFs. The MTN given 6 VFs achieved a macro-average AUC of 0.848 for forecasting progression across 8 VF tests. The MTN also achieved excellent performance (AUCs ≥ 0.86, 1.0 sensitivity, and specificity ≥ 0.70) in four out of six clusters for the eyes already with severe VF loss (baseline MD ≤ - 12 dB). CONCLUSION: The high prediction accuracy suggested that multi-label DL networks trained with longitudinal VF results may assist in identifying and forecasting progression in VF regions.
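As a rough illustration of the multi-label idea described above, the sketch below (not the authors' MTN; the layer sizes, number of VF test points, and pooling step are assumptions) encodes a sequence of VF exams with a transformer encoder and emits one independent progression probability per VF cluster.

```python
import torch
import torch.nn as nn

class MultiLabelVFTransformer(nn.Module):
    """Toy multi-label classifier: a series of VF exams in, one progression
    probability per VF cluster out (six clusters mapped to the optic disc)."""
    def __init__(self, n_points=52, n_clusters=6, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_points, d_model)              # each VF exam becomes one token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_clusters)              # one logit per cluster

    def forward(self, vf_series):                               # (batch, n_visits, n_points)
        pooled = self.encoder(self.embed(vf_series)).mean(dim=1)
        return torch.sigmoid(self.head(pooled))                 # independent per-cluster probabilities

model = MultiLabelVFTransformer()
series = torch.randn(8, 6, 52)                                  # 8 eyes, 6 visits, 52 test points
labels = torch.randint(0, 2, (8, 6)).float()                    # progression flag per cluster
loss = nn.BCELoss()(model(series), labels)
```

The per-cluster probabilities can then be scored by AUC per cluster and macro-averaged, mirroring how the results above are reported.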


Subjects
Deep Learning, Disease Progression, Visual Field Tests, Visual Fields, Humans, Visual Fields/physiology, Visual Field Tests/methods, Female, Male, Middle Aged, Optic Disk, Glaucoma/physiopathology, Glaucoma/diagnosis, Aged, Follow-Up Studies, ROC Curve, Retrospective Studies
2.
Telemed J E Health; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934135

ABSTRACT

Background: Blurry images in teledermatology consultations increase diagnostic difficulty for both deep learning models and physicians. We aimed to determine the extent to which diagnostic accuracy is restored after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset covering 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos from a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five different deblurring models, including models for motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as the sensitivity and precision of correct model prediction of the skin disease category. Sharpness was rated by board-certified dermatologists on a 4-point scale, with 4 indicating the highest image clarity. Results: The sensitivity of the diagnostic models dropped by 0.15 and 0.22 on slightly and strongly blurred images, respectively, and the deblurring models restored 0.14 and 0.17 for each group. Sharpness ratings by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed that the focus of the diagnostic models was compromised by blurriness but was restored after deblurring. Conclusions: Deep learning deblurring models can restore the diagnostic accuracy of diagnostic models on blurry images and increase the image sharpness perceived by dermatologists. Such models can be incorporated into teledermatology to aid the diagnosis of blurry images.

3.
Gastroenterology; 154(3): 568-575, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29042219

ABSTRACT

BACKGROUND & AIMS: Narrow-band imaging is an image-enhanced form of endoscopy used to observed microstructures and capillaries of the mucosal epithelium which allows for real-time prediction of histologic features of colorectal polyps. However, narrow-band imaging expertise is required to differentiate hyperplastic from neoplastic polyps with high levels of accuracy. We developed and tested a system of computer-aided diagnosis with a deep neural network (DNN-CAD) to analyze narrow-band images of diminutive colorectal polyps. METHODS: We collected 1476 images of neoplastic polyps and 681 images of hyperplastic polyps, obtained from the picture archiving and communications system database in a tertiary hospital in Taiwan. Histologic findings from the polyps were also collected and used as the reference standard. The images and data were used to train the DNN. A test set of images (96 hyperplastic and 188 neoplastic polyps, smaller than 5 mm), obtained from patients who underwent colonoscopies from March 2017 through August 2017, was then used to test the diagnostic ability of the DNN-CAD vs endoscopists (2 expert and 4 novice), who were asked to classify the images of the test set as neoplastic or hyperplastic. Their classifications were compared with findings from histologic analysis. The primary outcome measures were diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic time. The accuracy, sensitivity, specificity, PPV, NPV, and diagnostic time were compared among DNN-CAD, the novice endoscopists, and the expert endoscopists. The study was designed to detect a difference of 10% in accuracy by a 2-sided McNemar test. RESULTS: In the test set, the DNN-CAD identified neoplastic or hyperplastic polyps with 96.3% sensitivity, 78.1% specificity, a PPV of 89.6%, and a NPV of 91.5%. Fewer than half of the novice endoscopists classified polyps with a NPV of 90% (their NPVs ranged from 73.9% to 84.0%). DNN-CAD classified polyps as neoplastic or hyperplastic in 0.45 ± 0.07 seconds-shorter than the time required by experts (1.54 ± 1.30 seconds) and nonexperts (1.77 ± 1.37 seconds) (both P < .001). DNN-CAD classified polyps with perfect intra-observer agreement (kappa score of 1). There was a low level of intra-observer and inter-observer agreement in classification among endoscopists. CONCLUSIONS: We developed a system called DNN-CAD to identify neoplastic or hyperplastic colorectal polyps less than 5 mm. The system classified polyps with a PPV of 89.6%, and a NPV of 91.5%, and in a shorter time than endoscopists. This deep-learning model has potential for not only endoscopic image recognition but for other forms of medical image analysis, including sonography, computed tomography, and magnetic resonance images.


Subjects
Colonic Polyps/pathology, Colonoscopy/methods, Colorectal Neoplasms/pathology, Decision Support Techniques, Computer-Assisted Diagnosis, Computer-Assisted Image Interpretation, Narrow Band Imaging, Automation, Colonic Polyps/classification, Colorectal Neoplasms/classification, Factual Databases, Humans, Hyperplasia, Neural Networks (Computer), Observer Variation, Predictive Value of Tests, Reproducibility of Results, Retrospective Studies, Taiwan, Tumor Burden
4.
J Pathol; 243(2): 176-192, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28696069

ABSTRACT

This study investigated hepatitis B virus (HBV) single-nucleotide variants (SNVs) and deletion mutations linked with hepatocellular carcinoma (HCC). Ninety-three HCC patients and 108 non-HCC patients were enrolled for HBV genome-wide next-generation sequencing (NGS) analysis. A systematic literature review and a meta-analysis were performed to validate NGS-defined HCC-associated SNVs and deletions. The experimental results identified 60 NGS-defined HCC-associated SNVs, including 41 novel SNVs, and their pathogenic frequencies. Each SNV was specific for either genotype B (n = 24) or genotype C (n = 34), except for nt53C, which was present in both genotypes. The pathogenic frequencies of these HCC-associated SNVs showed a distinct U-shaped distribution pattern. According to the meta-analysis and literature review, 167 HBV variants from 109 publications were categorized into four levels (A-D) of supporting evidence that they are associated with HCC. The proportion of NGS-defined HCC-associated SNVs among these HBV variants declined significantly from 75% of 12 HCC-associated variants by meta-analysis (Level A) to 0% of 10 HCC-unassociated variants by meta-analysis (Level D) (P < 0.0001). PreS deletions were significantly associated with HCC, in terms of deletion index, for both genotypes B (P = 0.030) and C (P = 0.049). For genotype C, preS deletions involving a specific fragment (nt2977-3013) were significantly associated with HCC (HCC versus non-HCC, 6/34 versus 0/32, P = 0.025). Meta-analysis of preS deletions showed significant association with HCC (summary odds ratio 3.0; 95% confidence interval 2.3-3.9). Transfection of Huh7 cells showed that all of the five novel NGS-defined HCC-associated SNVs in the small surface region influenced hepatocarcinogenesis pathways, including endoplasmic reticulum-stress and DNA repair systems, as shown by microarray, real-time polymerase chain reaction and western blot analysis. Their carcinogenic mechanisms are worthy of further research. Copyright © 2017 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
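The summary odds ratio quoted for preS deletions comes from pooling per-study 2x2 tables; a minimal fixed-effect (inverse-variance) pooling sketch is shown below with made-up counts, since the individual study tables are not reproduced here.

```python
import numpy as np

def pooled_odds_ratio(tables):
    """Fixed-effect (inverse-variance) pooled odds ratio with a 95% CI.
    Each table is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls. The counts below are illustrative only."""
    log_ors, weights = [], []
    for a, b, c, d in tables:
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))    # Haldane continuity correction
        log_ors.append(np.log((a * d) / (b * c)))
        weights.append(1.0 / (1/a + 1/b + 1/c + 1/d))   # inverse-variance weight
    pooled = np.average(log_ors, weights=weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return np.exp(pooled), np.exp([pooled - 1.96 * se, pooled + 1.96 * se])

print(pooled_odds_ratio([(30, 20, 15, 35), (25, 18, 10, 30), (40, 22, 20, 38)]))
```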


Subjects
Hepatocellular Carcinoma/genetics, Gene Deletion, Viral Genome/genetics, Hepatitis B virus/genetics, Liver Neoplasms/genetics, Single Nucleotide Polymorphism/genetics, DNA Repair/genetics, Endoplasmic Reticulum Stress/genetics, Chronic Hepatitis B/genetics, Humans, Neoplasm Proteins/genetics, Retrospective Studies
5.
Methods; 129: 24-32, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28802713

ABSTRACT

Many studies have suggested that deletions in the hepatitis B virus (HBV) genome are associated with the development of progressive liver disease, ultimately resulting in hepatocellular carcinoma (HCC). Among the methods for detecting deletions from next-generation sequencing (NGS) data, few consider the characteristics of viruses, such as high evolution rates and high divergence among different HBV genomes. Sequencing highly divergent HBV genome sequences with NGS technology outputs millions of reads, so detecting exact deletion breakpoints from these large and complex data incurs a very high computational cost. We propose a novel analytical method named VirDelect (Virus Deletion Detect), which uses split-read alignment to detect exact breakpoints and a diversity variable to account for high divergence in single-end read data, so that the computational cost can be reduced without losing accuracy. We used four simulated read datasets and two real paired-end read datasets of HBV genome sequences to verify the accuracy of VirDelect using score functions. The experimental results show that VirDelect outperforms the state-of-the-art method Pindel in terms of accuracy score on all simulated datasets, and VirDelect made only two base errors even on the real datasets. VirDelect also delivers high accuracy when analyzing single-end read data as well as paired-end data. VirDelect can serve as an effective and efficient bioinformatics tool for physiologists, with high accuracy and efficient performance, and it is applicable to further analyses of genomes sharing HBV's characteristics of genome length and high divergence. The software program of VirDelect can be downloaded at https://sourceforge.net/projects/virdelect/.
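The split-read idea behind this kind of breakpoint detection can be sketched in a few lines: anchor a read prefix and suffix on the reference and treat a gap between the anchors as a candidate deletion with exact breakpoints. The toy function below uses exact string matching only; the real tool additionally handles HBV's high divergence via its diversity variable, and the sequences shown are invented.

```python
def find_deletion_breakpoints(read, reference, min_anchor=15):
    """Split-read sketch: if a read's prefix and suffix both map to the
    reference with a gap in between, report the gap as a deletion."""
    for split in range(min_anchor, len(read) - min_anchor):
        prefix, suffix = read[:split], read[split:]
        left = reference.find(prefix)
        if left == -1:
            continue
        right = reference.find(suffix, left + len(prefix))
        if right > left + len(prefix):                  # gap on the reference
            return left + len(prefix), right            # 0-based deletion start/end
    return None

ref  = "ACGTACGTAAACCCGGGTTTACGTACGTCCCTTTGGGAAACCCTTTGGG"
read = "ACGTACGTAAACCCGGGTTTACG" + "TTTGGGAAACCCTTTGGG"   # read spanning an 8 bp deletion
print(find_deletion_breakpoints(read, ref))              # -> (23, 31)
```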


Subjects
Hepatocellular Carcinoma/genetics, Gene Deletion, Hepatitis B virus/genetics, Hepatitis B/genetics, Hepatocellular Carcinoma/virology, Genetic Variation, Viral Genome/genetics, Hepatitis B/virology, Hepatitis B virus/pathogenicity, High-Throughput Nucleotide Sequencing, Humans
6.
Methods; 111: 56-63, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-27480381

ABSTRACT

Hepatitis B virus (HBV) infection is strongly associated with an increased risk of liver diseases such as cirrhosis and hepatocellular carcinoma (HCC). Many lines of evidence suggest that deletions occurring in HBV genomic DNA are highly associated with HBV activity via the interplay between aberrant viral protein release and the human immune system. Finding deletions across HBV whole-genome sequences is thus an important issue, although mining such large and complex biological data poses underlying challenges. Although some next-generation sequencing (NGS) tools have recently been designed for identifying structural variations such as insertions or deletions, their validity has generally been established on human sequences, and such designs may not be suitable for viruses because they are different species. We propose a graphics processing unit (GPU)-based data mining method called DeF-GPU to efficiently and precisely identify HBV deletions from large NGS data, which generally contain millions of reads. To fit the single-instruction multiple-data paradigm, sequencing reads are treated as the multiple data and the deletion-finding procedure as the single instruction. We use the Compute Unified Device Architecture (CUDA) to parallelize the procedure, and further validate DeF-GPU on 5 synthetic and 1 real datasets. Our results suggest that DeF-GPU outperforms the existing commonly used method Pindel and is able to exactly identify the deletions of our ground truth in a few seconds. The source code and other related materials are available at https://sourceforge.net/projects/defgpu/.


Subjects
Computational Biology/methods, Viral Genome/genetics, Hepatitis B virus/genetics, Hepatitis B/genetics, Hepatocellular Carcinoma/genetics, Hepatocellular Carcinoma/virology, Viral DNA/genetics, Hepatitis B/virology, Hepatitis B virus/pathogenicity, High-Throughput Nucleotide Sequencing/methods, Humans, Liver Neoplasms/genetics, Liver Neoplasms/virology, Sequence Deletion/genetics, Software
7.
Nucleic Acids Res; 43(3): 1593-608, 2015 Feb 18.
Article in English | MEDLINE | ID: mdl-25609695

ABSTRACT

Overexpression of Oct4, a stemness gene encoding a transcription factor, has been reported in several cancers. However, the mechanism by which Oct4 directs a transcriptional program that leads to somatic cancer progression remains unclear. In this study, we provide mechanistic insight into the Oct4-driven transcriptional network promoting drug resistance and metastasis in lung cancer cell, animal, and clinical studies. Through an integrative approach combining our Oct4 chromatin-immunoprecipitation sequencing and ENCODE datasets, we identified the genome-wide binding regions of Oct4 in lung cancer at the promoters and enhancers of numerous genes involved in critical pathways that promote tumorigenesis. Notably, PTEN and TNC were previously undefined targets of Oct4. In addition, novel Oct4-binding motifs were found to overlap with DNA elements for the Sp1 transcription factor. We provide evidence that Oct4 suppressed PTEN in an Sp1-dependent manner by recruitment of HDAC1/2, leading to activation of AKT signaling and drug resistance. In contrast, Oct4 transactivated TNC independently of Sp1, resulting in cancer metastasis. Clinically, lung cancer patients with an Oct4-high, PTEN-low, and TNC-high expression profile had significantly poorer disease-free survival. Our study reveals a critical Oct4-driven transcriptional program that promotes lung cancer progression, illustrating the therapeutic potential of targeting Oct4 transcriptionally regulated genes.


Subjects
Antineoplastic Drug Resistance/genetics, Lung Neoplasms/genetics, Neoplasm Metastasis/genetics, Octamer Transcription Factor-3/genetics, PTEN Phosphohydrolase/genetics, Tenascin/genetics, Tumor Cell Line, Chromatin Immunoprecipitation, Humans, Lung Neoplasms/drug therapy, Lung Neoplasms/pathology, Polymerase Chain Reaction, Proto-Oncogene Proteins c-akt/metabolism, Signal Transduction, Genetic Transcription
8.
Bioinformatics; 30(21): 3054-61, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-25015989

ABSTRACT

MOTIVATION: The rapid progression of esophageal squamous cell carcinoma (ESCC) causes a high mortality rate because of the propensity for metastasis driven by genetic and epigenetic alterations. The identification of prognostic biomarkers would help prevent or control metastatic progression. Expression analyses have been used to find such markers but do not always validate in separate cohorts. Epigenetic marks, such as DNA methylation, are a potential source of more reliable and stable biomarkers. Importantly, the integration of both expression and epigenetic alterations is more likely to identify relevant biomarkers. RESULTS: We present a new analysis framework, using an ESCC progression-associated gene regulatory network (GRNescc), to identify differentially methylated CpG sites prognostic of ESCC progression. From the CpG loci differentially methylated in 50 tumor-normal pairs, we selected 44 CpG loci most highly associated with survival and located in the promoters of genes more likely to belong to the GRNescc. Using an independent ESCC cohort, we confirmed that 8 of 10 CpG loci in the promoters of GRNescc genes significantly correlated with patient survival. In contrast, 0 of 10 CpG loci in the promoters of genes outside the GRNescc correlated with patient survival. We further characterized the GRNescc network topology and observed that the genes with methylated CpG loci associated with survival deviated from the center of mass and were less likely to be hubs in the GRNescc. We postulate that our analysis framework improves the identification of bona fide prognostic biomarkers from DNA methylation studies, especially with partial genome coverage.
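A common way to test whether methylation at a single CpG locus is associated with survival, as is done per locus in this kind of framework, is to split patients by methylation level and compare survival curves. The sketch below uses the lifelines log-rank test on toy data; the column names and values are illustrative, not the study's data.

```python
import pandas as pd
from lifelines.statistics import logrank_test

# Toy per-patient table: methylation beta value at one candidate CpG locus,
# follow-up time in months, and an event indicator (1 = death).
df = pd.DataFrame({
    "cpg_beta": [0.10, 0.15, 0.80, 0.75, 0.20, 0.85, 0.30, 0.60],
    "time_mo":  [60,   55,   12,   30,   48,   8,    20,   40],
    "event":    [0,    1,    1,    0,    0,    1,    1,    0],
})

high = df[df.cpg_beta >= df.cpg_beta.median()]
low = df[df.cpg_beta < df.cpg_beta.median()]
result = logrank_test(high.time_mo, low.time_mo,
                      event_observed_A=high.event, event_observed_B=low.event)
print(result.p_value)   # small p-values flag survival-associated loci
```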


Subjects
Squamous Cell Carcinoma/genetics, DNA Methylation, Genetic Epigenesis, Esophageal Neoplasms/genetics, Gene Regulatory Networks, Tumor Biomarkers/metabolism, Squamous Cell Carcinoma/mortality, CpG Islands, Disease Progression, Esophageal Neoplasms/mortality, Esophageal Squamous Cell Carcinoma, Humans, Genetic Promoter Regions
9.
BMC Bioinformatics; 15: 173, 2014 Jun 08.
Article in English | MEDLINE | ID: mdl-24909518

ABSTRACT

BACKGROUND: Human disease often arises as a consequence of alterations in a set of associated genes rather than alterations to a set of unassociated individual genes. Most previous microarray-based meta-analyses identified disease-associated genes or biomarkers independent of genetic interactions. Therefore, in this study, we present the first meta-analysis method capable of taking gene combination effects into account to efficiently identify associated biomarkers (ABs) across different microarray platforms. RESULTS: We propose a new meta-analysis approach called MiningABs to mine ABs across different array-based datasets. The similarity between paired probe sequences is quantified as a bridge to connect these datasets together. The ABs can be subsequently identified from an "improved" common logit model (c-LM) by combining several sibling-like LMs in a heuristic genetic algorithm selection process. Our approach is evaluated with two sets of gene expression datasets: i) 4 esophageal squamous cell carcinoma and ii) 3 hepatocellular carcinoma datasets. Based on an unbiased reciprocal test, we demonstrate that each gene in a group of ABs is required to maintain high cancer sample classification accuracy, and we observe that ABs are not limited to genes common to all platforms. Investigating the ABs using Gene Ontology (GO) enrichment, literature survey, and network analyses indicated that our ABs are not only strongly related to cancer development but also highly connected in a diverse network of biological interactions. CONCLUSIONS: The proposed meta-analysis method called MiningABs is able to efficiently identify ABs from different independently performed array-based datasets, and we show its validity in cancer biology via GO enrichment, literature survey and network analyses. We postulate that the ABs may facilitate novel target and drug discovery, leading to improved clinical treatment. Java source code, tutorial, example and related materials are available at "http://sourceforge.net/projects/miningabs/".
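The core loop—scoring candidate gene subsets with a classifier and evolving them with a genetic algorithm—can be sketched as below. This is a loose stand-in for the selection step only (the actual MiningABs builds a common logit model across platforms via probe-sequence similarity); the dataset, subset size, and GA settings are arbitrary.

```python
import random
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))                                # 60 samples x 40 genes (synthetic)
y = (X[:, 3] - X[:, 17] + rng.normal(scale=0.5, size=60) > 0).astype(int)

def fitness(genes):
    """Cross-validated accuracy of a logistic model restricted to one gene subset."""
    return cross_val_score(LogisticRegression(max_iter=1000), X[:, list(genes)], y, cv=3).mean()

population = [tuple(random.sample(range(40), 4)) for _ in range(20)]
for _ in range(15):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                 # keep the fittest subsets
    children = []
    for p in parents:                                        # mutate: swap one gene in/out
        child = list(p)
        child[random.randrange(4)] = random.randrange(40)
        children.append(tuple(child) if len(set(child)) == 4 else p)
    population = parents + children + [tuple(random.sample(range(40), 4)) for _ in range(10)]

print(max(population, key=fitness))                          # best candidate biomarker subset
```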


Subjects
Data Mining/methods, Gene Expression Profiling, Gene Expression, Genetic Markers/genetics, Algorithms, Tumor Biomarkers/genetics, Gene Expression Profiling/methods, Humans, Neoplasms/genetics
10.
Comput Methods Programs Biomed; 252: 108236, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776829

ABSTRACT

BACKGROUND AND OBJECTIVE: Strain analysis provides insights into myocardial function and cardiac condition evaluation. However, the anatomical characteristics of the left atrium (LA) inherently limit LA strain analysis with echocardiography. Cardiac computed tomography (CT), with its superior spatial resolution, has become critical for in-depth evaluation of LA function. Recent studies have explored the feasibility of CT-derived strain; however, they relied on manually selected regions of interest (ROIs) and mainly focused on the left ventricle (LV). This study aimed to propose a first-of-its-kind fully automatic deep learning (DL)-based framework for three-dimensional (3D) LA strain extraction on cardiac CT. METHODS: A total of 111 patients undergoing ECG-gated contrast-enhanced CT for evaluation of subclinical atrial fibrillation (AF) were enrolled in this study. We developed a 3D strain extraction framework for cardiac CT images, comprising a 2.5D GN-U-Net network for LA segmentation, axis-oriented 3D view extraction, and LA strain measurement. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC). The model-extracted LA volumes and emptying fraction (EF) were compared with ground-truth measurements using the intraclass correlation coefficient (ICC), correlation coefficient (r), and Bland-Altman plots. The automatically extracted LA strains were evaluated against LA strains measured from 2D echocardiograms. We used this framework to gauge the effect of AF burden on LA strain, employing the atrial high-rate episode (AHRE) burden as the measurement parameter. RESULTS: The GN-U-Net LA segmentation network achieved a DSC score of 0.9603 on the test set. The framework-extracted LA estimates demonstrated excellent ICCs of 0.949 (95% CI: 0.93-0.97) for minimal LA volume, 0.904 (95% CI: 0.86-0.93) for maximal LA volume, and 0.902 (95% CI: 0.86-0.93) for EF, compared with expert measurements. The framework-extracted LA strains demonstrated moderate agreement with the LA strains based on 2D echocardiography (ICCs > 0.703). Patients with AHRE > 6 min had significantly lower global strain and LAEF, as extracted by the framework, than those with AHRE ≤ 6 min. CONCLUSION: These promising results highlight the feasibility and clinical usefulness of automatically extracting 3D LA strain from CT images using a DL-based framework. This tool could provide a 3D-based alternative to echocardiography for assessing LA function.
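Two of the evaluation quantities named above are simple to compute once masks and volumes are available; a minimal sketch is below (synthetic arrays and numbers, not the study's data).

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    return 2.0 * np.logical_and(pred, true).sum() / (pred.sum() + true.sum())

def emptying_fraction(v_max, v_min):
    """LA emptying fraction from maximal and minimal LA volumes (same units)."""
    return (v_max - v_min) / v_max

pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
true = np.zeros((64, 64, 64), dtype=np.uint8); true[22:40, 20:40, 20:40] = 1
print(round(dice_coefficient(pred, true), 3), emptying_fraction(90.0, 35.0))
```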


Subjects
Atrial Fibrillation, Heart Atria, Three-Dimensional Imaging, X-Ray Computed Tomography, Humans, Heart Atria/diagnostic imaging, Heart Atria/physiopathology, X-Ray Computed Tomography/methods, Atrial Fibrillation/diagnostic imaging, Atrial Fibrillation/physiopathology, Female, Male, Middle Aged, Aged, Deep Learning, Algorithms, Echocardiography/methods
11.
J Chin Med Assoc; 87(4): 369-376, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38334988

ABSTRACT

BACKGROUND: Intensive care unit (ICU) mortality prediction helps to guide therapeutic decision making for critically ill patients. Several scoring systems based on statistical techniques have been developed for this purpose. In this study, we developed a machine-learning model to predict patient mortality in the very early stage of ICU admission. METHODS: This study was performed with data from all patients admitted to the intensive care units of a tertiary medical center in Taiwan from 2009 to 2018. The patients' comorbidities, co-medications, vital signs, and laboratory data on the day of ICU admission were obtained from electronic medical records. We constructed random forest and extreme gradient boosting (XGBoost) models to predict ICU mortality, and compared their performance with that of traditional scoring systems. RESULTS: Data from 12,377 patients was allocated to training (n = 9901) and testing (n = 2476) datasets. The median patient age was 70.0 years; 9210 (74.41%) patients were under mechanical ventilation in the ICU. The areas under receiver operating characteristic curves for the random forest and XGBoost models (0.876 and 0.880, respectively) were larger than those for the Acute Physiology and Chronic Health Evaluation II score (0.738), Sequential Organ Failure Assessment score (0.747), and Simplified Acute Physiology Score II (0.743). The fraction of inspired oxygen on ICU admission was the most important predictive feature across all models. CONCLUSION: The XGBoost model most accurately predicted ICU mortality and was superior to traditional scoring systems. Our results highlight the utility of machine learning for ICU mortality prediction in the Asian population.
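A minimal version of the modelling step described above—tree ensembles on admission-day features evaluated by AUROC—looks like the sketch below. The data here is synthetic and the feature/label construction is purely illustrative.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 30))                               # admission-day features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(size=2000) > 1.5).astype(int)   # "ICU mortality" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

for name, model in [("random forest", RandomForestClassifier(n_estimators=300, random_state=0)),
                    ("XGBoost", XGBClassifier(n_estimators=300))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```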


Subjects
Critical Illness, Intensive Care Units, Humans, Aged, Hospitals, Hospitalization, Machine Learning
12.
BMC Bioinformatics; 14: 230, 2013 Jul 21.
Article in English | MEDLINE | ID: mdl-23870110

ABSTRACT

BACKGROUND: Frequent pattern mining applied to microarray datasets appears to be a promising strategy for identifying relationships between gene expression levels. Unfortunately, this analysis identifies too many itemsets (sets of co-expressed genes), since it neither considers the importance of each gene to a cellular response within biological processes nor takes into account temporal properties under treatment-control matched conditions in a microarray dataset. RESULTS: We propose a method termed TIIM (Top-k Impactful Itemsets Miner), which only requires specifying a user-defined number k to explore the top k itemsets with the most significantly differentially co-expressed genes between 2 conditions in a time course. To give genes different weights, a table of impact degrees for each gene was constructed based on the number of neighboring genes within gene regulatory networks that are differentially expressed in the dataset. Finally, the resulting top-k impactful itemsets were manually evaluated against previous literature and analyzed by a Gene Ontology enrichment method. CONCLUSIONS: The proposed method was evaluated on 2 publicly available time-course microarray datasets with 2 different experimental conditions. In both datasets it identified potential itemsets of co-expressed genes supported by the literature and showed higher accuracy than the 2 corresponding control methods: i) performing TIIM without considering gene expression differentiation between the 2 experimental conditions and impact degrees, and ii) performing TIIM with a constant impact degree for each gene. Several new gene regulations involved in these itemsets were found to be useful for biologists and provided further insights into the mechanisms underpinning biological processes. The Java source code and other related materials used in this study are available at "http://websystem.csie.ncku.edu.tw/TIIM_Program.rar".
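The scoring idea—weighting an itemset's support by the impact degrees of its genes—can be illustrated compactly. The samples, impact values, and parameters below are invented for the example and do not come from the paper; the real TIIM additionally handles the temporal, treatment-control structure.

```python
from itertools import combinations

# Toy gene sets flagged as differentially co-expressed per sample pair, plus a
# made-up "impact degree" per gene standing in for the number of differentially
# expressed neighbours in a gene regulatory network.
samples = [{"A", "B", "C"}, {"A", "B"}, {"A", "C", "D"}, {"B", "C"}, {"A", "B", "C"}]
impact = {"A": 3, "B": 1, "C": 2, "D": 5}

def itemset_score(itemset):
    """Support of the itemset weighted by the summed impact degrees of its genes."""
    support = sum(itemset <= s for s in samples) / len(samples)
    return support * sum(impact[g] for g in itemset)

k, max_size = 3, 2
candidates = [frozenset(c) for r in range(1, max_size + 1)
              for c in combinations(impact, r)]
top_k = sorted(candidates, key=itemset_score, reverse=True)[:k]
print([(sorted(s), round(itemset_score(s), 2)) for s in top_k])
```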


Subjects
Gene Expression Profiling, Gene Expression Regulation, Gene Regulatory Networks, Algorithms, Factual Databases, Gene Expression Profiling/methods, Genes, Oligonucleotide Array Sequence Analysis/methods
13.
BMC Bioinformatics; 14 Suppl 12: S3, 2013.
Article in English | MEDLINE | ID: mdl-24267918

ABSTRACT

BACKGROUND: Observing gene expression changes that imply gene regulation across repeated time-course experiments has become increasingly important; however, there is no effective method for handling this kind of data. For instance, in a clinical/biological progression such as inflammatory response or cancer formation, a great number of differentially expressed genes at different time points can be identified through a large-scale microarray approach. For each repeated experiment with different samples, converting the microarray datasets into transactional databases with significant singleton genes at each time point would allow sequential patterns implying gene regulation to be identified. Although traditional sequential pattern mining methods have been successfully proposed and widely used for other topics, such as mining customer purchasing sequences from a transactional database, to our knowledge they are not suitable for such biological datasets because every transaction in the converted database may contain too many items/genes. RESULTS: In this paper, we propose a new algorithm called CTGR-Span (Cross-Timepoint Gene Regulation Sequential pattern) to efficiently mine CTGR-SPs (Cross-Timepoint Gene Regulation Sequential Patterns) even on larger datasets where traditional algorithms are infeasible. CTGR-Span includes several biologically designed parameters based on the characteristics of gene regulation. We performed an optimal parameter tuning process using GO enrichment analysis to yield more biologically meaningful CTGR-SPs. The proposed method was evaluated on two publicly available human time-course microarray datasets and outperformed the traditional methods in terms of execution efficiency. When evaluated against previous literature, the resulting patterns also correlated strongly with the experimental backgrounds of the datasets used in this study. CONCLUSIONS: We propose an efficient algorithm, CTGR-Span, to mine biologically meaningful CTGR-SPs. We postulate that biologists can benefit from the new algorithm, since patterns implying gene regulation could provide further insights into the mechanisms of novel gene regulation during a biological or clinical progression. The Java source code, program tutorial, and other related materials used in this program are available at http://websystem.csie.ncku.edu.tw/CTGR-Span.rar.


Subjects
Algorithms, Data Mining, Gene Expression Regulation, Cluster Analysis, Humans, Oligonucleotide Array Sequence Analysis/methods
14.
IEEE J Biomed Health Inform; 27(7): 3610-3621, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37130258

ABSTRACT

Sepsis is among the leading causes of morbidity and mortality in modern intensive care units (ICU). Accurate and early warning enables timely antibiotic treatment of sepsis, which is critical for improving outcomes, saving lives, and reducing medical costs. However, the earlier a prediction of sepsis onset is made, the fewer monitoring measurements are available to process, lowering prediction accuracy; conversely, a more accurate prediction can be expected by analyzing more data, but this delays the warning for life-threatening events. In this study, we propose a novel deep reinforcement learning framework for the early prediction of sepsis, called the Policy Network-based Early Warning Monitoring System (PoEMS). The proposed PoEMS provides accurate and early predictions of sepsis onset by analyzing variable-length electronic medical records (EMR). Furthermore, the system continuously monitors the patient's health status and issues an early warning only when a high risk of sepsis is detected. Additionally, a controlling parameter is provided for users to adjust the trade-off between earliness and accuracy, giving the model the adaptability to meet various medical requirements in practical scenarios. Through a series of experiments on real-world medical data, the results demonstrate that our proposed PoEMS achieves a high AUROC of more than 91% for early prediction, and predicts sepsis onset earlier and more accurately than other state-of-the-art competing methods.
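A heavily simplified stand-in for the monitoring loop is sketched below: a recurrent network scores each new set of measurements and an alarm fires once the score crosses a threshold, which plays the role of the earliness-versus-accuracy control. The real PoEMS learns this decision with deep reinforcement learning; the architecture, feature count, and threshold here are assumptions.

```python
import torch
import torch.nn as nn

class EarlyWarningPolicy(nn.Module):
    """Toy monitoring model: at every new observation it outputs a sepsis risk
    score; PoEMS instead trains the wait/alarm decision with reinforcement learning."""
    def __init__(self, n_features=10, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.gru(x)
        return torch.sigmoid(self.head(out))     # risk score at every timestep

def monitor(policy, record, threshold=0.8):
    """Raise the alarm at the first timestep whose risk exceeds the threshold;
    lowering the threshold trades accuracy for earlier warnings."""
    with torch.no_grad():
        risk = policy(record.unsqueeze(0)).squeeze()
    above = (risk > threshold).nonzero()
    return int(above[0]) if len(above) else None

policy = EarlyWarningPolicy()
print(monitor(policy, torch.randn(24, 10)))      # 24 hourly measurement vectors
```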


Subjects
Sepsis, Humans, Sepsis/diagnosis, Physiological Monitoring, Intensive Care Units
15.
Article in English | MEDLINE | ID: mdl-36654772

ABSTRACT

OBJECTIVE: Early revascularization of the occluded coronary artery in patients with ST elevation myocardial infarction (STEMI) has been demonstrated to decrease mortality and morbidity. Currently, physicians rely on features of electrocardiograms (ECGs) to identify the most likely location of coronary arteries related to an infarct. We sought to predict these culprit arteries more accurately by using deep learning. METHODS: A deep learning model with a convolutional neural network (CNN) that incorporated ECG signals was trained on 384 patients with STEMI who underwent primary percutaneous coronary intervention (PCI) at a medical center. The performances of various signal preprocessing methods (short-time Fourier transform [STFT] and continuous wavelet transform [CWT]) with different lengths of input ECG signals were compared. The sensitivity and specificity for predicting each infarct-related artery and the overall accuracy were evaluated. RESULTS: ECG signal preprocessing with STFT achieved fair overall prediction accuracy (79.3%). The sensitivity and specificity for predicting the left anterior descending artery (LAD) as the culprit vessel were 85.7% and 88.4%, respectively. The sensitivity and specificity for predicting the left circumflex artery (LCX) were 37% and 99%, respectively, and the sensitivity and specificity for predicting the right coronary artery (RCA) were 88.4% and 82.4%, respectively. Using CWT (Morlet wavelet) for signal preprocessing resulted in better overall accuracy (83.7%) compared with STFT preprocessing. The sensitivity and specificity were 93.46% and 80.39% for LAD, 56% and 99.7% for LCX, and 85.9% and 92.9% for RCA, respectively. CONCLUSION: Our study demonstrated that deep learning with a CNN could facilitate the identification of the culprit coronary artery in patients with STEMI. Preprocessing ECG signals with CWT was demonstrated to be superior to doing so with STFT.
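The two preprocessing routes compared above can be reproduced with standard libraries: SciPy's STFT and PyWavelets' CWT with a Morlet wavelet both turn a 1-D ECG lead into a time-frequency image suitable as CNN input. The sampling rate, window, and scales below are assumptions, and the signal is a synthetic surrogate rather than study data.

```python
import numpy as np
import pywt
from scipy.signal import stft

fs = 500                                            # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)   # surrogate ECG lead

# Short-time Fourier transform: one time-frequency image per lead.
f, seg_t, Zxx = stft(ecg, fs=fs, nperseg=256)
stft_image = np.abs(Zxx)

# Continuous wavelet transform with a Morlet wavelet, the better-performing option reported above.
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(ecg, scales, "morl", sampling_period=1 / fs)
cwt_image = np.abs(coeffs)

print(stft_image.shape, cwt_image.shape)            # both can be resized and stacked per lead
```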


Subjects
Deep Learning, Percutaneous Coronary Intervention, ST Elevation Myocardial Infarction, Humans, ST Elevation Myocardial Infarction/diagnosis, Heart, Coronary Vessels/surgery
16.
Neural Netw; 158: 171-187, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36459884

ABSTRACT

Continual learning is an emerging research branch of deep learning that aims to learn a model for a series of tasks continually without forgetting knowledge obtained from previous tasks. Despite receiving considerable attention in the research community, temporal-based continual learning techniques are still underutilized. In this paper, we address the problem of temporal-based continual learning by allowing a model to learn continuously on temporal data. To solve the catastrophic forgetting problem when learning temporal data in task-incremental scenarios, we propose a novel method based on attentive recurrent neural networks, called Temporal Teacher Distillation (TTD). TTD addresses catastrophic forgetting in an attentive recurrent neural network based on three hypotheses, namely the Rotation Hypothesis, the Redundant Hypothesis, and the Recover Hypothesis. The Rotation and Redundant Hypotheses can cause an attention-shift phenomenon, which degrades model performance on previously learned tasks. Moreover, not considering the Recover Hypothesis incurs extra memory usage when continually training on different tasks. Therefore, the proposed TTD, built on these hypotheses, complements the shortcomings of existing methods for temporal-based continual learning. To evaluate the proposed method in the task-incremental setting, we use a public dataset, WIreless Sensor Data Mining (WISDM), and a synthetic dataset, Split-QuickDraw-100. According to the experimental results, the proposed TTD significantly outperforms state-of-the-art methods by up to 14.6% and 45.1% in terms of accuracy and forgetting measures, respectively. To the best of our knowledge, this is the first work to study continual learning over real-world incremental categories for temporal data classification with attentive recurrent neural networks and to provide a proper application-oriented scenario.


Subjects
Data Mining, Neural Networks (Computer), Rotation, Attention
17.
Heliyon; 9(1): e12945, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36699283

ABSTRACT

Rationale and objectives: Selecting regions of interest (ROIs) for left atrial appendage (LAA) filling defect assessment can be time-consuming and prone to subjectivity. This study aimed to develop and validate a novel artificial intelligence (AI), deep learning (DL)-based framework for automatic filling defect assessment on CT images for patients with clinical and subclinical atrial fibrillation (AF). Materials and methods: A total of 443,053 CT images were used for DL model development and testing. Images were analyzed by the AI framework and by expert cardiologists/radiologists. LAA segmentation performance was evaluated using the Dice coefficient. The agreement between manual and automatic LAA ROI selections was evaluated using intraclass correlation coefficient (ICC) analysis. Receiver operating characteristic (ROC) curve analysis was used to assess filling defects based on the computed LAA to ascending aorta Hounsfield unit (HU) ratios. Results: A total of 210 patients (Group 1: subclinical AF, n = 105; Group 2: clinical AF with stroke, n = 35; Group 3: AF for catheter ablation, n = 70) were enrolled. LAA volume segmentation achieved Dice scores of 0.931-0.945. The LAA ROI selection demonstrated excellent agreement (ICC ≥ 0.895, p < 0.001) with manual selection on the test sets. The automatic framework achieved an excellent AUC score of 0.979 in filling defect assessment. The ROC-derived optimal HU ratio threshold for filling defect detection was 0.561. Conclusion: The novel AI-based framework could accurately segment the LAA region and select ROIs while effectively avoiding trabeculae for filling defect assessment, achieving close-to-expert performance. This technique may help preemptively detect the potential thromboembolic risk for AF patients.
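The final decision rule is straightforward once the LAA and ascending-aorta ROIs are available: compare their mean attenuations and flag ratios below the ROC-derived cut-off. A toy sketch with synthetic HU values is below; the 0.561 threshold is the value quoted in the abstract, and everything else (array shapes, mask locations) is made up.

```python
import numpy as np

def filling_defect(ct_volume, laa_mask, aorta_mask, threshold=0.561):
    """Flag a potential LAA filling defect when the mean LAA attenuation falls
    below `threshold` times the mean ascending-aorta attenuation. In the full
    pipeline the masks would come from the segmentation network."""
    hu_ratio = ct_volume[laa_mask].mean() / ct_volume[aorta_mask].mean()
    return hu_ratio, hu_ratio < threshold

ct = np.random.normal(300, 30, size=(64, 64, 64))       # synthetic contrast-enhanced HU values
laa = np.zeros_like(ct, dtype=bool);   laa[10:20, 10:20, 10:20] = True
aorta = np.zeros_like(ct, dtype=bool); aorta[40:50, 40:50, 40:50] = True
ct[laa] *= 0.5                                           # simulate poor LAA contrast filling
print(filling_defect(ct, laa, aorta))
```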

18.
Transl Vis Sci Technol; 12(11): 1, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37910082

ABSTRACT

Purpose: We aimed to determine whether a convolutional neural network (CNN)-based method (built from a feature extractor and an identifier) can be applied to monitor the progression of keratitis during the management of suspected microbial keratitis (MK). Methods: This multicenter longitudinal cohort study included patients with suspected MK undergoing serial external eye photography at the 5 branches of Chang Gung Memorial Hospital from August 20, 2000, to August 19, 2020. Data were primarily analyzed from January 1 to March 25, 2022. The CNN-based model was evaluated via the F1 score and accuracy. The area under the receiver operating characteristic curve (AUROC) was used to measure the precision-recall trade-off. Results: The model was trained using 1456 image pairs from 468 patients. Models in which both the identifier and the feature extractor were trained (full training) achieved statistically significantly higher accuracy (P < 0.05) than models in which only the identifier was trained, verified on 408 image pairs from 117 patients. The full-training EfficientNet b3-based model showed F1 scores of 90.2% (getting better) and 82.1% (becoming worse), 87.3% accuracy, and 94.2% AUROC for 505 getting-better and 272 becoming-worse test image pairs from 452 patients. Conclusions: A CNN-based deep learning approach applied to suspected MK can monitor progression or regression during treatment by comparing external eye image pairs. Translational Relevance: This study bridges the gap between state-of-the-art CNN-based deep learning algorithms for ocular image analysis and the clinical care of patients with suspected MK.


Subjects
Keratitis, Humans, Longitudinal Studies, Keratitis/diagnosis, Eye, Neural Networks (Computer), Algorithms
19.
Bioinformatics; 27(22): 3142-8, 2011 Nov 15.
Article in English | MEDLINE | ID: mdl-21926125

ABSTRACT

MOTIVATION: Association rule analysis methods are important techniques applied to gene expression data for finding expression relationships between genes. However, previous methods implicitly assume that all genes have similar importance, or they ignore the individual importance of each gene; the relation intensity between any two items has never been taken into consideration. We therefore propose a technique named the REMMAR (RElational-based Multiple Minimum supports Association Rules) algorithm to tackle this problem. This method adjusts the minimum relation support (MRS) for each gene pair depending on the regulatory relation intensity, in order to discover more important association rules with stronger biological meaning. RESULTS: In the case study of this research, REMMAR used the shortest distance between any two genes in the Saccharomyces cerevisiae gene regulatory network (GRN) as the relation intensity to discover association rules from two S. cerevisiae gene expression datasets. Under experimental evaluation, REMMAR generated more rules with stronger relation intensity and filtered out rules without biological meaning in the protein-protein interaction network (PPIN). Furthermore, the proposed method achieved a higher precision (100%) than the reference Apriori method (87.5%) for the discovered rules, based on a literature survey. The proposed REMMAR algorithm can therefore discover stronger association rules reflecting biological relationships that are missed by traditional methods, assisting biologists in complicated genetic exploration.


Subjects
Algorithms, Gene Expression Profiling/methods, Oligonucleotide Array Sequence Analysis/methods, Gene Expression Regulation, Gene Regulatory Networks, Protein Interaction Maps, Saccharomyces cerevisiae/genetics, Saccharomyces cerevisiae/metabolism, Saccharomyces cerevisiae Proteins/genetics, Saccharomyces cerevisiae Proteins/metabolism
20.
Comput Methods Programs Biomed; 216: 106666, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35124480

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence rate of skin cancers is increasing worldwide annually. Using machine learning and deep learning for skin lesion classification is an essential research topic. In this study, we formulate a major-type misclassification problem that previous studies did not consider in multi-class skin lesion classification. Addressing the major-type misclassification problem is significant for real-world computer-aided diagnosis. METHODS: This study presents a novel method, Hierarchy-Aware Contrastive Learning with Late Fusion (HAC-LF), to improve the overall performance of multi-class skin lesion classification. In HAC-LF, we design a new loss function, the Hierarchy-Aware Contrastive Loss (HAC Loss), to reduce the impact of the major-type misclassification problem. A late fusion method is applied to balance major-type and multi-class classification performance. RESULTS: We conducted a series of experiments with the ISIC 2019 Challenge dataset, which consists of three skin lesion datasets, to verify the performance of our method. The results show that the proposed method surpasses representative deep learning methods for skin lesion classification in all evaluation metrics used in this study. HAC-LF achieves 0.871, 0.842, and 0.889 for accuracy, sensitivity, and specificity in major-type classification, respectively. Under the imbalanced class distribution, HAC-LF outperforms the baseline model in terms of the sensitivity of minority classes. CONCLUSIONS: This research formulates a major-type misclassification problem and proposes HAC-LF to address it and boost multi-class skin lesion classification performance. The advantage of HAC-LF is that the proposed HAC Loss reduces the impact of major-type misclassification by decreasing the major-type error rate. Beyond the medical field, HAC-LF is promising for other domains with hierarchically structured data.
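As a rough sketch of the late-fusion idea only (not the published HAC-LF implementation; the mixing weight and class-to-major mapping are assumptions), fine-grained class scores can be blended with the probability of each class's own major type so that gross major-type errors are penalised:

```python
import torch

def late_fusion(multi_class_logits, major_type_logits, class_to_major, alpha=0.5):
    """Blend fine-grained class probabilities with major-type probabilities:
    each class score is mixed with the probability of that class's major type.
    `class_to_major` maps every fine class to a major-type index."""
    p_class = torch.softmax(multi_class_logits, dim=-1)
    p_major = torch.softmax(major_type_logits, dim=-1)
    expanded = p_major[:, class_to_major]                # major-type probability per fine class
    return alpha * p_class + (1 - alpha) * expanded      # fused score, argmax gives the prediction

class_to_major = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])  # 8 lesion classes, 3 major types (assumed)
fused = late_fusion(torch.randn(4, 8), torch.randn(4, 3), class_to_major)
print(fused.argmax(dim=1))
```

Raising alpha leans on the fine-grained head; lowering it pulls predictions toward the most probable major type.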


Subjects
Skin Diseases, Skin Neoplasms, Benchmarking, Computer-Assisted Diagnosis, Humans, Machine Learning, Skin Neoplasms/diagnosis