1.
Int Ophthalmol ; 44(1): 130, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38478099

ABSTRACT

PURPOSE: This study aimed to build a normative database of superficial retinal vessel density (SVD) and to evaluate how age and axial length (AL) influence the retinal microvasculature in non-glaucomatous eyes, as measured with optical coherence tomography angiography (OCTA). METHODS: We included 500 eyes of 290 healthy subjects visiting a county hospital. Each participant underwent comprehensive ophthalmological examination and OCTA to measure the SVD and thickness of the macular and peripapillary areas. Correlations between SVD and age or AL were analyzed with multivariable linear regression models using generalized estimating equations. RESULTS: Age was negatively correlated with the SVD of the superior, central, and inferior macular areas and the superior peripapillary area, with rates of decrease of 1.06%, 1.36%, 0.84%, and 0.66% per decade, respectively. Inferior peripapillary SVD, however, showed no significant correlation with age. AL was negatively correlated with the SVD of the inferior macular area and the superior and inferior peripapillary areas, with coefficients of -0.522%/mm, -0.733%/mm, and -0.664%/mm, respectively. AL was also negatively correlated with the thickness of the retinal nerve fiber layer and inferior ganglion cell complex (p = 0.004). CONCLUSION: Age and AL were the two main factors affecting SVD. Furthermore, AL, a proxy for the degree of myopia, had a greater effect than age and affected thickness more strongly than SVD. This relationship has important implications because myopia is a significant issue in modern cities.


Subject(s)
Myopia , Retinal Vessels , Humans , Retina , Tomography, Optical Coherence/methods , Nerve Fibers , Aging
2.
J Med Internet Res ; 25: e48834, 2023 12 29.
Article in English | MEDLINE | ID: mdl-38157232

ABSTRACT

BACKGROUND: Traditional methods for investigating work hours rely on an employee's physical presence at the worksite. However, accurately identifying break times at the worksite and distinguishing remote work outside the worksite poses challenges in work hour estimations. Machine learning has the potential to differentiate between human-smartphone interactions at work and off work. OBJECTIVE: In this study, we aimed to develop a novel approach called "probability in work mode," which leverages human-smartphone interaction patterns and corresponding GPS location data to estimate work hours. METHODS: To capture human-smartphone interactions and GPS locations, we used the "Staff Hours" app, developed by our team, to passively and continuously record participants' screen events, including timestamps of notifications, screen on or off occurrences, and app usage patterns. Extreme gradient boosted trees were used to transform these interaction patterns into a probability, while 1-dimensional convolutional neural networks generated successive probabilities based on previous sequence probabilities. The resulting probability in work mode allowed us to discern periods of office work, off-work, breaks at the worksite, and remote work. RESULTS: Our study included 121 participants, contributing to a total of 5503 person-days (person-days represent the cumulative number of days across all participants on which data were collected and analyzed). The developed machine learning model exhibited an average prediction performance, measured by the area under the receiver operating characteristic curve, of 0.915 (SD 0.064). Work hours estimated using the probability in work mode (higher than 0.5) were significantly longer (mean 11.2, SD 2.8 hours per day) than the GPS-defined counterparts (mean 10.2, SD 2.3 hours per day; P<.001). This discrepancy was attributed to the higher remote work time of 111.6 (SD 106.4) minutes compared to the break time of 54.7 (SD 74.5) minutes. 
CONCLUSIONS: Our novel approach, the probability in work mode, harnessed human-smartphone interaction patterns and machine learning models to enhance the precision and accuracy of work hour investigation. By integrating human-smartphone interactions and GPS data, our method provides valuable insights into work patterns, including remote work and breaks, offering potential applications in optimizing work productivity and well-being.


Subject(s)
Machine Learning , Smartphone , Humans , Algorithms , Neural Networks, Computer , Probability
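A minimal sketch of the final step described above: turning a sequence of per-interval "probability in work mode" values into estimated work hours. A moving average stands in for the paper's 1-D CNN smoothing, and the 10-minute interval length and window size are assumptions for illustration.

```python
import numpy as np

def work_hours(prob, minutes_per_step=10, window=6, threshold=0.5):
    """Smooth per-interval work-mode probabilities with a moving average
    (a stand-in for the paper's 1-D CNN over probability sequences) and
    count intervals above the threshold as work time."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(prob, kernel, mode="same")
    hours = (smoothed > threshold).sum() * minutes_per_step / 60
    return smoothed, hours

# Toy day in 10-minute steps: off work, then 8 h in work mode, then off again
prob = np.concatenate([np.full(48, 0.1), np.full(48, 0.9), np.full(48, 0.1)])
smoothed, hours = work_hours(prob)
print(hours)  # close to 8 hours (edges are blurred slightly by the smoothing)
```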
3.
Entropy (Basel) ; 23(2)2021 Feb 11.
Article in English | MEDLINE | ID: mdl-33670368

ABSTRACT

Deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware: a platform with high-performance GPUs and ample memory can support neural networks with many layers and kernels. Naively pursuing high-cost hardware, however, would likely hold back the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we isomorphically map the image space onto a non-interacting physical system and treat image voxels as particle-like clusters. We then recast the Fermi-Dirac distribution as a correction function that normalizes voxel intensity and filters out insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-Net for algorithmic validation, and the proposed Fermi-Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the gamma correction function, the proposed algorithm saves at least 38% of computational time on a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the correction functions employed, the proposed Fermi-Dirac correction function exhibits better image augmentation and segmentation capabilities.
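A minimal sketch of a Fermi-Dirac-style correction function as described above: intensities are passed through f(x) = 1 / (exp((x - mu)/T) + 1), where the "Fermi level" mu separates significant voxels from background and the "temperature" T controls the sharpness of the cutoff. Choosing mu as the median intensity and the sign convention used here are our illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fermi_dirac_correction(img, mu=None, temperature=0.1):
    """Normalize intensities through a Fermi-Dirac distribution.
    Intensities are first min-max scaled to [0, 1]; the output is
    inverted (1 - f) so that bright voxels stay bright."""
    x = (img - img.min()) / (img.max() - img.min() + 1e-12)
    if mu is None:
        mu = np.median(x)  # illustrative choice of the 'Fermi level'
    return 1.0 - 1.0 / (np.exp((x - mu) / temperature) + 1.0)

# Toy 3-D volume standing in for an MRI scan
vol = np.random.default_rng(1).normal(100, 20, size=(8, 64, 64))
out = fermi_dirac_correction(vol)  # values in (0, 1), same shape as vol
```

A small temperature approaches a hard threshold at mu (filtering out low-intensity clusters); a large one approaches a smooth rescaling.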

4.
Gastroenterology ; 154(3): 568-575, 2018 02.
Article in English | MEDLINE | ID: mdl-29042219

ABSTRACT

BACKGROUND & AIMS: Narrow-band imaging is an image-enhanced form of endoscopy used to observe microstructures and capillaries of the mucosal epithelium, which allows for real-time prediction of the histologic features of colorectal polyps. However, narrow-band imaging expertise is required to differentiate hyperplastic from neoplastic polyps with high accuracy. We developed and tested a system of computer-aided diagnosis with a deep neural network (DNN-CAD) to analyze narrow-band images of diminutive colorectal polyps. METHODS: We collected 1476 images of neoplastic polyps and 681 images of hyperplastic polyps, obtained from the picture archiving and communications system database of a tertiary hospital in Taiwan. Histologic findings from the polyps were also collected and used as the reference standard. The images and data were used to train the DNN. A test set of images (96 hyperplastic and 188 neoplastic polyps, smaller than 5 mm), obtained from patients who underwent colonoscopies from March 2017 through August 2017, was then used to test the diagnostic ability of the DNN-CAD vs endoscopists (2 expert and 4 novice), who were asked to classify the images of the test set as neoplastic or hyperplastic. Their classifications were compared with findings from histologic analysis. The primary outcome measures were diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic time, which were compared among the DNN-CAD, the novice endoscopists, and the expert endoscopists. The study was designed to detect a difference of 10% in accuracy by a 2-sided McNemar test. RESULTS: In the test set, the DNN-CAD identified neoplastic or hyperplastic polyps with 96.3% sensitivity, 78.1% specificity, a PPV of 89.6%, and an NPV of 91.5%. Fewer than half of the novice endoscopists classified polyps with an NPV of 90% (their NPVs ranged from 73.9% to 84.0%). DNN-CAD classified polyps as neoplastic or hyperplastic in 0.45 ± 0.07 seconds, shorter than the time required by experts (1.54 ± 1.30 seconds) and nonexperts (1.77 ± 1.37 seconds) (both P < .001). DNN-CAD classified polyps with perfect intra-observer agreement (kappa score of 1), whereas intra-observer and inter-observer agreement among the endoscopists was low. CONCLUSIONS: We developed a system called DNN-CAD to identify neoplastic or hyperplastic colorectal polyps smaller than 5 mm. The system classified polyps with a PPV of 89.6% and an NPV of 91.5%, in a shorter time than endoscopists. This deep-learning model has potential not only for endoscopic image recognition but also for other forms of medical image analysis, including sonography, computed tomography, and magnetic resonance imaging.


Subject(s)
Colonic Polyps/pathology , Colonoscopy/methods , Colorectal Neoplasms/pathology , Decision Support Techniques , Diagnosis, Computer-Assisted , Image Interpretation, Computer-Assisted , Narrow Band Imaging , Automation , Colonic Polyps/classification , Colorectal Neoplasms/classification , Databases, Factual , Humans , Hyperplasia , Neural Networks, Computer , Observer Variation , Predictive Value of Tests , Reproducibility of Results , Retrospective Studies , Taiwan , Tumor Burden
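The four figures reported above follow directly from a 2x2 confusion matrix. In the sketch below, the raw counts are back-calculated from the stated test-set sizes (188 neoplastic, 96 hyperplastic) and rates, so they are an approximation rather than source data.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard 2x2 confusion-matrix metrics: sensitivity, specificity,
    positive predictive value, negative predictive value."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts back-calculated from the abstract: 96.3% of 188 neoplastic detected,
# 78.1% of 96 hyperplastic correctly rejected -- approximate, not source data.
m = diagnostic_metrics(tp=181, fn=7, tn=75, fp=21)
print(m)  # sensitivity ~0.963, specificity ~0.781, PPV ~0.896, NPV ~0.915
```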
5.
J Formos Med Assoc ; 118(1 Pt 3): 457-462, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30060982

ABSTRACT

BACKGROUND/PURPOSE: To investigate the knowledge and learning ability of glaucoma patients regarding their anti-glaucoma topical medications. METHODS: Patients on regular follow-up at the Glaucoma Clinic of Hsin-Chu General Hospital were recruited. After detailed ocular examinations, the participants were asked to recall and identify their glaucoma eye drops. The same test was repeated 3 months later. The results of both tests, the learning ability of patients regarding their glaucoma drugs, and the relationship between learning ability and demographic variables were evaluated. RESULTS: Two hundred eighty-seven glaucoma patients participated in this study. Of the study population, 25.8% and 57.1% could recall their topical medication at the first and second tests, whereas 72.1% and 88.5% could identify their prescribed eye drops at the first and second tests, respectively. Approximately 34% of the participants showed improved knowledge at the repeat test, whereas 40% showed no improvement. Participants with better learning ability were more likely to be younger, to have a higher level of education, and to have less visual field impairment. CONCLUSION: The knowledge of glaucoma patients in Taiwan regarding their prescribed medication was deficient. Physician effort could improve knowledge of the prescribed drugs. Patient-centered education should be considered, targeting elderly individuals, illiterate individuals, and those with loss of visual function, to increase compliance with glaucoma medication.


Subject(s)
Glaucoma/drug therapy , Health Knowledge, Attitudes, Practice , Medication Adherence/statistics & numerical data , Ophthalmic Solutions/administration & dosage , Adult , Aged , Drug Monitoring , Female , Humans , Intraocular Pressure , Logistic Models , Male , Middle Aged , Prospective Studies , Surveys and Questionnaires , Taiwan
6.
BMC Bioinformatics ; 19(Suppl 4): 154, 2018 05 08.
Article in English | MEDLINE | ID: mdl-29745829

ABSTRACT

BACKGROUND: A newly emerged class of cancer treatments exploits the intrinsic immune surveillance mechanism that is silenced by malignant cells. Hence, studies of tumor-infiltrating lymphocyte (TIL) populations are key to the success of advanced treatments. In addition to laboratory methods such as immunohistochemistry and flow cytometry, in silico gene expression deconvolution methods are available for analyzing the relative proportions of immune cell types. RESULTS: Herein, we used microarray data from the public domain to profile the gene expression patterns of twenty-two immune cell types. Initially, outliers were detected based on the consistency between gene profiling clustering results and the original cell phenotype annotation. Subsequently, we filtered out genes that are expressed in non-hematopoietic normal tissues and cancer cells. For every pair of immune cell types, we ran t-tests for each gene and defined differentially expressed genes (DEGs) from this comparison. Equal numbers of DEGs were then collected as candidate lists, and the numbers of conditions and minimal values for building signature matrices were calculated. Finally, we used ν-support vector regression to construct a deconvolution model. The performance of our system was evaluated using blood biopsies from 20 adults, in which 9 immune cell types were identified using flow cytometry. The present computations performed better than current state-of-the-art deconvolution methods. CONCLUSIONS: We implemented the proposed method in R and tested its extensibility and usability on Windows, macOS, and Linux. The method, MySort, is wrapped as a pluggable tool for the Galaxy platform, and usage details are available at https://testtoolshed.g2.bx.psu.edu/view/moneycat/mysort/e3afe097e80a .


Subject(s)
Gene Expression Profiling/methods , Leukocytes/metabolism , Cluster Analysis , Computer Simulation , Gene Expression Regulation , Humans , Phenotype
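A minimal sketch of the ν-SVR deconvolution step described above, on synthetic data: a bulk expression profile is modeled as a signature matrix times cell-type fractions, and the clipped, renormalized coefficients of a linear ν-SVR estimate those fractions. The gamma-distributed signature, noise level, and hyperparameters are illustrative assumptions, not MySort's actual settings.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(7)
n_genes, n_cell_types = 200, 4
signature = rng.gamma(2.0, 1.0, size=(n_genes, n_cell_types))   # signature matrix
true_frac = np.array([0.5, 0.3, 0.15, 0.05])                    # ground-truth mixture
mixture = signature @ true_frac + rng.normal(0, 0.05, n_genes)  # noisy bulk sample

# nu-SVR regresses the bulk profile on the signature matrix; the linear
# coefficients (clipped at 0 and renormalized) estimate cell-type fractions
svr = NuSVR(kernel="linear", nu=0.5, C=1.0).fit(signature, mixture)
coef = np.maximum(svr.coef_.ravel(), 0)
fractions = coef / coef.sum()
print(fractions)  # approximately recovers true_frac
```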
7.
Appl Microbiol Biotechnol ; 101(2): 771-781, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27771740

ABSTRACT

Terminal disinfection and daily cleaning have been performed in hospitals in Taiwan for many years to reduce the risk of healthcare-associated infections. However, the effectiveness of these cleaning approaches and the dynamic changes of surface microbiota upon cleaning remain unclear. Here, we report changes in surface bacterial communities under terminal disinfection plus daily cleaning in a medical intensive care unit (MICU) and terminal disinfection only in a respiratory care center (RCC), using 16S ribosomal RNA (rRNA) metagenomics. A total of 36 samples from each ward, 9 per sampling time, were analysed, and clinical isolates were recorded during the sampling period. A large amount of microbial diversity was detected, and human skin microbiota (HSM) was predominant in both wards. The colonization rate of the HSM was higher in the MICU than in the RCC, especially for Moraxellaceae. The RCC showed higher alpha-diversity (p = 0.005519) and a lower UniFrac distance, attributable to the lack of daily cleaning. Moreover, Acinetobacter sp., Streptococcus sp., and Pseudomonas sp. were significantly more abundant in the RCC than in the MICU (paired t-test). We conclude that differences in cleaning practice may contribute to the difference in diversity between the two wards.


Subject(s)
Bacteria/classification , Bacteria/isolation & purification , Disinfection/methods , Environmental Microbiology , Hospitals , Housekeeping, Hospital/methods , Bacteria/genetics , Cluster Analysis , DNA, Bacterial/chemistry , DNA, Bacterial/genetics , DNA, Ribosomal/chemistry , DNA, Ribosomal/genetics , Humans , Intensive Care Units , Metagenomics , Phylogeny , RNA, Ribosomal, 16S/genetics , Sequence Analysis, DNA , Taiwan
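Alpha diversity, as compared between the two wards above, is commonly quantified with the Shannon index H' = -Σ p_i ln p_i over taxon relative abundances. The toy communities below are illustrative, not the study's data.

```python
import math
from collections import Counter

def shannon_index(taxon_counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) from raw taxon counts."""
    total = sum(taxon_counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in taxon_counts.values() if c > 0)

# Toy 16S profiles: an even community is more diverse than a skewed one
even = Counter({"Moraxellaceae": 25, "Streptococcus": 25,
                "Acinetobacter": 25, "Pseudomonas": 25})
skewed = Counter({"Moraxellaceae": 85, "Streptococcus": 5,
                  "Acinetobacter": 5, "Pseudomonas": 5})
print(shannon_index(even), shannon_index(skewed))  # even community scores higher
```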
8.
BMC Bioinformatics ; 17(Suppl 13): 381, 2016 Oct 06.
Article in English | MEDLINE | ID: mdl-27766939

ABSTRACT

BACKGROUND: It has been a challenging task to build a genome-wide phylogenetic tree for a large group of species containing many genes with long nucleotide sequences. The most popular method, called feature frequency profile (FFP-k), finds the frequency distribution of all words of a certain length k over the whole genome sequence using (overlapping) windows of the same length. For a satisfactory result, the recommended word length k ranges from 6 to 15, and it may not be a multiple of 3 (the codon length). The total number of possible words needed for FFP-k can thus range from 4^6 = 4096 to 4^15. RESULTS: We propose a simple improvement over the popular FFP method using only a typical word length of 3. A new method, called Trinucleotide Usage Profile (TUP), is proposed based only on the (relative) frequency distribution using non-overlapping windows of length 3. The total number of possible words needed for TUP is 4^3 = 64, far fewer than the count at the recommended optimal "resolution" for FFP. To build a phylogenetic tree, we propose first representing each species by a TUP vector and then using an appropriate distance measure between pairs of TUP vectors for tree construction. In particular, we propose summarizing a DNA sequence by a matrix of three rows corresponding to the three reading frames, recording the frequency distribution of the non-overlapping words of length 3 in each reading frame. We also provide a numerical measure for comparing trees constructed with various methods. CONCLUSIONS: Compared to the FFP method, our empirical study showed that the proposed TUP method is more capable of building phylogenetic trees with stronger biological support. We further provide some justification for this from the information-theory viewpoint. Unlike the FFP method, the TUP method takes advantage of the fact that the start of the first reading frame is (usually) known. Without this information, the FFP method can only rely on the frequency distribution of overlapping words, which is the average (or mixture) of the frequency distributions of the three possible reading frames. Consequently, we show (from the entropy viewpoint) that the FFP procedure can dilute important gene information and therefore provides less accurate classification.


Subject(s)
Algorithms , Computational Biology/methods , Phylogeny , Reading Frames , Bacteria/genetics , Codon
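The TUP representation described above reduces to counting non-overlapping length-3 words in each of the three reading frames, giving a 3 × 64 matrix of relative frequencies per sequence:

```python
from itertools import product

CODONS = ["".join(p) for p in product("ACGT", repeat=3)]  # 4^3 = 64 words

def tup_matrix(seq):
    """Trinucleotide Usage Profile: for each of the three reading frames,
    the relative frequencies of non-overlapping length-3 words (3 x 64)."""
    seq = seq.upper()
    rows = []
    for frame in range(3):
        words = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        counts = {c: 0 for c in CODONS}
        for w in words:
            if w in counts:          # skip words containing N etc.
                counts[w] += 1
        total = sum(counts.values()) or 1
        rows.append([counts[c] / total for c in CODONS])
    return rows

profile = tup_matrix("ATGGCGTAAATGGCGTAA")  # frame 0: ATG, GCG, TAA each 1/3
```

Pairs of such matrices can then be compared with any vector distance (e.g. Euclidean) to build the distance matrix for tree construction.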
9.
Artif Intell Med ; 149: 102809, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462295

ABSTRACT

Cardiovascular diseases, particularly arrhythmias, remain a leading cause of mortality worldwide. Electrocardiogram (ECG) analysis plays a pivotal role in cardiovascular disease diagnosis. Although previous studies have focused on waveform analysis and model training, integrating additional clinical information, especially demographic data, remains challenging. In this study, we present an innovative approach to ECG classification that incorporates demographic information from patients' medical histories through a colorization technique. Our proposed method maps demographic features onto the (R, G, B) color space through normalized scaling, with each demographic feature corresponding to a distinct color, allowing different ECG leads to be colored. This approach preserves the relationships between data by maintaining the color correlations in the statistical features, enhancing ECG analytics and supporting precision medicine. We conducted experiments with the PTB-XL dataset and achieved 1%-6% improvements in the area under the receiver operating characteristic curve compared with other methods for various classification problems. Notably, our method excelled in multiclass and challenging classification tasks. The combined use of color features and the original waveform shape features enhanced prediction accuracy for various deep learning models. Our findings suggest that colorization is a promising avenue for advancing ECG classification and diagnosis, contributing to improved prediction and diagnosis of cardiovascular diseases and ultimately enhancing clinical outcomes.


Subject(s)
Cardiovascular Diseases , Deep Learning , Humans , Cardiovascular Diseases/diagnosis , Electrocardiography , Precision Medicine
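A minimal sketch of the colorization idea described above: demographic features are min-max scaled into [0, 255] and assigned one per (R, G, B) channel, and the resulting color then tints an ECG lead. The specific features, scaling ranges, and tinting scheme below are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

def demographics_to_rgb(age, height_cm, weight_kg,
                        ranges=((0, 100), (100, 220), (30, 200))):
    """Scale three demographic features into [0, 255] and pack them as an
    (R, G, B) triple -- one feature per channel, per the colorization idea."""
    rgb = []
    for value, (lo, hi) in zip((age, height_cm, weight_kg), ranges):
        scaled = (min(max(value, lo), hi) - lo) / (hi - lo)  # clip, then scale
        rgb.append(round(scaled * 255))
    return tuple(rgb)

def colorize_lead(signal, rgb):
    """Tint a 1-D ECG lead: each sample becomes an RGB pixel whose
    intensity is modulated by the normalized waveform amplitude."""
    s = (signal - signal.min()) / (signal.max() - signal.min() + 1e-12)
    return (s[:, None] * np.array(rgb)).astype(np.uint8)

color = demographics_to_rgb(age=50, height_cm=160, weight_kg=115)
image_row = colorize_lead(np.sin(np.linspace(0, 6.28, 500)), color)
```

Stacking tinted leads row-wise would yield an image that a standard 2-D CNN can consume, with demographics encoded in the color statistics.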
10.
Medicine (Baltimore) ; 103(7): e37112, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363886

ABSTRACT

Chronic kidney disease (CKD) is a major public health concern, but machine learning studies of non-cancer patients with advanced CKD are limited, and the results of machine learning studies on cancer patients with CKD may not apply directly to non-cancer patients. We aimed to conduct a comprehensive investigation of risk factors for the 3-year risk of death among non-cancer advanced CKD patients with an estimated glomerular filtration rate < 60.0 mL/min/1.73 m^2 using several machine learning algorithms. In this retrospective cohort study, we collected data on in-hospital and emergency care patients from 2 hospitals in Taiwan from 2009 to 2019, including their International Classification of Diseases codes at admission and laboratory data from the hospitals' electronic medical records (EMRs). Several machine learning algorithms were used to analyze the potential impact and degree of influence of each factor on mortality and survival. In total, 6565 patients were enrolled from the 2 hospitals in northern Taiwan. After data cleaning, 26 risk factors and approximately 3887 advanced CKD patients from Shuang Ho Hospital were used as the training set; the validation set contained 2299 patients from Taipei Medical University Hospital. Albumin, PT-INR, and age were the top 3 risk factors with the greatest influence on mortality prediction. Random forest had the highest accuracy, above 0.80; MLP and AdaBoost performed better on sensitivity and F1-score than the other methods. Additionally, SVM with a linear kernel had the highest specificity (0.9983), though its sensitivity and F1-score were poor. Logistic regression had the best overall performance, with an area under the receiver operating characteristic curve of 0.8527. Evaluating the EMRs of Taiwanese advanced CKD patients with machine learning algorithms could provide physicians with a good approximation of patients' 3-year risk of death.


Subject(s)
Hospitalization , Renal Insufficiency, Chronic , Humans , Retrospective Studies , Risk Factors , Machine Learning , Renal Insufficiency, Chronic/complications
11.
Bioengineering (Basel) ; 11(4)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38671820

ABSTRACT

BACKGROUND AND OBJECTIVE: Locally advanced rectal cancer (LARC) poses significant treatment challenges due to its location and high recurrence rates, and accurate early detection is vital for treatment planning. With magnetic resonance imaging (MRI) being resource-intensive, this study explores using artificial intelligence (AI) to interpret computed tomography (CT) scans as an alternative, providing a quicker, more accessible diagnostic tool for LARC. METHODS: In this retrospective study, CT images of 1070 T3-4 rectal cancer patients from 2010 to 2022 were analyzed. AI models, trained on 739 cases, were validated using two test sets of 134 and 197 cases. Using techniques such as non-local means filtering, dynamic histogram equalization, and the EfficientNetB0 algorithm, we identified images featuring characteristics of a positive circumferential resection margin (CRM) for the diagnosis of LARC. In the second stage, both hard and soft voting systems were used to ascertain the LARC status of cases; the soft voting system is the novel element for improving case-identification accuracy. The local recurrence rates and overall survival of the cases predicted by our model were assessed to underscore its clinical value. RESULTS: The AI model exhibited high accuracy in identifying CRM-positive images, achieving an area under the curve (AUC) of 0.89 in the first test set and 0.86 in the second. In a patient-based analysis, the model reached AUCs of 0.84 and 0.79 using a hard voting system and AUCs of 0.93 and 0.88, respectively, using a soft voting system. Notably, AI-identified LARC cases exhibited a significantly higher five-year local recurrence rate and a trend towards increased mortality across various thresholds. Furthermore, the model's capability to predict adverse clinical outcomes was superior to that of traditional assessments.
CONCLUSION: AI can precisely identify CRM-positive LARC cases from CT images, signaling an increased local recurrence and mortality rate. Our study presents a swifter and more reliable method for detecting LARC compared to traditional CT or MRI techniques.
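The hard- and soft-voting aggregation described above can be sketched as follows: per-image CRM-positive probabilities for one patient are combined either by majority vote over thresholded labels (hard) or by averaging the probabilities themselves (soft). The threshold and example values are illustrative, not the study's data.

```python
import numpy as np

def patient_level_vote(image_probs, threshold=0.5):
    """Aggregate per-image CRM-positive probabilities to a patient-level call.
    Hard voting: majority of thresholded per-image labels.
    Soft voting: mean probability across the patient's images."""
    probs = np.asarray(image_probs, dtype=float)
    hard = (probs > threshold).mean() > 0.5   # majority of binary votes
    soft_score = probs.mean()                 # averaged probability
    return bool(hard), soft_score

# Borderline patient: most slices weakly negative, two strongly positive
hard, soft = patient_level_vote([0.45, 0.40, 0.48, 0.95, 0.90])
print(hard, soft)
```

Here hard voting calls the patient negative (only 2 of 5 slices cross 0.5), while the soft score of 0.636 exceeds the threshold, which illustrates how soft voting can retain evidence from a few highly confident slices.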

12.
Bioengineering (Basel) ; 11(5)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38790288

ABSTRACT

An intensive care unit (ICU) is a special hospital ward for patients who require intensive care. It is equipped with many instruments monitoring patients' vital signs and is supported by medical staff. However, continuous monitoring demands a massive medical-care workload. To ease this burden, we aim to develop an automatic detection model that signals when brain anomalies occur. In this study, we focus on electroencephalography (EEG), which continuously monitors patients' brain electroactivity and is used mainly to diagnose brain malfunction. We propose a gated-recurrent-unit-based (GRU-based) model for detecting brain anomalies; it predicts whether a spike or sharp wave occurs within a short time window. Based on the banana montage setting, the proposed model exploits characteristics of multiple channels simultaneously to detect anomalies. It is trained, validated, and tested on separate EEG data and achieves more than 90% testing performance on sensitivity, specificity, and balanced accuracy. The proposed anomaly detection model precisely detects the presence of a spike or sharp wave and can notify the ICU medical staff, who can then provide immediate follow-up treatment, significantly reducing the medical workload in the ICU.

13.
Int J Biomed Imaging ; 2024: 6114826, 2024.
Article in English | MEDLINE | ID: mdl-38706878

ABSTRACT

A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in interpretation and classification leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI); however, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy standardized imaging classification model developed using AI and trained on MRI short-axis (SAX) view cine images to distinguish between HCM and Fabry disease. The model achieved impressive performance, with an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. A single-blinded study and external testing using data from Taichung Veterans General Hospital (TCVGH) further demonstrated the model's reliability and effectiveness, with an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.

14.
Int J Cardiol ; 402: 131851, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38360099

ABSTRACT

BACKGROUND: Based solely on pre-ablation characteristics, previous risk scores have demonstrated variable predictive performance. This study aimed to predict the recurrence of atrial fibrillation (AF) after catheter ablation by using artificial intelligence (AI)-enabled pre-ablation computed tomography (PVCT) images and pre-ablation clinical data. METHODS: A total of 638 drug-refractory paroxysmal AF patients who had undergone ablation were recruited. For model training, we used left atria (LA) acquired from pre-ablation PVCT slices (126,288 images). A total of 29 clinical variables were collected before ablation, including baseline characteristics, medical histories, laboratory results, transthoracic echocardiographic parameters, and 3D-reconstructed LA volumes. The I-Score was applied to select variables for model training. For the prediction of one-year AF recurrence, a PVCT deep-learning model and clinical-variable machine-learning models were developed, and machine learning was then applied to ensemble the PVCT and clinical-variable models. RESULTS: The PVCT model achieved an AUC of 0.63 in the test set. Various combinations of clinical variables selected by the I-Score yielded an AUC of 0.72, significantly better than using all variables or features selected by nonparametric statistics (AUCs of 0.66 to 0.69). The ensemble model (PVCT images and clinical variables) significantly improved predictive performance, up to an AUC of 0.76 (sensitivity of 86.7% and specificity of 51.0%). CONCLUSIONS: Before ablation, AI-enabled PVCT combined with I-Score features was applicable in predicting recurrence in paroxysmal AF patients. Based on all possible predictors, the I-Score is capable of identifying the most influential combination.


Subject(s)
Atrial Fibrillation , Catheter Ablation , Humans , Atrial Fibrillation/diagnostic imaging , Atrial Fibrillation/surgery , Artificial Intelligence , Treatment Outcome , Heart Atria/diagnostic imaging , Heart Atria/surgery , Catheter Ablation/methods , Recurrence , Predictive Value of Tests
15.
Genome Res ; 20(6): 826-36, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20445163

ABSTRACT

Gene expression is regulated both by cis elements, which are DNA segments closely linked to the genes they regulate, and by trans factors, which are usually proteins capable of diffusing to unlinked genes. Understanding the patterns and sources of regulatory variation is crucial for understanding phenotypic and genome evolution. Here, we measure genome-wide allele-specific expression by deep sequencing to investigate the patterns of cis and trans expression variation between two strains of Saccharomyces cerevisiae. We propose a statistical modeling framework based on the binomial distribution that simultaneously addresses normalization of read counts derived from different parents and estimating the cis and trans expression variation parameters. We find that expression polymorphism in yeast is common for both cis and trans, though trans variation is more common. Constraint in expression evolution is correlated with other hallmarks of constraint, including gene essentiality, number of protein interaction partners, and constraint in amino acid substitution, indicating that both cis and trans polymorphism are clearly under purifying selection, though trans variation appears to be more sensitive to selective constraint. Comparing interspecific expression divergence between S. cerevisiae and S. paradoxus to our intraspecific variation suggests a significant departure from a neutral model of molecular evolution. A further examination of correlation between polymorphism and divergence within each category suggests that cis divergence is more frequently mediated by positive Darwinian selection than is trans divergence.


Subject(s)
Gene Expression Regulation, Fungal , Saccharomyces cerevisiae/genetics , Selection, Genetic , DNA, Fungal/genetics , Evolution, Molecular , Genome, Fungal , Polymorphism, Single Nucleotide
16.
Sci Rep ; 13(1): 13582, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37604860

ABSTRACT

We demonstrate that isomorphically mapping gray-level medical image matrices onto energy spaces underlying the framework of the fast data density functional transform (fDDFT) can achieve unsupervised recognition of lesion morphology. By introducing the architecture of geometric deep learning and metrics of graph neural networks, the gridized density functionals of the fDDFT establish an unsupervised feature-aware mechanism with global convolutional kernels to extract the most likely lesion boundaries and produce lesion segmentation. An AutoEncoder-assisted module reduces the computational complexity from [Formula: see text] to [Formula: see text], thus efficiently speeding up the global convolutional operations. We validate the framework's performance on various open-access datasets and discuss its limitations. The inference time for each object in large three-dimensional datasets is 1.76 s on average. The proposed gridized density functionals have activation capability synergized with gradient ascent operations, and hence can be modularized and embedded in pipelines of modern deep neural networks. Algorithms for geometric stability and similarity convergence also raise the accuracy of unsupervised recognition and segmentation of lesion images. The performance meets the standard requirement for conventional deep neural networks: the median dice score is higher than 0.75. The experiments show that the synergy of the fDDFT and a naïve neural network reduces training and inference time by 58% and 51%, respectively, and the dice score rises to 0.9415. This advantage facilitates fast computational modeling in interdisciplinary applications and clinical investigation.
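The dice score used as the accuracy criterion above is straightforward to compute for binary segmentation masks; a minimal sketch (not the paper's implementation):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# One overlapping pixel out of two per mask -> dice = 0.5
score = dice_score([1, 1, 0, 0], [1, 0, 1, 0])
```

A dice score of 1.0 means the predicted and reference masks coincide exactly; the paper's 0.75 median threshold is a common acceptance bar for medical segmentation.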


Subject(s)
Brain Neoplasms , Deep Learning , Brain Neoplasms/classification , Brain Neoplasms/pathology , Humans , Neural Networks, Computer , Pattern Recognition, Automated , Diagnostic Imaging , Datasets as Topic
17.
J Chin Med Assoc ; 86(1): 122-130, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36306391

ABSTRACT

BACKGROUND: The World Health Organization reports that cardiovascular disease is the most common cause of death worldwide; on average, one person dies of heart disease every 26 minutes. Deep learning approaches are characterized by the appropriate combination of abnormal features learned from numerous annotated images. The constructed convolutional neural network (CNN) model can distinguish normal states from reversible and irreversible myocardial defects and alert physicians for further diagnosis. METHODS: Cadmium zinc telluride single-photon emission computed tomography myocardial perfusion resting-state images were collected at Chang Gung Memorial Hospital, Kaohsiung Medical Center, Kaohsiung, Taiwan, and were analyzed with a deep learning convolutional neural network to classify myocardial perfusion images for coronary heart disease. RESULTS: In these grey-scale images, the heart blood flow distribution was the most crucial feature. The You Only Look Once deep learning technique was used to locate the myocardial defect area and crop the images. After the surrounding noise had been eliminated, a three-dimensional CNN model was used to identify patients with coronary heart disease. The prediction area under the curve, accuracy, sensitivity, and specificity were 90.97%, 87.08%, 86.49%, and 87.41%, respectively. CONCLUSION: Our prototype system can considerably reduce the time required for image interpretation and improve the quality of medical care. It can assist clinical experts by offering accurate coronary heart disease diagnosis in practice.
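The reported accuracy, sensitivity, and specificity follow directly from a confusion matrix; a minimal sketch (the counts below are illustrative, not the study's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: healthy correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 200-patient validation set:
acc, sens, spec = classification_metrics(tp=90, fp=15, tn=85, fn=10)
```

For a screening system, sensitivity is usually the metric to protect, since a false negative means a missed coronary defect.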


Subject(s)
Coronary Artery Disease , Deep Learning , Myocardial Ischemia , Myocardial Perfusion Imaging , Humans , Myocardial Perfusion Imaging/methods , Tomography, Emission-Computed, Single-Photon/methods , Myocardial Ischemia/diagnostic imaging , Heart
18.
Comput Methods Programs Biomed ; 242: 107845, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37852147

ABSTRACT

BACKGROUND: Developing deep learning models for medical diagnosis requires collecting data from several medical institutions. Due to privacy regulations, it is infeasible to gather data from various medical institutions into a single institution for centralized learning. Federated Learning (FL) provides a feasible alternative: the deep learning model is trained jointly on data stored at the various institutions rather than collected together. However, the resulting FL models can be biased toward institutions with larger training datasets. METHODOLOGY: In this study, we propose Dynamically Synthetic Images for Federated Learning (DSIFL), a method that integrates the information of local institutions holding heterogeneous types of data. The core technique of DSIFL is a synthesis step that dynamically adjusts the number of synthetic images resembling the local data that the current model misclassifies. By including these synthetic images in training, the resulting global model can handle the diversity of the heterogeneous data held by local medical institutions. RESULTS: For performance evaluation, we focus on the accuracy on each client's dataset. In our experiments, the DSIFL model achieves higher accuracy than the conventional FL approach. CONCLUSION: We propose the DSIFL framework, which improves on the conventional FL approach. We conduct empirical studies with two kinds of medical images, comparing variants of the FL and DSIFL approaches. The performance of individual training serves as the baseline, while the performance of centralized learning serves as the target for comparison. The empirical findings suggest that DSIFL outperforms FL via its technique of dynamically synthesizing images during training.
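The size bias that DSIFL addresses arises from standard federated averaging, where client updates are weighted by local dataset size; a minimal FedAvg sketch (DSIFL's synthetic-image step is not shown, and the client sizes are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors weighted by
    local dataset size -- larger clients dominate the global model."""
    return np.average(np.stack(client_weights), axis=0,
                      weights=np.asarray(client_sizes, dtype=float))

# A 900-sample hospital pulls the global model 9x harder than a
# 100-sample hospital:
w_big, w_small = np.array([1.0, 1.0]), np.array([0.0, 0.0])
global_w = fedavg([w_big, w_small], client_sizes=[900, 100])  # -> [0.9, 0.9]
```

DSIFL's remedy, per the abstract, is to synthesize extra training images resembling the cases each small client's model gets wrong, rather than to change the aggregation rule itself.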


Subject(s)
Benchmarking , Privacy , Humans , Empirical Research
19.
J Xray Sci Technol ; 20(3): 339-49, 2012.
Article in English | MEDLINE | ID: mdl-22948355

ABSTRACT

Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or similar approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies with spherical targets are conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method yields a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare, via volume rendering, the shape and area of the cerebral cortex as segmented by the K-Means method and by the proposed method. The proposed method provides clearer and more detailed activity structures of FDG accumulation locations in the cerebral cortex than the K-Means method.
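Tanimoto's definition of similarity, used here to score segmentation accuracy against the spherical ground truth, is the ratio of the intersection to the union of two binary masks; a minimal sketch:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient: |A ∩ B| / |A ∪ B| for binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# One voxel shared, three voxels covered in total -> 1/3
sim = tanimoto([1, 1, 0], [1, 0, 1])
```

Identical masks give 1.0 and disjoint masks give 0.0, so higher values mean the segmented region tracks the target sphere more closely.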


Subject(s)
Algorithms , Cerebral Cortex/diagnostic imaging , Imaging, Three-Dimensional/methods , Normal Distribution , Positron-Emission Tomography/methods , Animals , Computer Simulation , Phantoms, Imaging , Rats
20.
JMIR Med Inform ; 10(11): e40878, 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36322109

ABSTRACT

BACKGROUND: In recent years, progress in and the growing availability of portable ultrasonic probes have made ultrasound (US) a useful tool for physicians when making a diagnosis. With the advent of machine learning and deep learning, a computer-aided diagnostic system for screening renal US abnormalities can assist general practitioners in the early detection of pediatric kidney diseases. OBJECTIVE: In this paper, we sought to evaluate the diagnostic performance of deep learning techniques in classifying kidney images as normal or abnormal. METHODS: We chose 330 normal and 1269 abnormal pediatric renal US images to establish a model for artificial intelligence. The abnormal images involved stones, cysts, hyperechogenicity, space-occupying lesions, and hydronephrosis. We preprocessed the original images for subsequent deep learning. Starting from the ResNet-50 pretrained model, we redefined the final connecting layers to classify the extracted features as abnormal or normal. The performance of the model was tested on a validation data set using the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and sensitivity. RESULTS: The deep learning model, 94 MB of parameters in size and based on ResNet-50, was built for classifying normal and abnormal images. The accuracy (%)/AUC for validation images of stones, cysts, hyperechogenicity, space-occupying lesions, and hydronephrosis were 93.2/0.973, 91.6/0.940, 89.9/0.940, 91.3/0.934, and 94.1/0.996, respectively. The accuracy of normal image classification in the validation data set was 90.1%. Overall accuracy (%)/AUC was 92.9/0.959. CONCLUSIONS: We established a useful computer-aided model for automatic classification of pediatric renal US images into normal and abnormal categories.
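The transfer-learning recipe described here, keeping a pretrained backbone fixed and retraining only the redefined final layers, can be illustrated with a toy stand-in: a logistic head fitted on fixed feature vectors (random data plays the role of ResNet-50 features; this is not the study's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "frozen backbone" output: in the paper these would be
# ResNet-50 penultimate-layer activations for each US image.
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)  # synthetic normal/abnormal labels

# Train only the new classification head (logistic regression by
# gradient descent); the "backbone" X is never updated.
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid output
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = (((X @ w + b) > 0) == y.astype(bool)).mean()
```

Retraining only the head keeps the parameter count to optimize small, which is why the approach works with the relatively modest image counts reported in the abstract.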
