1.
Digit Health ; 10: 20552076241281200, 2024.
Article in English | MEDLINE | ID: mdl-39372813

ABSTRACT

Background: Obtaining tachycardia electrocardiograms (ECGs) in patients with paroxysmal supraventricular tachycardia (PSVT) is often challenging. Sinus rhythm ECGs are of limited predictive value for PSVT types in patients without preexcitation. This study aimed to explore the classification of atrioventricular nodal reentry tachycardia (AVNRT) and concealed atrioventricular reentry tachycardia (AVRT) using sinus rhythm ECGs through deep learning. Methods: This retrospective study included patients diagnosed with either AVNRT or concealed AVRT, validated through electrophysiological studies. A modified ResNet-34 deep learning model, pre-trained on a public ECG database, was employed to classify sinus rhythm ECGs with underlying AVNRT or concealed AVRT. Various configurations were compared using ten-fold cross-validation on the training set, and the best-performing configuration was tested on the hold-out test set. Results: The study analyzed 833 patients with AVNRT and 346 with concealed AVRT. Among ECG features, the corrected QT intervals exhibited the highest area under the receiver operating characteristic curve (AUROC) of 0.602. The performance of the deep learning model significantly improved after pre-training, showing an AUROC of 0.726 compared to 0.668 without pre-training (p < 0.001). No significant difference was found in AUROC between 12-lead and precordial 6-lead ECGs (p = 0.265). On the test set, deep learning achieved modest performance in differentiating the two types of arrhythmias, with an AUROC of 0.708, an AUPRC of 0.875, an F1-score of 0.750, a sensitivity of 0.670, and a specificity of 0.649. Conclusion: The deep-learning classification of AVNRT and concealed AVRT using sinus rhythm ECGs is feasible, indicating potential for aiding in the non-invasive diagnosis of these arrhythmias.
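As an illustration of the AUROC metric reported above (0.602 for corrected QT intervals alone), the rank-based computation can be sketched in a few lines of NumPy; the QTc values and labels below are invented for the example, not taken from the study.

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: the probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative case."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count as half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# invented corrected-QT values (ms); 1 = AVNRT, 0 = concealed AVRT
qtc = [430, 450, 470, 410, 460, 455]
label = [0, 1, 1, 0, 1, 0]
print(round(auroc(qtc, label), 3))  # -> 0.889
```

An AUROC near 0.5 means a single feature barely separates the two arrhythmias, which is why the abstract describes 0.602 as limited and the deep model's 0.708 as modest.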

2.
Comput Biol Med ; 182: 109088, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39353296

ABSTRACT

Feature attribution methods can visually highlight specific input regions containing influential aspects affecting a deep learning model's prediction. Recently, the use of feature attribution methods in electrocardiogram (ECG) classification has been sharply increasing, as they assist clinicians in understanding the model's decision-making process and assessing the model's reliability. However, a careful study to identify suitable methods for ECG datasets has been lacking, leading researchers to select methods without a thorough understanding of their appropriateness. In this work, we conduct a large-scale assessment by considering eleven popular feature attribution methods across five large ECG datasets using a model based on the ResNet-18 architecture. Our experiments include both automatic evaluations and human evaluations. Annotated datasets were utilized for automatic evaluations and three cardiac experts were involved for human evaluations. We found that Guided Grad-CAM, particularly when its absolute values are utilized, achieves the best performance. When Guided Grad-CAM was utilized as the feature attribution method, cardiac experts confirmed that it can identify diagnostically relevant electrophysiological characteristics, although its effectiveness varied across the 17 different diagnoses that we have investigated.
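The Grad-CAM step underlying Guided Grad-CAM (gradient-weighted channel sums over the last convolutional feature maps) can be sketched as follows for 1-D signals; the absolute-value variant mirrors the finding above. The toy activations stand in for a real ResNet-18's feature maps, and the guided-backpropagation factor that Guided Grad-CAM multiplies in element-wise is omitted.

```python
import numpy as np

def grad_cam(activations, gradients, use_abs=True):
    """Grad-CAM over 1-D (ECG-style) feature maps.

    activations: (channels, length) feature maps from the last conv block.
    gradients:   (channels, length) gradient of the target logit w.r.t. them.
    """
    weights = gradients.mean(axis=1)                  # pooled gradient weights
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum
    cam = np.abs(cam) if use_abs else np.maximum(cam, 0.0)
    return cam / (cam.max() + 1e-8)                   # normalize to [0, 1]

rng = np.random.default_rng(0)
acts, grads = rng.standard_normal((2, 4, 16))  # 4 channels, 16 time samples
heat = grad_cam(acts, grads)                   # one importance value per sample
print(heat.shape)
```

The resulting heatmap has one value per time sample and can be overlaid on the ECG trace to show which segments drove the prediction.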

3.
Neural Netw ; 179: 106584, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39142174

ABSTRACT

Contrastive learning has emerged as a cornerstone in unsupervised representation learning. Its primary paradigm involves an instance discrimination task utilizing InfoNCE loss where the loss has been proven to be a form of mutual information. Consequently, it has become a common practice to analyze contrastive learning using mutual information as a measure. Yet, this analysis approach presents difficulties due to the necessity of estimating mutual information for real-world applications. This creates a gap between the elegance of its mathematical foundation and the complexity of its estimation, thereby hampering the ability to derive solid and meaningful insights from mutual information analysis. In this study, we introduce three novel methods and a few related theorems, aimed at enhancing the rigor of mutual information analysis. Despite their simplicity, these methods can carry substantial utility. Leveraging these approaches, we reassess three instances of contrastive learning analysis, illustrating the capacity of the proposed methods to facilitate deeper comprehension or to rectify pre-existing misconceptions. The main results can be summarized as follows: (1) While small batch sizes influence the range of training loss, they do not inherently limit learned representation's information content or affect downstream performance adversely; (2) Mutual information, with careful selection of positive pairings and post-training estimation, proves to be a superior measure for evaluating practical networks; and (3) Distinguishing between task-relevant and irrelevant information presents challenges, yet irrelevant information sources do not necessarily compromise the generalization of downstream tasks.
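For reference, the InfoNCE objective discussed above can be written in a few lines of NumPy; the lower-bound relation I(z1; z2) >= log N - loss is what ties the loss range to the batch size N, as in result (1). The embeddings below are synthetic.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE over a batch of N pairs: row i of z1 is the positive of row i
    of z2, all other rows are negatives. Satisfies I(z1; z2) >= log N - loss."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stabilization
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives lie on the diagonal

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 32))                 # batch of N = 8 embeddings
loss = info_nce(z, z + 0.05 * rng.standard_normal((8, 32)))
print(round(float(loss), 4))  # small: positives are nearly identical pairs
```

Because the loss at random initialization sits near log N, a small batch caps how large the bound log N - loss can get, even though, per result (1), this does not by itself limit the information content of the learned representations.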


Subjects
Neural Networks, Computer , Humans , Algorithms , Learning/physiology , Unsupervised Machine Learning
4.
Korean Circ J ; 53(10): 677-689, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37653713

ABSTRACT

BACKGROUND AND OBJECTIVES: There is limited evidence regarding machine-learning prediction of the recurrence of atrial fibrillation (AF) after electrical cardioversion (ECV). This study aimed to predict the recurrence of AF after ECV using machine learning of clinical features and electrocardiograms (ECGs) in persistent AF patients. METHODS: We analyzed patients who underwent successful ECV for persistent AF. Machine learning was designed to predict patients with 1-month recurrence. Individual 12-lead ECGs were collected before and after ECV. Various clinical features were collected and used to train an extreme gradient boosting (XGBoost)-based model. Ten-fold cross-validation was used to evaluate the performance of the model. The performance was compared to the C-statistics of the selected clinical features. RESULTS: Among 718 patients (mean age 63.5±9.3 years, men 78.8%), AF recurred in 435 (60.6%) patients after 1 month. With the XGBoost-based model, the areas under the receiver operating characteristic curves (AUROCs) were 0.57, 0.60, and 0.63 if the model was trained by clinical features, ECGs, and both (the final model), respectively. For the final model, the sensitivity, specificity, and F1-score were 84.7%, 28.2%, and 0.73, respectively. Although the AF duration showed the best predictive performance (AUROC, 0.58) among the clinical features, it was significantly lower than that of the final machine-learning model (p<0.001). Additional training of extended monitoring data of 15-minute single-lead ECG and photoplethysmography in available patients (n=261) did not significantly improve the model's performance. CONCLUSIONS: Machine learning showed modest performance in predicting AF recurrence after ECV in persistent AF patients, warranting further validation studies.
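The ten-fold cross-validation used for model evaluation amounts to partitioning the cohort into ten shuffled folds; a minimal index-generation sketch is below (the XGBoost training itself is omitted, and 718 simply matches the cohort size above).

```python
import numpy as np

def k_fold_indices(n, k=10, seed=0):
    """Shuffled k-fold train/validation index pairs for cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

splits = list(k_fold_indices(718, k=10))  # 718 patients, as in the cohort
print(len(splits), len(splits[0][1]))     # -> 10 72
```

Each patient appears in exactly one validation fold, so the ten validation AUROCs average into the reported cross-validated estimate.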

5.
Neural Netw ; 161: 165-177, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36745941

ABSTRACT

Low-rank compression of a neural network is one of the popular compression techniques, and it is known to pose two main challenges. The first is determining the optimal rank of all the layers, and the second is training the neural network into a compression-friendly form. To overcome these two challenges, we propose BSR (Beam-search and Stable Rank), a low-rank compression algorithm that embodies an efficient rank-selection method and a unique compression-friendly training method. For rank selection, BSR employs a modified beam search that can jointly optimize the rank allocations over all the layers, in contrast to the previously used heuristic methods. For compression-friendly training, BSR adopts a regularization loss derived from a modified stable rank, which can control the rank while incurring almost no harm to performance. Experimental results confirm that BSR is effective and superior when compared to the existing low-rank compression methods. For CIFAR10 on ResNet56, BSR not only achieves compression but also improves on the baseline model's performance for compression ratios of up to 0.82. For CIFAR100 on ResNet56 and ImageNet on AlexNet, BSR outperforms the previous SOTA method, LC, by 4.7% and 6.7% on average, respectively. BSR is also effective for EfficientNet-B0 and MobileNetV2, which are known for their efficient design in terms of parameters and computational cost. We also show that BSR provides competitive performance when compared with the recent pruning compression algorithms. As with pruning, BSR can easily be combined with quantization for additional compression.
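The (unmodified) stable rank on which BSR's regularizer is based can be computed directly from a weight matrix's singular values; the paper's specific modification is not given here, so this sketch shows only the standard quantity.

```python
import numpy as np

def stable_rank(W):
    """Stable rank ||W||_F^2 / ||W||_2^2: a smooth, perturbation-robust
    surrogate for matrix rank; it never exceeds rank(W)."""
    s = np.linalg.svd(W, compute_uv=False)
    return float((s ** 2).sum() / s[0] ** 2)

# an exactly rank-2 matrix: its stable rank must be <= 2
W = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0]) \
    + np.outer([0.0, 1.0, 0.0], [0.0, 1.0, 0.0])
print(round(stable_rank(W), 3))
```

Because it is differentiable in the singular values, a stable-rank penalty can steer a layer toward effectively low rank during training without the hard, non-differentiable jumps of the exact rank.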


Subjects
Data Compression , Data Compression/methods , Algorithms , Neural Networks, Computer
6.
Comput Educ ; 163: 104041, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33046948

ABSTRACT

Despite the potential of learning analytics for personalized learning, it is seldom used to support collaborative learning particularly in face-to-face (F2F) learning contexts. This study uses learning analytics to develop a dashboard system that provides adaptive support for F2F collaborative argumentation (FCA). This study developed two dashboards for students and instructors, which enabled students to monitor their FCA process through adaptive feedback and helped the instructor provide adaptive support at the right time. The effectiveness of the dashboards was examined in a university class with 88 students (56 females, 32 males) for 4 weeks. The dashboards significantly improved the FCA process and outcomes, encouraging students to actively participate in FCA and create high-quality arguments. Students had a positive attitude toward the dashboard and perceived it as useful and easy to use. These findings indicate the usefulness of learning analytics dashboards in improving collaborative learning through adaptive feedback and support. Suggestions are provided on how to design dashboards for adaptive support in F2F learning contexts using learning analytics.

7.
Sci Data ; 6(1): 193, 2019 10 08.
Article in English | MEDLINE | ID: mdl-31594953

ABSTRACT

Advanced metering infrastructure (AMI) has been gradually replacing conventional meters because newer models can acquire more informative energy consumption data. The additional information has enabled significant advances in many fields, including energy disaggregation, energy consumption pattern analysis and prediction, demand response, and user segmentation. However, the quality of AMI data varies significantly across publicly available datasets, and low sampling rates and small numbers of monitored houses seriously limit practical analyses. To address these challenges, we herein present the ENERTALK dataset, which contains both aggregate and per-appliance measurements sampled at 15 Hz from 22 houses. Among the publicly available datasets with both aggregate and per-appliance measurements, 15 Hz is the highest sampling rate. The number of houses (22) is the second-largest; the dataset with the largest number of houses has a sampling rate of only 1 Hz. The ENERTALK dataset is also the first Korean open dataset on residential electricity consumption.
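A basic consistency check one might run on any dataset with both aggregate and per-appliance channels is that the mains reading equals the submeter sum plus an unmetered residual. The sketch below uses simulated data; ENERTALK's actual file format and appliance set are not assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
seconds, hz = 60, 15
t = seconds * hz                                   # 900 samples at 15 Hz
# three simulated appliance channels (W) plus an always-on unmetered load
appliances = np.abs(rng.standard_normal((3, t))) * np.array([[60.0], [150.0], [8.0]])
residual = 20.0 + 2.0 * rng.standard_normal(t)
aggregate = appliances.sum(axis=0) + residual      # the mains channel

# the aggregate should match the submeter sum up to the unmetered residual
unexplained = aggregate - appliances.sum(axis=0)
print(t, round(float(unexplained.mean()), 1))      # ~20 W of unmetered load
```

In energy disaggregation, the size and structure of this unexplained remainder is exactly what a NILM model must attribute to unmetered appliances.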

8.
Sensors (Basel) ; 18(5)2018 May 17.
Article in English | MEDLINE | ID: mdl-29772823

ABSTRACT

In this paper, we provide findings from an energy saving experiment in a university building, where an IoT platform with 1 Hz sampling sensors was deployed to collect electric power consumption data. The experiment was a reward setup with daily feedback delivered by an energy delegate for one week, and energy saving of 25.4% was achieved during the experiment. Post-experiment sustainability, defined as 10% or more of energy saving, was also accomplished for 44 days without any further intervention efforts. The saving was possible mainly because of the data-driven intervention designs with high-resolution data in terms of sampling frequency and number of sensors, and the high-resolution data turned out to be pivotal for an effective waste behavior investigation. While the quantitative result was encouraging, we also noticed many uncontrollable factors, such as exams, papers due, office allocation shuffling, graduation, and new-comers, that affected the result in the campus environment. To confirm that the quantitative result was due to behavior changes, rather than uncontrollable factors, we developed several data-driven behavior detection measures. With these measures, it was possible to analyze behavioral changes, as opposed to simply analyzing quantitative fluctuations. Overall, we conclude that the space-time resolution of data can be crucial for energy saving, and potentially for many other data-driven energy applications.

9.
Arch Biochem Biophys ; 645: 42-49, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29427590

ABSTRACT

Nanoceria were synthesized by discharging plasma at 800 V with a frequency of 30 kHz for 0-25 min, using a pulsed unipolar power supply, into solutions containing 1 or 2 mM of Ce(NO3)2. UV-Vis spectroscopy showed characteristic absorbance maxima at 304-320 nm for the nanoceria, with the intensity of the peaks increasing as the concentration of Ce(NO3)2 increased. The peaks exhibited a red shift due to nanoceria formation. High-resolution transmission electron microscopy revealed that spherical nanoparticles with an average size of 7.0 ± 0.2 nm were formed by discharging plasma for 15 min. The nanoceria showed excellent pH-dependent antioxidant properties in hydroxyl and superoxide anion radical scavenging assays. The effect of the nanoceria on cell viability in vitro and the inhibition of reactive oxygen species (ROS) by the nanoceria were examined using HeLa cell lines. As a result, no toxic effect was found up to 1600 µg mL-1 of nanoceria, and they had an effective antioxidant property. Therefore, the nanoceria synthesized by a one-step solution plasma process without hazardous chemicals have potential for use as antioxidant biomaterials and for sustained release to scavenge ROS in modern medicine.


Subjects
Antioxidants/chemical synthesis , Antioxidants/pharmacology , Biocompatible Materials/chemical synthesis , Biocompatible Materials/pharmacology , Cerium/chemistry , Cerium/pharmacology , Plasma Gases/chemistry , Antioxidants/chemistry , Biocompatible Materials/chemistry , Synthetic Chemistry Techniques , HeLa Cells , Humans , Oxidative Stress/drug effects , Reactive Oxygen Species/metabolism , Solutions
10.
PLoS One ; 12(7): e0180735, 2017.
Article in English | MEDLINE | ID: mdl-28678880

ABSTRACT

Internet-connected devices, especially mobile devices such as smartphones, have become widely accessible in the past decade. Interaction with such devices has evolved into frequent and short-duration usage, and this phenomenon has resulted in a pervasive popularity of casual games in the game sector. On the other hand, development of casual games has become easier than ever as a result of the advancement of development tools. With the resulting fierce competition, now both acquisition and retention of users are the prime concerns in the field. In this study, we focus on churn prediction of mobile and online casual games. While churn prediction and analysis can provide important insights and action cues on retention, its application using play log data has been primitive or very limited in the casual game area. Most of the existing methods cannot be applied to casual games because casual game players tend to churn very quickly and they do not pay periodic subscription fees. Therefore, we focus on the new players and formally define churn using observation period (OP) and churn prediction period (CP). Using the definition, we develop a standard churn analysis process for casual games. We cover essential topics such as pre-processing of raw data, feature engineering including feature analysis, churn prediction modeling using traditional machine learning algorithms (logistic regression, gradient boosting, and random forests) and two deep learning algorithms (CNN and LSTM), and sensitivity analysis for OP and CP. Play log data of three different casual games are considered by analyzing a total of 193,443 unique player records and 10,874,958 play log records. While the analysis results provide useful insights, the overall results indicate that a small number of well-chosen features used as performance metrics might be sufficient for making important action decisions and that OP and CP should be properly chosen depending on the analysis goal.
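The OP/CP churn definition described above can be made concrete with a small sketch: a new player is observed for the observation period (OP) and labeled churned if no play occurs during the subsequent churn prediction period (CP). The dates and window lengths below are illustrative, not the paper's settings.

```python
from datetime import date, timedelta

def is_churned(play_dates, install, op_days, cp_days):
    """Churn per the OP/CP definition: watch a new player for op_days after
    install (the observation period), then label them churned if they never
    play during the following cp_days (the churn prediction period)."""
    cp_start = install + timedelta(days=op_days)
    cp_end = cp_start + timedelta(days=cp_days)
    return not any(cp_start <= d < cp_end for d in play_dates)

install = date(2024, 1, 1)
logs = [date(2024, 1, 2), date(2024, 1, 5)]  # active only inside the OP
print(is_churned(logs, install, op_days=7, cp_days=14))  # -> True
```

Features for the churn model are then computed only from activity inside the OP, which is why the paper treats the choice of OP and CP lengths as a sensitivity-analysis question.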


Subjects
Internet , Video Games , Cues , Humans , Problem Solving
11.
Accid Anal Prev ; 75: 1-15, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25460086

ABSTRACT

Transportation continues to be an integral part of modern life, and the importance of road traffic safety cannot be overstated. Consequently, recent road traffic safety studies have focused on analysis of risk factors that impact fatality and injury level (severity) of traffic accidents. While some of the risk factors, such as drug use and drinking, are widely known to affect severity, an accurate modeling of their influences is still an open research topic. Furthermore, there are innumerable risk factors that are waiting to be discovered or analyzed. A promising approach is to investigate historical traffic accident data that have been collected in the past decades. This study inspects traffic accident reports that have been accumulated by the California Highway Patrol (CHP) since 1973 for which each accident report contains around 100 data fields. Among them, we investigate 25 fields between 2004 and 2010 that are most relevant to car accidents. Using two classification methods, the Naive Bayes classifier and the decision tree classifier, the relative importance of the data fields, i.e., risk factors, is revealed with respect to the resulting severity level. Performances of the classifiers are compared to each other and a binary logistic regression model is used as the basis for the comparisons. Some of the high-ranking risk factors are found to be strongly dependent on each other, and their incremental gains on estimating or modeling severity level are evaluated quantitatively. The analysis shows that only a handful of the risk factors in the data dominate the severity level and that dependency among the top risk factors is an imperative trait to consider for an accurate analysis.
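A minimal categorical Naive Bayes of the kind applied to the CHP accident fields can be sketched as follows; the two toy risk factors and the records are invented for illustration and do not reflect the actual CHP data fields.

```python
import numpy as np
from collections import Counter, defaultdict

class CategoricalNB:
    """Minimal Naive Bayes for categorical accident fields, with Laplace smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: np.log(y.count(c) / len(y)) for c in self.classes}
        self.counts = defaultdict(Counter)  # (class, field index) -> value counts
        for row, c in zip(X, y):
            for j, v in enumerate(row):
                self.counts[(c, j)][v] += 1
        return self

    def predict(self, row):
        def log_posterior(c):
            lp = self.priors[c]
            for j, v in enumerate(row):
                cnt = self.counts[(c, j)]
                n_c, vocab = sum(cnt.values()), len(set(cnt) | {v})
                lp += np.log((cnt[v] + 1) / (n_c + vocab))  # Laplace smoothing
            return lp
        return max(self.classes, key=log_posterior)

# toy records: (alcohol involved, lighting) -> severity; values are invented
X = [("yes", "dark"), ("yes", "dark"), ("no", "day"),
     ("no", "day"), ("no", "dark"), ("yes", "day")]
y = ["severe", "severe", "minor", "minor", "minor", "severe"]
nb = CategoricalNB().fit(X, y)
print(nb.predict(("yes", "dark")))  # -> severe
```

The conditional-independence assumption is precisely what the study probes: when top risk factors are strongly dependent on each other, their incremental gains shrink, which matches the paper's finding that a handful of factors dominate severity.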


Subjects
Accidents, Traffic/classification , Accidents, Traffic/statistics & numerical data , Algorithms , Safety , Accidents, Traffic/mortality , Bayes Theorem , California , Decision Trees , Humans , Logistic Models , ROC Curve , Risk Factors