1.
Cureus ; 13(8): e17520, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34603890

ABSTRACT

Objectives: The primary goal of this study is to evaluate the in-hospital mortality rate among inpatient recipients of multivessel percutaneous coronary intervention (MVPCI) and to identify the demographic risk factors and medical complications that increase the risk of in-hospital mortality. Methods: We conducted a cross-sectional study using the Nationwide Inpatient Sample (NIS, 2016), including 127,145 inpatients who received MVPCI as a primary procedure in United States hospitals. We used a multivariable logistic regression model, adjusted for demographic confounders, to measure the odds ratio (OR) of association between medical complications and in-hospital mortality risk in MVPCI recipients. Results: The in-hospital mortality rate in MVPCI recipients was 2%, seen predominantly in older adults (>64 years, 74%) and males (61%). Although the prevalence of mortality among females was comparatively low, in the regression model they were at higher risk of in-hospital mortality than males (OR 1.2; 95% CI 1.13-1.37). Across ethnicities, in-hospital mortality was most prevalent in whites (79%), followed by blacks (9%) and Hispanics (7.5%). Patients who developed cardiogenic shock had the highest odds of in-hospital mortality (OR 9.2; 95% CI 8.27-10.24), followed by respiratory failure (OR 5.9; 95% CI 5.39-6.64) and ventricular fibrillation (OR 3.5; 95% CI 3.18-3.92). Conclusion: The accelerated use of MVPCI makes it important to study in-hospital mortality risk factors, allowing strategies to be devised to improve utilization and the quality of life of at-risk patients. Despite its effectiveness and comparatively low mortality profile, aggressive use of MVPCI is restricted by periprocedural complications and the morbidity profile of the patients.
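As a minimal sketch of the kind of analysis described (a multivariable logistic regression whose exponentiated coefficients are adjusted odds ratios), the following fits a logistic model by Newton-Raphson on synthetic data. The variable names and effect sizes are hypothetical, not taken from the NIS dataset; only the OR-from-coefficient relationship mirrors the study's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Hypothetical binary predictors (0/1), loosely echoing the study's covariates.
age_over_64 = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
shock = rng.integers(0, 2, n)          # cardiogenic shock indicator
X = np.column_stack([np.ones(n), age_over_64, female, shock]).astype(float)

# Synthetic ground truth: shock coefficient 2.2 implies OR = exp(2.2) ~ 9.
true_beta = np.array([-4.0, 0.6, 0.2, 2.2])
p = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)  # in-hospital death indicator

# Fit logistic regression by Newton-Raphson (IRLS).
beta = np.zeros(4)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p_hat)                       # score vector
    H = (X * (p_hat * (1 - p_hat))[:, None]).T @ X  # Fisher information
    beta = beta + np.linalg.solve(H, grad)

odds_ratios = np.exp(beta)  # OR for each covariate = exp(coefficient)
```

The exponentiated shock coefficient recovers an adjusted OR near the simulated value of 9, illustrating how figures like "OR 9.2" arise from such a model.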

2.
Neural Netw ; 143: 489-499, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34280608

ABSTRACT

Recognition of ancient Korean-Chinese cursive characters (Hanja) is challenging, mainly because of the large number of classes, damaged cursive characters, varied handwriting styles, and similar, easily confused characters. The task also suffers from a lack of training data and from class imbalance. To address these problems, we propose a unified Regularized Low-shot Attention Transfer with Imbalance τ-Normalizing (RELATIN) framework. It handles instance-poor classes with a novel low-shot regularizer that encourages the norms of the weight vectors of classes with few samples to align with those of many-shot classes. To overcome class imbalance, we incorporate into the proposed low-shot regularizer framework a decoupled classifier that rectifies the decision boundaries via classifier weight-scaling. To address the limited training data, the framework performs Jensen-Shannon divergence based data augmentation and incorporates an attention module that aligns the most attentive features of a pretrained network with those of the target network. We verify the proposed RELATIN framework on highly imbalanced ancient cursive handwritten character datasets. The results suggest that (i) extreme class imbalance has a detrimental effect on classification performance; (ii) the proposed low-shot regularizer aligns the classifier norms in favor of classes with few samples; (iii) weight-scaling of the decoupled classifier to address class imbalance outperformed all other baseline conditions; (iv) adding the attention module further selects more representative feature maps from the base pretrained model; and (v) the proposed RELATIN framework yields superior representations for addressing extreme class imbalance.
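The τ-normalized classifier weight-scaling the abstract refers to is a standard long-tailed recognition recipe: each class's weight vector is divided by its norm raised to a power τ, shrinking the norm advantage of many-shot classes. A minimal numpy sketch, with synthetic weights (the dimensions and τ value are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, dim, tau = 5, 16, 0.7

# Synthetic classifier weight matrix; pretend class 0 is a many-shot
# class whose weight vector has grown a much larger norm.
W = rng.normal(size=(num_classes, dim))
W[0] *= 5.0

# tau-normalization: w_i / ||w_i||^tau.
# tau = 0 leaves weights unchanged; tau = 1 fully normalizes them.
norms = np.linalg.norm(W, axis=1, keepdims=True)
W_tnorm = W / norms ** tau

before = np.linalg.norm(W, axis=1)       # per-class norms before scaling
after = np.linalg.norm(W_tnorm, axis=1)  # per-class norms after scaling
```

After scaling, each norm becomes the original norm raised to (1 - τ), so the ratio between the largest and smallest class norm shrinks, which is exactly the decision-boundary rectification effect the decoupled classifier exploits.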


Subject(s)
Attention , Recognition, Psychology
3.
Neural Netw ; 118: 208-219, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31299625

ABSTRACT

Multimodal emotion understanding enables AI systems to interpret human emotions. With the rapid growth of video, emotion understanding remains challenging because of inherent data ambiguity and the diversity of video content. Although deep learning has made considerable progress in big-data feature learning, deep models are deterministic and are used in a "black-box" manner, lacking the capability to represent the inherent ambiguities in the data. Since the possibility theory of fuzzy logic focuses on knowledge representation and reasoning under uncertainty, we incorporate concepts from fuzzy logic into a deep learning framework. This paper presents a novel convolutional neuro-fuzzy network, an integration of convolutional neural networks in the fuzzy logic domain, to extract high-level emotion features from text, audio, and visual modalities. The feature sets extracted by the fuzzy convolutional layers are compared with those of convolutional neural networks at the same level using t-distributed Stochastic Neighbor Embedding. The paper demonstrates a multimodal emotion understanding framework with an adaptive neural fuzzy inference system that can generate new rules to classify emotions. For emotion understanding of movie clips, we concatenate the audio, visual, and text features extracted by the proposed convolutional neuro-fuzzy network to train the adaptive neural fuzzy inference system. We go one step further and explain how the deep model arrives at its conclusions, a step toward interpretable AI: to identify which visual, text, and audio aspects are important for emotion understanding, we use a direct linear non-Gaussian additive model to explain relevance in terms of causal relationships between features of deep hidden layers. The critical features thus extracted are fed into the proposed multimodal framework to achieve higher accuracy.
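The fuzzification step underlying any neuro-fuzzy layer maps crisp feature values to degrees of membership in fuzzy sets. A minimal sketch using Gaussian membership functions (the centers, widths, and set labels are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def gaussian_membership(x, center, sigma):
    """Degree in [0, 1] to which x belongs to the fuzzy set at `center`."""
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Crisp feature values, e.g. normalized activations from a conv layer.
x = np.linspace(-1.0, 1.0, 5)

# Three hypothetical fuzzy sets: "low", "medium", "high".
centers = np.array([-1.0, 0.0, 1.0])

# Membership of every value in every fuzzy set: shape (5, 3).
mu = gaussian_membership(x[:, None], centers[None, :], sigma=0.5)
```

Downstream, a fuzzy inference system combines such membership degrees through rules (e.g. min/product t-norms) before defuzzifying to a class decision; a convolutional neuro-fuzzy layer applies the same idea to convolutional feature maps.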


Subject(s)
Deep Learning , Emotions , Fuzzy Logic , Motion Pictures , Neural Networks, Computer , Algorithms , Deep Learning/classification , Emotions/physiology , Humans , Motion Pictures/classification , Photic Stimulation/methods
4.
J Neurosci Methods ; 203(1): 163-72, 2012 Jan 15.
Article in English | MEDLINE | ID: mdl-21911006

ABSTRACT

The amplitude of the EEG µ-rhythm is large when a subject neither performs nor imagines movement, and attenuates when the subject either performs or imagines movement. Knowledge of the EEG's individual frequency components in the time domain provides useful insight into the classification process, and identification of a subject-specific reactive band is crucial for accurate event classification in brain-computer interfaces (BCI). This work develops a simple time-frequency decomposition method for the EEG µ-rhythm by adaptive modeling. Based on the time-domain decomposition of the signal, a subject-specific reactive band identification method is proposed. A study is conducted on 30 subjects for optimal band selection across four movement classes. Our results show that over 93% of the subjects have an optimal band, and that selecting this band improves the relative power spectral density by 200% with respect to normalized power.
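The core idea of reactive-band selection can be sketched without the paper's adaptive modeling: compare the power spectrum of a "rest" segment with a "movement imagery" segment and keep the band where the µ-rhythm attenuates most. The signals, sampling rate, and 2 Hz candidate bands below are synthetic illustrations, not the study's data or exact procedure.

```python
import numpy as np

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)                # 4 s of signal
rng = np.random.default_rng(2)

# Synthetic mu-rhythm at 10 Hz: strong at rest, attenuated during imagery.
rest = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)
imagery = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Summed periodogram power in [lo, hi) Hz (unnormalized, for ratios)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs < hi)].sum()

# Scan 2 Hz-wide candidate bands and keep the one with the largest
# relative power drop (event-related desynchronization).
bands = [(lo, lo + 2) for lo in range(6, 14)]
erd = [(band_power(rest, fs, lo, hi) - band_power(imagery, fs, lo, hi))
       / band_power(rest, fs, lo, hi) for lo, hi in bands]
reactive_band = bands[int(np.argmax(erd))]
```

On this synthetic signal the selected band brackets the 10 Hz µ-rhythm; a subject-specific method replaces the fixed grid with bands derived from each subject's own spectral decomposition.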


Subject(s)
Brain/physiology , Electroencephalography , Models, Neurological , Models, Theoretical , User-Computer Interface , Adult , Algorithms , Female , Humans , Male , Young Adult