Results 1 - 9 of 9
1.
Int J Heart Fail ; 6(1): 11-19, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38303917

ABSTRACT

The prevalence of heart failure (HF) is increasing, necessitating accurate diagnosis and tailored treatment. The accumulation of clinical information from patients with HF generates big data, which poses challenges for traditional analytical methods. To address this, big data approaches and artificial intelligence (AI) have been developed that can effectively predict future observations and outcomes, enabling precise diagnoses and personalized treatments of patients with HF. Machine learning (ML) is a subfield of AI that allows computers to analyze data, find patterns, and make predictions without explicit instructions. ML can be supervised, unsupervised, or semi-supervised. Deep learning is a branch of ML that uses artificial neural networks with multiple layers to find complex patterns. These AI technologies have shown significant potential in various aspects of HF research, including diagnosis, outcome prediction, classification of HF phenotypes, and optimization of treatment strategies. In addition, integrating multiple data sources, such as electrocardiography, electronic health records, and imaging data, can enhance the diagnostic accuracy of AI algorithms. Currently, wearable devices and remote monitoring aided by AI enable the earlier detection of HF and improved patient care. This review focuses on the rationale behind utilizing AI in HF and explores its various applications.

2.
Front Cardiovasc Med ; 10: 1130216, 2023.
Article in English | MEDLINE | ID: mdl-37324622

ABSTRACT

Background: Because of the short half-life of non-vitamin K antagonist oral anticoagulants (NOACs), consistent drug adherence is crucial to maintain the effect of anticoagulants for stroke prevention in atrial fibrillation (AF). Considering the low adherence to NOACs in practice, we developed a mobile health platform that provides an alert for drug intake, visual confirmation of drug administration, and a list of medication intake history. This study aims to evaluate whether this smartphone app-based intervention will increase drug adherence compared with usual care in patients with AF requiring NOACs in a large population. Methods: This prospective, randomized, open-label, multicenter trial (RIVOX-AF study) will include a total of 1,042 patients (521 patients in the intervention group and 521 patients in the control group) from 13 tertiary hospitals in South Korea. Patients with AF aged ≥19 years with one or more comorbidities, including heart failure, myocardial infarction, stable angina, hypertension, or diabetes mellitus, will be included in this study. Participants will be randomly assigned to either the intervention group (MEDI-app) or the conventional treatment group in a 1:1 ratio using a web-based randomization service. The intervention group will use a smartphone app that includes an alarm for drug intake, visual confirmation of drug administration through a camera check, and presentation of a list of medication intake history. The primary endpoint is adherence to rivaroxaban by pill count measurements at 12 and 24 weeks. The key secondary endpoints are clinical composite endpoints, including systemic embolic events, stroke, major bleeding requiring transfusion or hospitalization, or death during the 24 weeks of follow-up. Discussion: This randomized controlled trial will investigate the feasibility and efficacy of smartphone apps and mobile health platforms in improving adherence to NOACs. 
Trial registration: The study design has been registered at ClinicalTrials.gov (NCT05557123).
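The primary endpoint above is adherence measured by pill count. The abstract does not spell out the formula, so the sketch below illustrates the common convention of pills taken divided by pills expected over the interval (the function name and signature are illustrative assumptions):

```python
def pill_count_adherence(pills_dispensed, pills_returned, days_elapsed, doses_per_day=1):
    """Percent adherence estimated by pill count: pills taken / pills expected."""
    pills_taken = pills_dispensed - pills_returned
    pills_expected = days_elapsed * doses_per_day
    return 100.0 * pills_taken / pills_expected
```

For example, a patient dispensed 90 tablets who returns 6 after 84 days of once-daily dosing would count as 100% adherent under this convention.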

3.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177574

ABSTRACT

Multimodal emotion recognition has gained much traction in the fields of affective computing, human-computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is growing demand to automate the analysis of user emotion for HCI, AI, and UX evaluation applications that provide affective services. Emotion cues are increasingly obtained from video, audio, text, or physiological signals, which has led to processing emotions from multiple modalities, usually combined through ensemble-based systems with static weights. Because of limitations such as missing modality data, inter-class variations, and intra-class similarities, an effective weighting scheme is required to improve discrimination between modalities. This article accounts for the differing importance of the modalities and assigns dynamic weights to them through a more efficient combination process based on generalized mixture (GM) functions. We therefore present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces multimodal feature-level and decision-level fusion using GM functions. In an experimental study, we evaluated the ability of the proposed framework to model four emotional states (Happiness, Neutral, Sadness, and Anger) and found that most of them can be modeled well with high accuracy using GM functions. The experiments show that the proposed framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. The overall evaluation results indicate that we can identify emotional states with high accuracy and increase the robustness of the emotion classification system required for UX measurement.
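The key idea of GM functions here is that modality weights depend on the inputs rather than being fixed. The paper's exact GM family is not given in the abstract; the sketch below is a minimal illustrative instance in which each modality's weight is derived from its own confidence (the function name and the `sharpness` parameter are assumptions):

```python
import numpy as np

def gm_fuse(modality_probs, sharpness=5.0):
    """Fuse per-modality class probabilities with input-dependent weights.

    modality_probs: array of shape (n_modalities, n_classes).
    Each modality's weight grows with its own confidence (its max class
    probability), so the weights adapt per sample instead of being static.
    """
    probs = np.asarray(modality_probs, dtype=float)
    confidence = probs.max(axis=1)             # per-modality confidence
    weights = np.exp(sharpness * confidence)
    weights = weights / weights.sum()          # normalize to a convex combination
    return weights @ probs                     # fused class distribution
```

A confident modality (e.g. a clean audio channel) then dominates the fused decision, while an uncertain one contributes less, which is the behavior a static ensemble weight cannot provide.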


Subjects
Algorithms; Artificial Intelligence; Humans; Emotions/physiology; Learning; Recognition, Psychology; Electroencephalography/methods
4.
Appl Intell (Dordr) ; 51(5): 2890-2907, 2021.
Article in English | MEDLINE | ID: mdl-34764573

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a novel, harmful respiratory disease that has rapidly spread worldwide. At the end of 2019, COVID-19 emerged as a previously unknown respiratory disease in Wuhan, Hubei Province, China. The World Health Organization (WHO) declared the coronavirus outbreak a pandemic in the second week of March 2020. Simultaneous deep learning detection and classification of COVID-19 from full-resolution digital X-ray images is key to efficiently assisting patients by enabling physicians to reach fast and accurate diagnostic decisions. In this paper, a simultaneous deep learning computer-aided diagnosis (CAD) system based on the YOLO predictor is proposed that can detect and diagnose COVID-19, differentiating it from eight other respiratory diseases: atelectasis, infiltration, pneumothorax, masses, effusion, pneumonia, cardiomegaly, and nodules. The proposed CAD system was assessed via five-fold tests for the multi-class prediction problem using two different databases of chest X-ray images: COVID-19 and ChestX-ray8. The proposed CAD system was trained on an annotated set of 50,490 chest X-ray images. Regions of the X-ray images with lesions suspected of being due to COVID-19 were simultaneously detected and classified end-to-end by the proposed CAD predictor, achieving overall detection and classification accuracies of 96.31% and 97.40%, respectively. Most test images from patients with confirmed COVID-19 and other respiratory diseases were correctly predicted, achieving an average intersection over union (IoU) greater than 90%. Applying the deep learning regularizers of data balancing and augmentation improved COVID-19 diagnostic performance by 6.64% and 12.17% in terms of overall accuracy and F1-score, respectively. The proposed CAD system can produce a diagnosis from an individual chest X-ray image within 0.0093 s, i.e., at a rate of about 108 frames/s (FPS), which is close to real time. The proposed deep learning CAD system can reliably differentiate COVID-19 from other respiratory diseases and appears to be a reliable tool for practically assisting health care systems, patients, and physicians.

5.
IEEE J Biomed Health Inform ; 25(7): 2686-2697, 2021 07.
Article in English | MEDLINE | ID: mdl-33264095

ABSTRACT

OBJECTIVE: Given the scenario of a limited labeled dataset, this paper introduces a deep learning-based approach that improves Diabetic Retinopathy (DR) severity recognition using fundus images combined with wide-field swept-source optical coherence tomography angiography (SS-OCTA). METHODS: The proposed architecture comprises a backbone convolutional network associated with a Twofold Feature Augmentation mechanism, namely TFA-Net. The former includes multiple convolution blocks extracting representational features at various scales. The latter is constructed in two stages, i.e., the use of weight-sharing convolution kernels and the deployment of a Reverse Cross-Attention (RCA) stream. RESULTS: The proposed model achieves a Quadratic Weighted Kappa rate of 90.2% on the small internal KHUMC dataset. The robustness of the RCA stream is also evaluated on the single-modal Messidor dataset, where the obtained mean Accuracy (94.8%) and Area Under the Receiver Operating Characteristic curve (99.4%) significantly outperform the state of the art. CONCLUSION: Using a network strongly regularized in feature space to learn the amalgamation of different modalities is demonstrably effective. Given the widespread availability of multi-modal retinal imaging for diabetes patients, such an approach can reduce the heavy reliance on large quantities of labeled visual data. SIGNIFICANCE: Our TFA-Net coordinates hybrid information from fundus photos and wide-field SS-OCTA to exhaustively exploit DR-oriented biomarkers. Moreover, the embedded feature-wise augmentation scheme efficiently enriches generalization ability despite learning from small-scale labeled data.
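The Quadratic Weighted Kappa reported above is a standard agreement metric for ordinal grades (such as DR severity levels), penalizing disagreements by the squared distance between grades. A minimal reference implementation of the standard formula:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic Weighted Kappa for ordinal labels in {0, ..., n_classes - 1}."""
    # Observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic penalty weights: larger for grades that are further apart
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # Expected matrix under chance agreement (outer product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0; chance-level agreement yields 0; systematic large-distance errors can drive the score negative.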


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Angiography; Diabetic Retinopathy/diagnostic imaging; Fundus Oculi; Humans; Retina/diagnostic imaging; Tomography, Optical Coherence
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1992-1995, 2020 07.
Article in English | MEDLINE | ID: mdl-33018394

ABSTRACT

Diabetic Retinopathy (DR), a complication leading to vision loss, is generally graded according to the combination of various structural factors in fundus photography, such as the number of microaneurysms, hemorrhages, and vascular abnormalities. To this end, Convolutional Neural Networks (CNNs), with their impressive representational power, have been extensively applied to this problem. However, while existing multi-stream networks are costly, conventional CNNs do not consider multiple levels of semantic context, resulting in the loss of spatial correlations between the aforementioned DR-related signs. Therefore, this paper proposes a Densely Reversed Attention based CNN (DRAN) that learns to integrate channel-wise attention over multi-level features of a pretrained network, explicitly involving spatial representations of important DR-oriented factors. The proposed approach attains a quadratic weighted kappa of 85.6% on the Kaggle DR detection dataset, which is competitive with the state of the art.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Microaneurysm; Attention; Diabetic Retinopathy/diagnosis; Diagnostic Techniques, Ophthalmological; Humans; Neural Networks, Computer
7.
Int J Med Inform ; 132: 103926, 2019 12.
Article in English | MEDLINE | ID: mdl-31605882

ABSTRACT

BACKGROUND: Diabetic Retinopathy (DR) is a pathology of retinal vascular complications that remains among the leading causes of vision impairment and blindness. Precisely tracking its progression therefore enables ophthalmologists to set appropriate next-visit schedules and cost-effective treatment plans. In the literature, existing work uses only numerical attributes from Electronic Medical Records (EMR) to acquire such DR-oriented knowledge through conventional machine learning techniques, which require exhaustive engineering of the most impactful risk factors. OBJECTIVE: In this paper, a deep bimodal learning approach is introduced to improve the identification of DR risk progression. METHODS: In particular, we incorporate valuable clinical information from fundus photography in addition to the aforementioned systemic attributes. Accordingly, a Trilogy of Skip-connection Deep Networks, namely Tri-SDN, is proposed to exhaustively exploit the underlying relationships between the baseline and follow-up information of the fundus images and EMR-based attributes. In addition, we adopt Skip-Connection Blocks as basic components of Tri-SDN to make the end-to-end flow of signals more efficient during feedforward and backpropagation. RESULTS: Under a 10-fold cross validation strategy on a private dataset of 96 diabetes mellitus patients, the proposed method attains performance superior to a conventional EMR-only learning approach in terms of Accuracy (90.6%), Sensitivity (96.5%), Precision (88.7%), Specificity (82.1%), and Area Under the Receiver Operating Characteristic curve (88.8%). CONCLUSIONS: The experimental results show that the proposed Tri-SDN combines features of different modalities (i.e., fundus images and EMR-based numerical risk factors) smoothly and effectively during training and testing. Consequently, with its strong performance in recognizing DR risk progression, the proposed approach can help ophthalmologists decide follow-up schedules and subsequent treatment plans.
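The metrics reported above all derive from a binary confusion matrix (here, progression vs. no progression). A minimal sketch of how Accuracy, Sensitivity, Precision, and Specificity are computed from predicted and true labels (the function name is illustrative):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, precision, specificity from binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # true positive rate
        "precision": tp / (tp + fp) if tp + fp else 0.0,    # positive predictive value
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # true negative rate
    }
```

In a 10-fold cross validation, these values would be computed per fold and averaged.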


Subjects
Algorithms; Diabetic Retinopathy/diagnosis; Electronic Health Records/statistics & numerical data; Fundus Oculi; Image Processing, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/etiology; Humans; Photography; ROC Curve; Risk Factors
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 36-39, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31945839

ABSTRACT

With the recent advent of deep learning in medical image processing, retinal blood vessel segmentation has been addressed comprehensively by numerous research works. However, since the ratio between vessel and background pixels is heavily imbalanced, many attempts have used patches augmented from the original fundus images together with fully convolutional networks to address this pixel-wise labeling problem, which incurs significant computational cost. In this paper, a method using Round-wise Features Aggregation on Bracket-shaped convolutional neural networks (RFA-BNet) is proposed that removes the need for patch augmentation while efficiently handling the irregular and diverse representation of retinal vessels. In particular, given raw fundus images, typical feature maps extracted from a pretrained backbone network feed a bracket-shaped decoder, wherein middle-scale features are exploited round by round. The highest-resolution decoded maps of each round are then aggregated, allowing the model to flexibly learn various degrees of embedded semantic detail while retaining proper annotations of thin and small vessels. The proposed approach demonstrated its effectiveness in terms of sensitivity (0.7932), specificity (0.9741), accuracy (0.9511), and AUROC (0.9732) on the DRIVE dataset.


Subjects
Neural Networks, Computer; Retinal Vessels; Deep Learning; Fundus Oculi; Image Processing, Computer-Assisted
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2478-2481, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946400

ABSTRACT

Nowadays, human activity recognition (HAR) plays a crucial role in the healthcare and wellness domains; for example, HAR is a core technology of context-aware systems such as elder home assistance and care. Despite the promising recognition accuracy achieved by advances in machine learning for classification tasks, most existing HAR approaches, which adopt low-level handcrafted features, cannot fully handle practical activities. Therefore, in this paper we present an efficient wearable-sensor-based activity recognition method that encodes inertial data into color image data so that highly discriminative features can be learned by convolutional neural networks (CNNs). The proposed data encoding technique converts tri-axial samples to color pixels and then arranges them into an image-formed representation. Our method reaches a recognition accuracy of over 95% on two challenging activity datasets and outperforms other deep learning-based HAR approaches.
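The encoding described above maps each tri-axial sample to one color pixel. The abstract does not specify the exact scaling or layout, so the sketch below is one plausible realization under the assumption of per-axis min-max scaling to [0, 255] and row-major tiling (the function name and `width` parameter are assumptions):

```python
import numpy as np

def encode_inertial_to_image(samples, width=32):
    """Map tri-axial inertial samples to RGB pixels and tile them into an image.

    samples: array of shape (n, 3), e.g. accelerometer x, y, z per time step.
    Each axis is min-max scaled to [0, 255] and becomes one color channel.
    """
    s = np.asarray(samples, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)          # avoid division by zero
    pixels = ((s - lo) / span * 255).astype(np.uint8)
    # Pad to a full rectangle and reshape into (rows, width, RGB)
    n_rows = int(np.ceil(len(pixels) / width))
    padded = np.zeros((n_rows * width, 3), dtype=np.uint8)
    padded[:len(pixels)] = pixels
    return padded.reshape(n_rows, width, 3)
```

The resulting image can then be fed to an off-the-shelf CNN image classifier, which is the design motivation stated in the abstract: reusing mature image models instead of hand-engineering inertial features.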


Subjects
Activities of Daily Living; Neural Networks, Computer; Wearable Electronic Devices; Humans; Machine Learning