Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-37028018

ABSTRACT

Getting prompt insights about health and well-being in a non-invasive way is one of the most popular features of wearable devices. Among all available vital signs, heart rate (HR) monitoring is one of the most important, since other measurements build on it. Real-time HR estimation in wearables mostly relies on photoplethysmography (PPG), a suitable technique for this task. However, PPG is vulnerable to motion artifacts (MA); as a consequence, HR estimated from PPG signals is strongly degraded during physical exercise. Different approaches have been proposed to deal with this problem, but they struggle to handle exercises with strong movements, such as a running session. In this paper, we present a new method for HR estimation in wearables that uses an accelerometer signal and user demographics to support HR prediction when the PPG signal is corrupted by motion artifacts. The algorithm requires only a tiny memory allocation and allows on-device personalization, since the model parameters are fine-tuned in real time during workouts. Moreover, the model can predict HR for a few minutes without using PPG at all, a useful contribution to an HR estimation pipeline. We evaluate our model on five exercise datasets - recorded on treadmills and in outdoor environments - and the results show that our method improves the coverage of a PPG-based HR estimator while keeping a similar error, which is particularly useful for improving user experience.
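The idea of a lightweight accelerometer-based fallback that is personalized online can be sketched as follows. This is a minimal illustration, not the paper's model: the feature set, the linear form, and all constants below are assumptions, and the "demographics" are a hypothetical age/weight encoding. The model fine-tunes with plain SGD whenever a clean PPG reading is available, and predicts from the accelerometer alone when PPG is corrupted.

```python
import numpy as np

class AccelHRFallback:
    """Minimal linear fallback HR estimator: predicts heart rate from
    accelerometer-derived features plus user demographics, and fine-tunes
    its weights online whenever a clean PPG-based HR reading arrives."""

    def __init__(self, n_features, lr=1e-3):
        self.w = np.zeros(n_features)
        self.b = 120.0          # rough prior for exercise HR (bpm)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x + self.b)

    def update(self, x, hr_ppg):
        # One SGD step on squared error against the trusted PPG reading.
        err = self.predict(x) - hr_ppg
        self.w -= self.lr * err * x
        self.b -= self.lr * err

def features(accel_window, age, weight):
    # Hypothetical feature vector: movement intensity stats + demographics.
    mag = np.linalg.norm(accel_window, axis=1)
    return np.array([mag.mean(), mag.std(), age / 100.0, weight / 100.0])

model = AccelHRFallback(n_features=4)
rng = np.random.default_rng(0)
for _ in range(200):                       # PPG clean: personalize online
    window = rng.normal(0.0, 1.5, size=(50, 3))
    x = features(window, age=30, weight=70)
    synthetic_hr = 60.0 + 25.0 * x[0]      # synthetic "clean PPG" reading
    model.update(x, synthetic_hr)

# PPG corrupted by motion: fall back to accelerometer-only prediction.
x = features(rng.normal(0.0, 1.5, size=(50, 3)), age=30, weight=70)
hr_fallback = model.predict(x)
print(f"fallback HR: {hr_fallback:.1f} bpm")
```

Because both the features and the weight vector are tiny, the memory footprint of such a fallback is a few floats, which is what makes on-device personalization feasible.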

2.
IEEE J Biomed Health Inform ; 25(9): 3554-3563, 2021 09.
Article in English | MEDLINE | ID: mdl-33635800

ABSTRACT

Computer-aided skin cancer classification systems built with deep neural networks usually yield predictions based only on images of skin lesions. Despite presenting promising results, higher performance can be achieved by taking into account patient demographics, important clues that human experts consider during skin lesion screening. In this article, we deal with the problem of combining image and metadata features using deep learning models applied to skin cancer classification. We propose the Metadata Processing Block (MetaBlock), a novel algorithm that uses metadata to support classification by enhancing the most relevant features extracted from the images throughout the classification pipeline. We compared the proposed method with two other combination approaches: MetaNet and one based on feature concatenation. Results obtained on two different skin lesion datasets show that our method improves classification for all tested models and performs better than the other combination approaches in 6 out of 10 scenarios.
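The feature-enhancement idea can be illustrated with a simplified sketch: patient metadata is projected to one gate value per feature channel and used to rescale the image feature maps. This is an assumption-laden illustration of metadata-based gating in general, not the exact MetaBlock formulation, and the shapes and projection below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def metadata_gate(img_feats, meta, W, b):
    """Gate image feature maps with a metadata-derived attention vector.

    img_feats: (C, H, W) feature maps from a CNN backbone
    meta:      (M,) encoded patient metadata (age, lesion site, ...)
    W, b:      learned projection from metadata to one gate per channel

    Simplified illustration of metadata-based feature enhancement;
    the published MetaBlock uses a more elaborate gating scheme.
    """
    gate = sigmoid(W @ meta + b)            # (C,) values in (0, 1)
    return img_feats * gate[:, None, None]  # rescale each channel

rng = np.random.default_rng(0)
C, M = 8, 5
feats = rng.normal(size=(C, 4, 4))          # toy backbone features
meta = rng.normal(size=M)                   # toy encoded metadata
W, b = rng.normal(size=(C, M)), np.zeros(C)
enhanced = metadata_gate(feats, meta, W, b)
print(enhanced.shape)                       # same shape as the input maps
```

Because the gate is applied inside the pipeline rather than at the classifier input, channels the metadata deems relevant are amplified relative to the rest before classification, which is the intuition behind enhancing "the most relevant features" described above.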


Subjects
Deep Learning , Skin Neoplasms , Dermoscopy , Humans , Metadata , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging
3.
Data Brief ; 32: 106221, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32939378

ABSTRACT

Over the past few years, different Computer-Aided Diagnosis (CAD) systems have been proposed to tackle skin lesion analysis. Most of these systems work only with dermoscopy images, since there is a marked lack of publicly available archives of clinical images with which to evaluate such CAD systems. To fill this gap, we release a skin lesion benchmark composed of clinical images collected from smartphone devices together with patient clinical data containing up to 21 features. The dataset consists of 1373 patients, 1641 skin lesions, and 2298 images covering six different diagnoses: three skin diseases and three skin cancers. In total, 58.4% of the skin lesions are biopsy-proven, including 100% of the skin cancers. By releasing this benchmark, we aim to support future research and the development of new tools to assist clinicians in detecting skin cancer.
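Working with a benchmark of this kind typically starts from its metadata table. The sketch below shows one way to compute a biopsy-proven share from a toy CSV; the column names (`lesion_id`, `diagnosis`, `biopsed`) and diagnosis codes are illustrative assumptions, not the dataset's actual schema.

```python
import csv
import io

# Toy stand-in for a benchmark metadata file; schema is hypothetical.
metadata_csv = io.StringIO(
    "lesion_id,diagnosis,biopsed\n"
    "L001,NEV,False\n"
    "L002,BCC,True\n"
    "L003,ACK,True\n"
    "L004,MEL,True\n"
)

rows = list(csv.DictReader(metadata_csv))
biopsied = [r for r in rows if r["biopsed"] == "True"]
share = len(biopsied) / len(rows)
print(f"{share:.1%} of lesions are biopsy-proven")  # 75.0% on this toy sample
```

On the real benchmark the same filter would reproduce the 58.4% figure quoted above, with the cancer diagnoses all falling in the biopsy-proven subset.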

4.
Comput Biol Med ; 116: 103545, 2020 01.
Article in English | MEDLINE | ID: mdl-31760271

ABSTRACT

Skin cancer is one of the most common types of cancer worldwide. Over the past few years, different approaches have been proposed for automated skin cancer detection. Nonetheless, most of them rely only on dermoscopic images and do not take into account the patient's clinical information, an important clue towards clinical diagnosis. In this work, we present an approach to fill this gap. First, we introduce a new dataset composed of clinical images, collected with smartphones, and clinical data related to the patient. Next, we propose a straightforward method that adds an aggregation mechanism to well-known deep learning models to combine features from images and clinical data. Last, we carry out experiments to compare the models' performance with and without this mechanism. The results show an improvement of approximately 7% in balanced accuracy when the aggregation method is applied. Overall, the impact of clinical data on model performance is significant and shows the importance of including these features in automated skin cancer detection.
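The simplest form of such aggregation is concatenating the image feature vector with the encoded clinical data before a classifier head. The sketch below illustrates that baseline under stated assumptions (512-dimensional pooled image features, 21 clinical features, six classes); it is not the paper's exact mechanism.

```python
import numpy as np

def aggregate_and_classify(img_feats, clin_feats, W, b):
    """Concatenate image and clinical feature vectors, then apply a
    softmax linear head over the combined representation."""
    combined = np.concatenate([img_feats, clin_feats])  # (D_img + D_clin,)
    logits = W @ combined + b                           # (n_classes,)
    exp = np.exp(logits - logits.max())                 # stable softmax
    return exp / exp.sum()                              # class probabilities

rng = np.random.default_rng(0)
img_feats = rng.normal(size=512)    # e.g., pooled CNN backbone features
clin_feats = rng.normal(size=21)    # e.g., encoded clinical attributes
n_classes = 6                       # three diseases + three cancers
W = rng.normal(scale=0.01, size=(n_classes, 512 + 21))
b = np.zeros(n_classes)

probs = aggregate_and_classify(img_feats, clin_feats, W, b)
print(probs.shape)
```

In practice the head would be trained end-to-end with the backbone; comparing such a combined model against an image-only head is exactly the kind of with/without experiment the abstract describes.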


Subjects
Deep Learning , Diagnosis, Computer-Assisted/methods , Skin Neoplasms/diagnosis , Algorithms , Databases, Factual , Humans , Skin/diagnostic imaging , Skin/pathology , Skin Neoplasms/pathology