Results 1 - 4 of 4

1.
Sensors (Basel) ; 20(13), 2020 Jul 06.
Article in English | MEDLINE | ID: mdl-32640587

ABSTRACT

Smartwatch battery limitations are one of the biggest hurdles to their acceptability in the consumer market. To our knowledge, despite promising studies analyzing smartwatch battery data, little research has analyzed the battery usage of a diverse set of smartwatches in a real-world setting. To address this gap, this paper utilizes a smartwatch dataset collected from 832 real-world users, covering different smartwatch brands and geographic locations. First, we employ clustering to identify common patterns of smartwatch battery utilization; second, we introduce a transparent low-parameter convolutional neural network model, which allows us to identify the latent patterns of smartwatch battery utilization. Our model casts the battery consumption rate as a binary classification problem, i.e., low versus high consumption. It achieves 85.3% accuracy in predicting high battery discharge events, outperforming the other machine learning algorithms used in state-of-the-art research. In addition, the learned filters of its feature extractor can be inspected to extract interpretable information, which is not possible with the other models. Third, we introduce an indexing method, including a longitudinal study, to quantify changes in smartwatch battery quality over time. Our findings can assist device manufacturers, vendors and application developers, as well as end-users, in improving smartwatch battery utilization.
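The following is a minimal sketch, not the authors' released model, of the kind of low-parameter 1D convolutional classifier the abstract describes: a window of battery-level readings is mapped to a low/high discharge label. The window length, channel counts, and class encoding are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a low-parameter 1D CNN that labels a
# window of battery-level readings as low vs. high discharge.
import torch
import torch.nn as nn

class TinyBatteryCNN(nn.Module):
    def __init__(self, window_len: int = 60):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),   # learned filters that can be inspected
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, 2)  # two classes: low / high consumption

    def forward(self, x):                    # x: (batch, 1, window_len)
        z = self.features(x).squeeze(-1)     # (batch, 16)
        return self.classifier(z)            # class logits

# Example: a batch of 4 hypothetical battery-level windows, values scaled to [0, 1].
model = TinyBatteryCNN()
logits = model(torch.rand(4, 1, 60))
pred = logits.argmax(dim=1)                  # 0 = low, 1 = high discharge
```

Keeping the feature extractor this small is what makes the learned filters easy to visualize and interpret, which is the transparency property the abstract emphasizes.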

2.
Comput Biol Med ; 149: 106033, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36041270

ABSTRACT

Medical image segmentation is a key initial step in several therapeutic applications. While most automatic segmentation models are supervised and require a well-annotated paired dataset, we introduce a novel annotation-free pipeline for segmenting COVID-19 CT images. Our pipeline consists of three main subtasks: automatically generating a 3D pseudo-mask in a self-supervised mode using a generative adversarial network (GAN), enhancing the quality of the pseudo-mask, and building a multi-objective segmentation model to predict lesions. Our proposed 3D GAN architecture removes infected regions from COVID-19 images and generates synthesized healthy images while keeping the 3D structure of the lung unchanged. A 3D pseudo-mask is then generated by subtracting the synthesized healthy images from the original COVID-19 CT images. We enhance the pseudo-masks using a contrastive learning approach to build a region-aware segmentation model that focuses more on the infected area. The final segmentation model can predict lesions in COVID-19 CT images without any manual pixel-level annotation. We show that our approach outperforms existing state-of-the-art unsupervised and weakly supervised segmentation techniques on three datasets by a reasonable margin. Specifically, our method improves the segmentation results for CT images with low infection, increasing sensitivity by 20% and the Dice score by up to 4%. The proposed pipeline overcomes some of the major limitations of existing unsupervised segmentation approaches and opens up a novel horizon for different applications of medical image segmentation.
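A minimal sketch of the pseudo-mask idea described above, not the authors' implementation: subtract the GAN-synthesized "healthy" CT volume from the original COVID-19 volume and threshold the residual to obtain a binary 3D pseudo-mask. The threshold value and volume shapes are illustrative assumptions.

```python
# Sketch of pseudo-mask generation by subtraction (illustrative, not the paper's code).
import numpy as np

def pseudo_mask(original_ct: np.ndarray, synthesized_healthy: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Return a binary 3D pseudo-mask marking voxels the GAN 'removed' as infected."""
    residual = original_ct - synthesized_healthy    # infected regions remain positive
    residual = np.clip(residual, 0, None)           # discard negative differences
    return (residual > threshold).astype(np.uint8)  # 1 = candidate lesion voxel

# Example with hypothetical normalized volumes of shape (depth, height, width).
ct = np.random.rand(64, 128, 128).astype(np.float32)
healthy = np.clip(ct - 0.05 * np.random.rand(64, 128, 128), 0, 1).astype(np.float32)
mask = pseudo_mask(ct, healthy)
print(mask.shape, mask.mean())                      # fraction of voxels flagged
```

In the described pipeline, a mask produced this way serves only as a noisy training signal; the contrastive refinement and the downstream segmentation model are what produce the final lesion predictions.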


Subject(s)
COVID-19 , Image Processing, Computer-Assisted , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Tomography, X-Ray Computed
3.
Diagnostics (Basel) ; 12(12), 2022 Dec 16.
Article in English | MEDLINE | ID: mdl-36553200

ABSTRACT

Background: Oral diseases such as periodontal (gum) disease are known to be closely linked to various systemic diseases and disorders. Advances in deep learning have the potential to make major contributions to healthcare, particularly in domains that rely on medical imaging, and incorporating non-imaging information from clinical and laboratory data may allow clinicians to make more comprehensive and accurate decisions. Methods: Here, we developed a multimodal deep learning method to predict systemic diseases and disorders from oral health conditions. In the first phase, a dual-loss autoencoder was used to extract periodontal disease-related features from 1188 panoramic radiographs. In the second phase, we fused the image features with demographic data and clinical information taken from electronic health records (EHR) to predict systemic diseases. We used the receiver operating characteristic (ROC) curve and accuracy to evaluate our model, which was further validated on an unseen test dataset. Findings: The three most accurately predicted chapters, in order, were Chapters III, VI and IX. The proposed model predicted systemic diseases belonging to Chapters III, VI and IX with AUC values of 0.92 (95% CI, 0.90-0.94), 0.87 (95% CI, 0.84-0.89) and 0.78 (95% CI, 0.75-0.81), respectively. To assess the robustness of the models, we evaluated them on the unseen test dataset for these chapters, obtaining accuracies of 0.88, 0.82 and 0.72 for Chapters III, VI and IX, respectively. Interpretation: The present study shows that panoramic radiographs combined with clinical oral features can be used to train a fusion deep learning model for predicting systemic diseases and disorders.
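A minimal sketch of the late-fusion step described in the Methods, not the authors' implementation: image features from a pretrained encoder are concatenated with demographic/EHR features and passed to a small prediction head. The feature dimensions and the number of target chapters are illustrative assumptions.

```python
# Illustrative fusion classifier: radiograph embedding + EHR vector -> chapter logits.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, img_feat_dim: int = 128, ehr_dim: int = 16, n_chapters: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + ehr_dim, 64),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(64, n_chapters),  # one logit per predicted disease chapter
        )

    def forward(self, img_features, ehr_features):
        fused = torch.cat([img_features, ehr_features], dim=1)  # simple concatenation fusion
        return self.head(fused)                                 # multi-label logits

# Example: a batch of 8 hypothetical radiograph embeddings and EHR feature vectors.
model = FusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 16))
probs = torch.sigmoid(logits)   # per-chapter probabilities
```

Concatenation is the simplest fusion choice; the point of the sketch is only to show where the autoencoder-derived image features and the EHR variables meet before prediction.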

4.
NPJ Digit Med ; 4(1): 29, 2021 Feb 18.
Article in English | MEDLINE | ID: mdl-33603193

ABSTRACT

Coronavirus disease 2019 (Covid-19) is highly contagious, with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial for reducing the spread of the disease and its associated mortality. Currently, detection by reverse transcriptase-polymerase chain reaction (RT-PCR) is the gold standard for outpatient and inpatient detection of Covid-19. RT-PCR is rapid; however, its detection accuracy is only ~70-75%. Another approved strategy is computed tomography (CT) imaging, which has a much higher sensitivity of ~80-98% but a similar accuracy of ~70%. To enhance the accuracy of CT-based detection, we developed an open-source framework, CovidCTNet, composed of a set of deep learning algorithms that accurately differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 95%, compared with 70% for radiologists. It is designed to work with heterogeneous and small sample sizes, independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and model parameter details as open source. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.
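For orientation only, here is a minimal sketch, not the released CovidCTNet code, of a 3D CNN that maps a preprocessed CT volume to one of three classes (Covid-19, CAP, other). The input volume size and layer widths are illustrative assumptions.

```python
# Illustrative 3-class 3D CNN classifier for CT volumes (not CovidCTNet itself).
import torch
import torch.nn as nn

class TinyCTClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, n_classes)  # Covid-19 / CAP / other

    def forward(self, x):                   # x: (batch, 1, depth, height, width)
        return self.fc(self.backbone(x).flatten(1))

# Example with one hypothetical preprocessed CT volume of shape 32x64x64.
model = TinyCTClassifier()
logits = model(torch.rand(1, 1, 32, 64, 64))
print(logits.softmax(dim=1))                # class probabilities
```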
