Results 1 - 3 of 3
1.
JMIR Aging; 7: e53793, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39283346

ABSTRACT

Background: Cognitive impairment and dementia pose a significant challenge to the aging population, impacting the well-being, quality of life, and autonomy of affected individuals. As the population ages, this will place enormous strain on health care and economic systems. While computerized cognitive training programs have demonstrated some promise in addressing cognitive decline, adherence to these interventions can be challenging.
Objective: The objective of this study is to improve the accuracy of predicting adherence lapses, with the ultimate goal of developing tailored adherence support systems that promote engagement with cognitive training among older adults.
Methods: Data from 2 previously conducted cognitive training intervention studies were used to forecast adherence levels among older participants. Deep convolutional neural networks were used to leverage their feature learning capabilities and predict adherence patterns based on past behavior. Domain adaptation (DA) was used to address the challenge of limited training data for each participant by using data from other participants with similar playing patterns. Time series data were converted into image format using Gramian angular fields to facilitate clustering of participants during DA. To the best of our knowledge, this is the first effort to use DA techniques to predict older adults' daily adherence to cognitive training programs.
Results: Our results demonstrated the promise and potential of deep neural networks and DA for predicting adherence lapses. In all 3 studies, using 2 independent datasets, DA consistently produced the best accuracy values.
Conclusions: Our findings highlight that deep learning and DA techniques can aid in the development of adherence support systems for computerized cognitive training, as well as for other interventions aimed at improving health, cognition, and well-being. These techniques can improve engagement and maximize the benefits of such interventions, ultimately enhancing the quality of life of individuals at risk for cognitive impairments. This research informs the development of more effective interventions, benefiting individuals and society by improving conditions associated with aging.
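The abstract describes converting adherence time series into images with Gramian angular fields before clustering and convolutional modeling. Below is a minimal sketch of that general transformation, not the authors' code: the min-max rescaling detail and the example adherence values are assumptions made for illustration.

```python
import numpy as np

def gramian_angular_field(series):
    """Convert a 1-D time series into a Gramian angular summation field image."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so each value can be interpreted as a cosine.
    x_min, x_max = x.min(), x.max()
    x = (2 * x - x_max - x_min) / (x_max - x_min + 1e-12)
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # polar-coordinate angles
    # Entry (i, j) of the GASF image is cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

# Hypothetical example: one participant's daily training minutes over two weeks.
adherence = [30, 0, 25, 40, 0, 0, 35, 20, 15, 0, 30, 45, 0, 10]
image = gramian_angular_field(adherence)
print(image.shape)  # (14, 14) single-channel image, usable by a 2-D CNN
```

The resulting n-by-n matrix encodes pairwise temporal relationships of the series, which is what makes it suitable both for clustering participants and for feeding into a convolutional network.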


Subjects
Cognitive Dysfunction , Deep Learning , Humans , Aged , Female , Male , Cognitive Dysfunction/psychology , Cognitive Dysfunction/therapy , Aged, 80 and over , Patient Compliance/psychology , Quality of Life/psychology , Cognitive Training
2.
Med Image Anal; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange among twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms.
Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, which was organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, and 18 short video clips with an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips.
For the segmentation task, the overall baseline was the top performer (aggregated mIoU of 0.6763) and the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better than team SANO overall, with a mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis, and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
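The per-class mIoU figures reported above are standard intersection-over-union scores averaged over classes and test images. A minimal sketch of how such per-class IoU values can be computed from integer label maps is shown below; the function name and the 4-class layout are hypothetical and not taken from the FetReg2021 evaluation code.

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Compute IoU for each class from two integer label maps."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        intersection = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(intersection / union if union else np.nan)
    return ious

# Hypothetical 4-class example: 0=background, 1=vessel, 2=tool, 3=fetus
pred = np.random.randint(0, 4, (256, 256))
target = np.random.randint(0, 4, (256, 256))
ious = per_class_iou(pred, target, 4)
print(ious, np.nanmean(ious))  # per-class IoU and its mean (mIoU)
```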


Subjects
Fetofetal Transfusion , Placenta , Female , Humans , Pregnancy , Algorithms , Fetofetal Transfusion/diagnostic imaging , Fetofetal Transfusion/surgery , Fetofetal Transfusion/pathology , Fetoscopy/methods , Fetus , Placenta/diagnostic imaging
3.
Med Image Anal; 85: 102747, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36702038

ABSTRACT

We present a novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground truth annotations for both the primary and auxiliary tasks. In contrast, we propose generating the pseudo-labels for an auxiliary task in an unsupervised manner. To generate the pseudo-labels, we leverage Histograms of Oriented Gradients (HOG), one of the most widely used and powerful hand-crafted features for detection. Together with the ground truth semantic segmentation masks for the primary task and the pseudo-labels for the auxiliary task, we learn the parameters of the deep network by jointly minimizing the losses of the primary and auxiliary tasks. We applied our method to two powerful and widely used semantic segmentation networks, UNet and U2Net, training them in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation datasets. From the extensive quantitative and qualitative results, we observe that our method consistently improves performance compared to the counterpart method. Moreover, our method is the winner of the FetReg EndoVis Sub-challenge on Semantic Segmentation organised in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
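The auxiliary pseudo-labels described above are derived from HOG features computed on the input image itself, so no extra annotation is needed. Below is a minimal sketch of extracting a dense per-cell HOG map that could serve as such an auxiliary target, using scikit-image; the helper name, cell size, and reshaping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from skimage.feature import hog

def hog_pseudo_label(image, cell=8):
    """Dense per-cell HOG orientation histograms, usable as an unsupervised auxiliary target."""
    features = hog(
        image,
        orientations=9,
        pixels_per_cell=(cell, cell),
        cells_per_block=(1, 1),   # one cell per block -> one histogram per cell
        feature_vector=False,
    )
    # features shape: (H/cell, W/cell, 1, 1, 9); drop the singleton block axes.
    return features.reshape(features.shape[0], features.shape[1], 9)

img = np.random.rand(256, 256)   # stand-in for a grayscale medical image
aux_target = hog_pseudo_label(img)
print(aux_target.shape)          # (32, 32, 9)
```

In a multi-task setup of this kind, the network's auxiliary head would regress this HOG map while the primary head predicts the segmentation mask, and the two losses are summed during training.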


Subjects
Deep Learning , Humans , Semantics , Hand , Image Processing, Computer-Assisted