Results 1 - 20 of 82
1.
Eur Arch Otorhinolaryngol ; 281(8): 4255-4264, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38698163

ABSTRACT

PURPOSE: Informative image selection in laryngoscopy has the potential to improve automatic data extraction on its own, for selective data storage and a faster review process, or in combination with other artificial intelligence (AI) detection or diagnosis models. This paper aims to demonstrate the feasibility of AI for automatic selection of informative laryngoscopy frames, capable of working in real time and providing visual feedback to guide the otolaryngologist during the examination. METHODS: Several deep learning models were trained and tested on an internal dataset (n = 5147 images) and then tested on an external test set (n = 646 images) composed of both white light and narrow band images. Four videos were used to assess the real-time performance of the best-performing model. RESULTS: ResNet-50, pre-trained with the pretext strategy, reached a precision of 95% vs. 97%, recall of 97% vs. 89%, and F1-score of 96% vs. 93% on the internal and external test sets, respectively (p = 0.062). The four testing videos are provided in the supplemental materials. CONCLUSION: The deep learning model demonstrated excellent performance in identifying diagnostically relevant frames within laryngoscopic videos. With its solid accuracy and real-time capabilities, the system is a promising candidate for deployment in a clinical setting, either autonomously for objective quality control or in conjunction with other algorithms within a comprehensive AI toolset aimed at enhancing tumor detection and diagnosis.
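The precision, recall, and F1-score reported above for the binary informative/uninformative frame task follow directly from the confusion-matrix counts; a minimal sketch in NumPy (the labels below are illustrative, not the study's data):

```python
import numpy as np

def frame_selection_metrics(y_true, y_pred):
    """Precision, recall and F1 for a binary informative-frame classifier."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # informative, correctly kept
    fp = np.sum((y_pred == 1) & (y_true == 0))  # uninformative, wrongly kept
    fn = np.sum((y_pred == 0) & (y_true == 1))  # informative, wrongly dropped
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative example: 1 = informative frame, 0 = uninformative
p, r, f1 = frame_selection_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```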


Subject(s)
Deep Learning, Laryngoscopy, Humans, Laryngoscopy/methods, Video Recording, Feasibility Studies, Laryngeal Diseases/diagnosis, Laryngeal Diseases/diagnostic imaging
2.
J Cardiovasc Magn Reson ; 24(1): 62, 2022 11 28.
Article in English | MEDLINE | ID: mdl-36437452

ABSTRACT

BACKGROUND: Segmentation of cardiovascular magnetic resonance (CMR) images is an essential step for evaluating dimensional and functional ventricular parameters such as ejection fraction (EF), but may be limited by artifacts, which represent the major challenge to automatically deriving clinical information. The aim of this study is to investigate the accuracy of a deep learning (DL) approach for automatic segmentation of cardiac structures from CMR images characterized by magnetic susceptibility artifacts in patients with cardiac implanted electronic devices (CIED). METHODS: In this retrospective study, 230 patients (100 with CIED) who underwent clinically indicated CMR were used to develop and test a DL model. A novel convolutional neural network was proposed to extract the left ventricle (LV) and right ventricle (RV) endocardium and the LV epicardium. For successful segmentation, it is important that the network learn to identify salient image regions even in the presence of local magnetic field inhomogeneities. The proposed network takes advantage of a spatial attention module to selectively process the most relevant information and focus on the structures of interest. To improve segmentation, especially for images with artifacts, multiple loss functions were minimized in unison. Segmentation results were assessed against manual tracings and the commercial CMR analysis software cvi42 (Circle Cardiovascular Imaging, Calgary, Alberta, Canada). An external dataset of 56 patients with CIED was used to assess model generalizability. RESULTS: In the internal dataset, on images with artifacts, the median Dice coefficients for the end-diastolic LV cavity, LV myocardium, and RV cavity were 0.93, 0.77, and 0.87, respectively, and 0.91, 0.82, and 0.83 at end-systole. The proposed method reached higher segmentation accuracy than the commercial software, with performance comparable to expert inter-observer variability (bias ± 95% LoA): LVEF 1 ± 8% vs. 3 ± 9%, RVEF -2 ± 15% vs. 3 ± 21%. In the external cohort, EF correlated well with manual tracing (intraclass correlation coefficient: LVEF 0.98, RVEF 0.93). The automatic approach was significantly faster than manual segmentation in providing cardiac parameters (approximately 1.5 s vs. 450 s). CONCLUSIONS: Experimental results show that the proposed method reached promising performance in cardiac segmentation from CMR images with susceptibility artifacts and alleviates time-consuming expert physician contour segmentation.
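The Dice coefficient used above to score automatic LV/RV masks against manual tracings has a compact definition; a minimal sketch on binary masks (the arrays below are illustrative toy masks, not CMR data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

auto = np.zeros((4, 4), bool)
auto[1:3, 1:3] = True      # automatic mask: 4 pixels
manual = np.zeros((4, 4), bool)
manual[1:3, 1:4] = True    # manual tracing: 6 pixels, overlap of 4
score = dice(auto, manual)  # 2*4 / (4 + 6) = 0.8
```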


Subject(s)
Artifacts, Artificial Intelligence, Humans, Retrospective Studies, Predictive Value of Tests, Magnetic Resonance Imaging/methods, Attention
3.
Curr Heart Fail Rep ; 19(2): 38-51, 2022 04.
Article in English | MEDLINE | ID: mdl-35142985

ABSTRACT

PURPOSE OF REVIEW: Application of deep learning (DL) has grown rapidly in recent years, especially in the healthcare domain. This review presents the current state of DL techniques applied to electronic health record structured data, physiological signals, and imaging modalities for the management of heart failure (HF), focusing in particular on diagnosis, prognosis, and re-hospitalization risk, to explore the level of maturity of DL in this field. RECENT FINDINGS: DL allows better integration of different data sources to distill more accurate outcomes in HF patients, thus resulting in better performance than conventional evaluation methods. While applications in image and signal processing for HF diagnosis have reached very high performance, the application of DL to electronic health records and their multisource data for prediction could still be improved, despite already promising results. Embracing the current big data era, DL can improve performance compared to conventional techniques and machine learning approaches. DL algorithms have the potential to provide more efficient care and improve outcomes of HF patients, although further investigation is needed to overcome current limitations, including generalizability of results and the transparency and explainability of the evidence supporting the process.


Subject(s)
Deep Learning, Heart Failure, Algorithms, Big Data, Heart Failure/diagnosis, Heart Failure/therapy, Humans, Machine Learning
4.
World J Surg ; 45(5): 1585-1594, 2021 05.
Article in English | MEDLINE | ID: mdl-33594578

ABSTRACT

BACKGROUND: The use of innovative methodologies such as Surgical Data Science (SDS), based on artificial intelligence (AI), could prove useful for extracting knowledge from clinical data, overcoming limitations inherent in the analysis of medical registries. The aim of the study is to verify whether applying AI analysis to our database could yield a model able to predict cardiopulmonary complications in patients submitted to lung resection. METHODS: We retrospectively analyzed data of patients submitted to lobectomy, bilobectomy, segmentectomy, and pneumonectomy (January 2006-December 2018). Fifty preoperative characteristics were used to predict the occurrence of cardiopulmonary complications. The prediction model was developed by training and testing a machine learning (ML) algorithm (XGBoost) able to deal with registries characterized by missing data. We calculated the receiver operating characteristic curve, true positive rate (TPR), positive predictive value (PPV), and accuracy of the model. RESULTS: We analyzed 1360 patients (lobectomy: 80.7%, segmentectomy: 11.9%, bilobectomy: 3.7%, pneumonectomy: 3.7%), 23.3% of whom experienced cardiopulmonary complications. The XGBoost algorithm generated a model able to predict complications with an area under the curve of 0.75, a TPR of 0.76, and a PPV of 0.68. The model's accuracy was 0.70. The algorithm included all the variables in the model regardless of their completeness. CONCLUSIONS: Using SDS principles in thoracic surgery for the first time, we developed an ML model able to predict cardiopulmonary complications after lung resection based on 50 patient characteristics. Prediction was possible even for patients with incomplete data. This model could improve counseling and the perioperative management of lung resection candidates.


Subject(s)
Thoracic Surgery, Artificial Intelligence, Data Science, Humans, Machine Learning, Retrospective Studies
5.
Measurement (Lond) ; 184: 109946, 2021 Nov.
Article in English | MEDLINE | ID: mdl-36540410

ABSTRACT

This study defines a methodology to measure physical activity (PA) in ageing people working in a social garden while maintaining social distancing (SD) during the COVID-19 pandemic. A real-time location system (RTLS) with embedded inertial measurement unit (IMU) sensors is used to measure PA and SD. The position of each person is tracked to assess their SD, finding that the RTLS/IMU can measure the time in which interpersonal distance is not kept with a maximum uncertainty of 1.54 min; compared to the 15-min limit suggested to reduce the risk of transmission at less than 1.5 m, this proves the feasibility of the measurement. The data collected by the accelerometers of the IMU sensors are filtered using the discrete wavelet transform and used to measure PA in ageing people with an uncertainty-based thresholding method. PA and SD time measurements were demonstrated in an experimental test in a pilot case with real users.
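The abstract does not specify the wavelet family or decomposition depth; as a hedged illustration of the discrete wavelet transform step, the sketch below implements a single-level Haar decomposition (and its inverse) in plain NumPy on an illustrative accelerometer-like trace:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete wavelet transform with the Haar basis:
    returns (approximation, detail) coefficients of half length."""
    x = np.asarray(signal, float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: smooth trend
    detail = (even - odd) / np.sqrt(2)   # high-pass: rapid changes / noise
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform reconstructing the original even-length signal."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    x = np.empty(2 * len(approx))
    x[0::2], x[1::2] = even, odd
    return x

t = np.linspace(0, 1, 64)
accel = np.sin(2 * np.pi * 2 * t)    # illustrative accelerometer trace
a, d = haar_dwt(accel)
# A denoising step would zero small detail coefficients here before inversion.
rec = haar_idwt(a, d)                # untouched coefficients reconstruct exactly
```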

6.
Liver Transpl ; 26(10): 1224-1232, 2020 10.
Article in English | MEDLINE | ID: mdl-32426934

ABSTRACT

The worldwide expansion of the liver graft pool using marginal livers (ie, grafts with a high risk of technical complications and impaired function, or with a risk of transmitting infection or malignancy to the recipient) has led to growing interest in developing methods for accurate evaluation of graft quality. Liver steatosis is associated with a higher risk of primary nonfunction, early graft dysfunction, and poor graft survival. The present study aimed to analyze the value of artificial intelligence (AI) in the assessment of liver steatosis during procurement, compared with liver biopsy evaluation. A total of 117 consecutive liver grafts from brain-dead donors were included and classified into 2 cohorts: ≥30% versus <30% hepatic steatosis. AI analysis required an intraoperative smartphone liver picture as well as a graft biopsy and donor data. First, a new algorithm based on current visual recognition methods was developed, trained, and validated to obtain automatic liver graft segmentation from smartphone images. Second, fully automated texture analysis and classification of the liver graft were performed by machine-learning algorithms. Automatic liver graft segmentation from smartphone images achieved an accuracy (Acc) of 98%, whereas analysis of the liver graft features (cropped picture and donor data) showed an Acc of 89% in graft classification (≥30% versus <30%). This study demonstrates that AI has the potential to assess steatosis in a handy and noninvasive way, to reliably identify potentially nontransplantable liver grafts, and to avoid improper graft utilization.


Subject(s)
Fatty Liver, Liver Transplantation, Artificial Intelligence, Fatty Liver/diagnostic imaging, Graft Survival, Humans, Liver/diagnostic imaging, Liver/surgery, Liver Transplantation/adverse effects, Tissue Donors
7.
Sensors (Basel) ; 20(18)2020 Sep 18.
Article in English | MEDLINE | ID: mdl-32962134

ABSTRACT

Background: Heartbeat detection is a crucial step in several clinical fields. The Laser Doppler Vibrometer (LDV) is a promising non-contact measurement technique for heartbeat detection. The aim of this work is to assess whether machine learning can be used to detect heartbeats from the carotid LDV signal. Methods: The performance of Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN) classifiers was compared using leave-one-subject-out cross-validation as the testing protocol on an LDV dataset collected from 28 subjects. Classification was conducted on LDV signal windows, which were labeled as beat if they contained a beat and as no-beat otherwise. The labeling procedure was performed using electrocardiography as the gold standard. Results: For the beat class, the f1-score (f1) values were 0.93, 0.93, 0.95, and 0.96 for RF, DT, KNN, and SVM, respectively. No statistical differences were found between the classifiers. When testing the SVM on the full-length (10 min long) LDV signals, to simulate a real-world application, we achieved a median macro-f1 of 0.76. Conclusions: Using machine learning for heartbeat detection from carotid LDV signals showed encouraging results, representing a promising step in the field of contactless cardiovascular signal analysis.
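Leave-one-subject-out cross-validation, as used above, holds out every window from one subject per fold so the classifier is never tested on a subject it has seen. A minimal sketch with scikit-learn's `LeaveOneGroupOut` and an SVM, on synthetic stand-in windows (the real study used 28 subjects and ECG-derived labels):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in: 80 feature windows from 4 subjects, 20 each.
X = rng.normal(size=(80, 6))
y = (X[:, 0] > 0).astype(int)            # hypothetical beat / no-beat label
subjects = np.repeat(np.arange(4), 20)   # subject ID per window

logo = LeaveOneGroupOut()
scores = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    # All windows of exactly one subject are held out in each fold.
    clf = SVC().fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
# One accuracy score per held-out subject
```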


Subject(s)
Heart Rate, Machine Learning, Support Vector Machine, Electrocardiography, Humans, Lasers, Vibration
8.
MAGMA ; 32(2): 187-195, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30460430

ABSTRACT

OBJECTIVE: The aim of this paper is to investigate the use of fully convolutional neural networks (FCNNs) to segment scar tissue in the left ventricle from cardiac magnetic resonance images with late gadolinium enhancement (CMR-LGE). METHODS: A successful FCNN from the literature (ENet) was modified and trained to provide scar-tissue segmentation. Two segmentation protocols (Protocol 1 and Protocol 2) were investigated, the latter limiting the scar-segmentation search area to the left ventricular myocardial tissue region. CMR-LGE images from 30 patients with ischemic heart disease were retrospectively analyzed, for a total of 250 images, presenting high variability in scar dimension and location. Segmentation results were assessed against manual scar-tissue tracing using one-patient-out cross-validation. RESULTS: Protocol 2 significantly outperformed Protocol 1 (p value < 0.05), with median sensitivity and Dice similarity coefficient equal to 88.07% [inter-quartile range (IQR) 18.84%] and 71.25% (IQR 31.82%), respectively. DISCUSSION: Both segmentation protocols were able to detect scar tissue in the CMR-LGE images, but higher performance was achieved when limiting the search area to the myocardial region. The findings of this paper represent an encouraging starting point for the use of FCNNs in the segmentation of nonviable scar tissue from CMR-LGE images.


Subject(s)
Cicatrix/diagnostic imaging, Deep Learning, Heart Ventricles/diagnostic imaging, Magnetic Resonance Imaging/methods, Myocardial Ischemia/diagnostic imaging, Contrast Media, Female, Gadolinium, Humans, Image Enhancement/methods, Magnetic Resonance Imaging/statistics & numerical data, Male, Neural Networks, Computer, Retrospective Studies
9.
Comput Med Imaging Graph ; 116: 102405, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38824716

ABSTRACT

Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process, fostering the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of the three predominant FHSPs using a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These achievements suggest that augmenting the training set with our synthetic images could enhance the performance of DL algorithms for FHSP classification to be integrated in real clinical scenarios.
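The Fréchet inception distance quoted above compares Gaussian fits of Inception-network features of real and synthetic images. A hedged sketch of the closed-form distance between two feature sets; the matrices below are illustrative random features, not real Inception activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussian fits of two feature sets:
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2))."""
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2).real   # sqrtm may return tiny imaginary parts
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 8))                 # illustrative feature vectors
fake_good = rng.normal(size=(500, 8))            # same distribution: small FID
fake_bad = rng.normal(3.0, 1.0, size=(500, 8))   # shifted mean: large FID
```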


Subject(s)
Algorithms, Brain, Ultrasonography, Prenatal, Humans, Ultrasonography, Prenatal/methods, Brain/diagnostic imaging, Female, Deep Learning, Pregnancy, Image Processing, Computer-Assisted/methods
10.
Comput Med Imaging Graph ; 113: 102350, 2024 04.
Article in English | MEDLINE | ID: mdl-38340574

ABSTRACT

Recent advances in medical imaging have highlighted the critical importance of algorithms for individual vertebra segmentation on computed tomography (CT) scans. Essential for diagnostic accuracy and treatment planning in orthopaedics, neurosurgery, and oncology, these algorithms face challenges in clinical implementation, including integration into healthcare systems. Consequently, our focus lies in exploring the application of knowledge distillation (KD) methods to train shallower networks capable of efficiently segmenting vertebrae in CT scans. This approach aims to reduce segmentation time, enhance suitability for emergency cases, and optimize computational and memory resource efficiency. Building upon prior research in the field, a two-step segmentation approach was employed. First, the spine's location was determined by predicting a heatmap indicating the probability of each voxel belonging to the spine. Subsequently, vertebrae were segmented iteratively from the top to the bottom of the CT volume over the located spine, using a memory instance to record the already segmented vertebrae. KD was implemented by training a teacher network with performance similar to that found in the literature and distilling this knowledge to a shallower network (the student). Two KD methods were applied: (1) using the soft outputs of both networks and (2) matching logits. Two publicly available datasets, comprising 319 CT scans from 300 patients and a total of 611 cervical, 2387 thoracic, and 1507 lumbar vertebrae, were used. To ensure dataset balance and robustness, effective data augmentation methods were applied, including cleaning the memory instance to replicate the first vertebra segmentation. The teacher network achieved an average Dice similarity coefficient (DSC) of 88.22% and a Hausdorff distance (HD) of 7.71 mm, showcasing performance similar to other approaches in the literature. Through knowledge distillation from the teacher network, the student network's performance improved, with the average DSC increasing from 75.78% to 84.70% and the HD decreasing from 15.17 mm to 8.08 mm. Compared to other methods, our teacher network exhibited up to 99.09% fewer parameters, 90.02% faster inference time, 88.46% shorter total segmentation time, and an 89.36% lower associated carbon dioxide (CO2) emission rate. Our student network, in turn, featured 75.00% fewer parameters than our teacher, resulting in a 36.15% reduction in inference time, a 33.33% decrease in total segmentation time, and a 42.96% reduction in CO2 emissions. This study marks the first exploration of applying KD to the problem of individual vertebra segmentation in CT, demonstrating the feasibility of achieving performance comparable to existing methods using smaller neural networks.
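KD method (1) above matches the temperature-softened output distributions of teacher and student. A minimal NumPy sketch of that soft-target loss term; the logits below are illustrative, and the exact temperature and weighting are assumptions not stated in the abstract:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    s = z / T
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions (the 'soft outputs' KD variant), scaled by T^2 so
    gradient magnitude stays comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return float(T ** 2 * kl.mean())

teacher = np.array([[4.0, 1.0, 0.2]])   # illustrative per-voxel class logits
student = np.array([[2.5, 1.5, 0.5]])
loss = distillation_loss(student, teacher)
```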


Subject(s)
Carbon Dioxide, Tomography, X-Ray Computed, Humans, Tomography, X-Ray Computed/methods, Neural Networks, Computer, Algorithms, Lumbar Vertebrae
11.
Comput Biol Med ; 174: 108430, 2024 May.
Article in English | MEDLINE | ID: mdl-38613892

ABSTRACT

BACKGROUND: To investigate the effectiveness of contrastive learning, in particular SimCLR, in reducing the need for large annotated ultrasound (US) image datasets for fetal standard plane identification. METHODS: We explore the advantage of SimCLR in cases of both low and high inter-class variability, considering at the same time how classification performance varies with the amount of labels used. This evaluation is performed by exploiting contrastive learning through different training strategies. We apply both quantitative and qualitative analyses, using standard metrics (F1-score, sensitivity, and precision), Class Activation Mapping (CAM), and t-Distributed Stochastic Neighbor Embedding (t-SNE). RESULTS: When dealing with high inter-class variability classification tasks, contrastive learning does not bring a significant advantage, whereas it proves relevant for low inter-class variability classification, specifically when initialized with ImageNet weights. CONCLUSIONS: Contrastive learning approaches are typically used when a large amount of unlabeled data is available, which is not representative of US datasets. We showed that SimCLR, either as pre-training with the backbone initialized via ImageNet weights or used in an end-to-end dual task, can positively impact performance over standard transfer learning approaches in a scenario in which the dataset is small and characterized by low inter-class variability.
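SimCLR's training objective pulls together the embeddings of two augmented views of the same image while pushing apart all other pairs (the NT-Xent loss). A minimal NumPy sketch on illustrative embeddings, not actual US features:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR's NT-Xent).
    z1[i] and z2[i] are embeddings of two augmentations of image i."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive of sample i is its other view at index i+n (and vice versa)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return float(loss.mean())

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))
# Nearly identical views should yield a lower loss than unrelated ones.
loss_aligned = nt_xent(views, views + 0.01 * rng.normal(size=(8, 16)))
loss_random = nt_xent(views, rng.normal(size=(8, 16)))
```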


Subject(s)
Ultrasonography, Prenatal, Humans, Ultrasonography, Prenatal/methods, Pregnancy, Female, Machine Learning, Fetus/diagnostic imaging, Algorithms, Image Interpretation, Computer-Assisted/methods, Image Processing, Computer-Assisted/methods
12.
Int J Comput Assist Radiol Surg ; 19(3): 481-492, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38066354

ABSTRACT

PURPOSE: In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing abnormal anastomoses using laser ablation. This surgery is minimally invasive and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. METHODS: To tackle this challenge, we propose a learning-based framework for in vivo fetoscopy frame registration for field-of-view expansion. The novelty of this framework lies in a learning-based keypoint proposal network and an encoding strategy to filter out (i) irrelevant keypoints, based on fetoscopic semantic image segmentation, and (ii) inconsistent homographies. RESULTS: We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries in six different women against the most recent state-of-the-art algorithm, which relies on the segmentation of placental vessels. CONCLUSION: The proposed framework achieves higher performance than the state of the art, paving the way for robust mosaicking to provide surgeons with context awareness during TTTS surgery.
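Frame registration for mosaicking chains per-frame 3x3 homographies. The sketch below shows warping points with a homography and one hypothetical consistency check (rejecting homographies implying extreme scale change); the abstract does not detail the paper's actual filtering criterion, so the threshold rule here is an illustrative assumption:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 points via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]   # divide by the projective coordinate

def is_consistent(H, min_det=0.1, max_det=10.0):
    """Hypothetical sanity check: reject homographies whose upper-left 2x2
    determinant implies an extreme scale jump between consecutive frames."""
    d = abs(np.linalg.det(H[:2, :2]))
    return min_det < d < max_det

identity = np.eye(3)
shift = np.array([[1.0, 0.0, 5.0],
                  [0.0, 1.0, -3.0],
                  [0.0, 0.0, 1.0]])             # pure translation (+5, -3)
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0]])
moved = warp_points(shift, corners)
```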


Subject(s)
Fetofetal Transfusion, Laser Therapy, Pregnancy, Female, Humans, Fetoscopy/methods, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Placenta/surgery, Placenta/blood supply, Laser Therapy/methods, Algorithms
13.
Med Biol Eng Comput ; 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39105884

ABSTRACT

This work proposes a convolutional neural network (CNN) that utilizes different combinations of parametric images computed from cine cardiac magnetic resonance (CMR) images to classify each slice for possible myocardial scar tissue presence. The CNN performance was compared with expert interpretation of CMR images with late gadolinium enhancement (LGE), used as ground truth (GT), on 206 patients (158 scar, 48 control) from Centro Cardiologico Monzino (Milan, Italy), at both slice and patient levels. Left ventricular dynamic features were extracted from non-enhanced cine images using parametric images based on both Fourier and monogenic signal analyses. The CNN, fed with cine images and Fourier-based parametric images, achieved an area under the ROC curve of 0.86 (accuracy 0.79, F1 0.81, sensitivity 0.9, specificity 0.65, and negative (NPV) and positive (PPV) predictive values of 0.83 and 0.77, respectively) for individual slice classification. Remarkably, it exhibited 1.0 prediction accuracy (F1 0.98, sensitivity 1.0, specificity 0.9, NPV 1.0, and PPV 0.97) in classifying patients as control or pathological. The proposed approach represents a first step towards scar detection in contrast-free CMR images. Patient-level results suggest its preliminary potential as a screening tool to guide decisions regarding LGE-CMR prescription, particularly in cases where the indication is uncertain.
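The area under the ROC curve reported above equals the probability that a randomly chosen positive slice scores higher than a randomly chosen negative one (the Mann-Whitney U formulation); a minimal sketch on illustrative scores, not the study's outputs:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the Mann-Whitney U statistic: probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count 0.5)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative slice-level scores (1 = scar present on LGE ground truth)
auc = roc_auc([1, 1, 1, 0, 0], [0.9, 0.8, 0.4, 0.5, 0.2])
```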

14.
Laryngoscope ; 134(6): 2826-2834, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38174772

ABSTRACT

OBJECTIVE: To investigate the potential of deep learning for automatically delineating (segmenting) the superficial extent of laryngeal cancer on endoscopic images and videos. METHODS: A retrospective study was conducted extracting and annotating white light (WL) and Narrow-Band Imaging (NBI) frames to train a segmentation model (SegMENT-Plus). Two external datasets were used for validation. The model's performance was compared with that of two otolaryngology residents. In addition, the model was tested on real intraoperative laryngoscopy videos. RESULTS: A total of 3933 images of laryngeal cancer from 557 patients were used. The model achieved the following median values (interquartile range): Dice Similarity Coefficient (DSC) = 0.83 (0.70-0.90), Intersection over Union (IoU) = 0.83 (0.73-0.90), Accuracy = 0.97 (0.95-0.99), and Inference Speed = 25.6 (25.1-26.1) frames per second. The external testing cohorts comprised 156 and 200 images. SegMENT-Plus performed similarly on all three datasets for DSC (p = 0.05) and IoU (p = 0.07). No significant differences were noticed when separately analyzing WL and NBI test images on DSC (p = 0.06) and IoU (p = 0.78), or when analyzing the model versus the two residents on DSC (p = 0.06) and IoU (Senior vs. SegMENT-Plus, p = 0.13; Junior vs. SegMENT-Plus, p = 1.00). CONCLUSION: SegMENT-Plus can accurately delineate laryngeal cancer boundaries in endoscopic images, with performance equal to that of two otolaryngology residents. The results on the two external datasets demonstrate excellent generalization capabilities. The computation speed of the model allowed its application to videolaryngoscopies, simulating real-time use. Clinical trials are needed to evaluate the role of this technology in surgical practice and resection margin improvement. LEVEL OF EVIDENCE: III Laryngoscope, 134:2826-2834, 2024.
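The Intersection over Union metric used above to score predicted cancer boundaries against annotations is a direct set ratio; a minimal sketch on illustrative binary masks:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union between two binary segmentation masks:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union

pred = np.zeros((5, 5), bool)
pred[1:4, 1:4] = True      # predicted lesion area: 9 pixels
truth = np.zeros((5, 5), bool)
truth[2:5, 2:5] = True     # annotated lesion area: 9 pixels, overlap of 4
score = iou(pred, truth)   # 4 / (9 + 9 - 4) = 2/7
```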


Subject(s)
Deep Learning, Laryngeal Neoplasms, Laryngoscopy, Narrow Band Imaging, Humans, Laryngoscopy/methods, Narrow Band Imaging/methods, Laryngeal Neoplasms/diagnostic imaging, Laryngeal Neoplasms/surgery, Laryngeal Neoplasms/pathology, Retrospective Studies, Video Recording, Male, Female, Middle Aged, Light, Aged
15.
JMIR Aging ; 7: e50537, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38386279

ABSTRACT

BACKGROUND: The rise in life expectancy is associated with an increase in long-term, gradual cognitive decline. Treatment effectiveness is enhanced at the early stage of the disease. Therefore, there is a need for low-cost, ecological solutions for mass screening of community-dwelling older adults. OBJECTIVE: This work aims to exploit automatic analysis of free speech to identify signs of cognitive function decline. METHODS: A sample of 266 participants older than 65 years was recruited in Italy and Spain and divided into 3 groups according to their Mini-Mental State Examination (MMSE) scores. People were asked to tell a story and describe a picture, and voice recordings were used to automatically extract high-level features on different time scales. Based on these features, machine learning algorithms were trained to solve binary and multiclass classification problems using both mono- and cross-lingual approaches. The algorithms were enriched with Shapley Additive Explanations for model explainability. RESULTS: In the Italian data set, healthy participants (MMSE score ≥27) were automatically discriminated from participants with mildly impaired cognitive function (MMSE score 20-26) and from those with moderate to severe impairment of cognitive function (MMSE score 11-19) with accuracy of 80% and 86%, respectively. Slightly lower performance was achieved on the Spanish and multilanguage data sets. CONCLUSIONS: This work proposes a transparent and unobtrusive assessment method, which might be included in a mobile app for large-scale monitoring of cognitive function in older adults. Voice is confirmed to be an important biomarker of cognitive decline due to its noninvasive and easily accessible nature.
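The MMSE thresholds above (≥27 healthy, 20-26 mild, 11-19 moderate to severe) define the three study groups directly; a minimal sketch, where the handling of scores outside those ranges is an assumption not stated in the abstract:

```python
def mmse_group(score):
    """Map an MMSE score to the three groups used in the study."""
    if score >= 27:
        return "healthy"
    if 20 <= score <= 26:
        return "mild impairment"
    if 11 <= score <= 19:
        return "moderate-to-severe impairment"
    return "out of study range"   # assumption: scores <= 10 were not a group

groups = [mmse_group(s) for s in (29, 24, 15)]
```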


Subject(s)
Cognitive Dysfunction, Speech, Humans, Aged, Female, Male, Cognitive Dysfunction/diagnosis, Cross-Sectional Studies, Italy/epidemiology, Aged, 80 and over, Speech/physiology, Spain/epidemiology, Mental Status and Dementia Tests, Machine Learning, Algorithms
16.
Med Image Anal ; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulating pathological anastomoses to restore physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop, and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus, and background classes, from 18 in vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and was the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline overall performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis, and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.


Subject(s)
Fetofetal Transfusion , Placenta , Female , Humans , Pregnancy , Algorithms , Fetofetal Transfusion/diagnostic imaging , Fetofetal Transfusion/surgery , Fetofetal Transfusion/pathology , Fetoscopy/methods , Fetus , Placenta/diagnostic imaging
17.
Article in English | MEDLINE | ID: mdl-38083494

ABSTRACT

The identification of fetal-head standard planes (FHSPs) from ultrasound (US) images is of fundamental importance to visualize cerebral structures and diagnose neural anomalies during gestation in a standardized way. To support the activity of healthcare operators, deep-learning algorithms have been proposed to classify these planes. To date, the translation of such algorithms into clinical practice is hampered by several factors, including the lack of large annotated datasets for training robust and generalizable algorithms. This paper proposes an approach to generate synthetic FHSP images with a conditional generative adversarial network (cGAN), using class activation maps (CAMs) obtained from FHSP classification algorithms as the cGAN conditional prior. Using the largest publicly available FHSP dataset, we generated realistic images of the three common FHSPs: trans-cerebellum, trans-thalamic and trans-ventricular. The evaluation through t-SNE shows the potential of the proposed approach to attenuate the problem of limited availability of annotated FHSP images.
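The CAM-as-conditional-prior idea can be illustrated with the classic class activation map computation (a weighted sum of the final convolutional feature maps by the classifier weights of the target class). The shapes, random toy inputs and function name below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Classic CAM: combine conv feature maps with the classifier
    weights of the chosen class, keep positive evidence, normalise.
    Assumed shapes: features (C, H, W), fc_weights (num_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)              # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()             # normalise to [0, 1]
    return cam

rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))             # toy conv features
w = rng.random((3, 8))                    # toy classifier weights (3 FHSP classes)
cam = class_activation_map(feats, w, class_idx=1)
print(cam.shape, cam.min() >= 0, cam.max() <= 1.0)
```

In the paper's setting a map like this would be fed to the cGAN as the conditioning image, steering generation toward the anatomy that the classifier attends to.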


Subject(s)
Algorithms , Brain , Female , Pregnancy , Humans , Brain/diagnostic imaging , Ultrasonography, Prenatal/methods , Cerebellum , Fetus
18.
Health Policy ; 127: 80-86, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36509555

ABSTRACT

Industry 4.0 technologies are expected to enhance healthcare quality at the minimum feasible cost by using innovative solutions based on a fruitful exchange of knowledge and resources among institutions, firms and academia. These collaborative mechanisms are likely to occur in an innovation ecosystem where different stakeholders and resources interact to provide ground-breaking solutions to the market. The paper proposes a framework for studying the creation and development of innovation ecosystems in the healthcare sector, using a set of interrelated dimensions including technology, value, and capabilities within a Triple-Helix model guided by focal actors. The model is applied to an exemplary Italian innovation ecosystem providing cloud- and artificial intelligence-based solutions to general practitioners (GPs) under the focal role of the Italian association of GPs. Primary and secondary data are examined, starting from the innovation ecosystem's origins and continuing until the COVID-19 crisis. The findings show that the pandemic was the turning point that altered the ecosystem's dimensions in the search for immediate solutions for monitoring health conditions and organizing the booking of swabs and vaccines. The data triangulation points out the technical, organizational, and administrative barriers hindering the widespread adoption of these solutions at the national and regional levels, revealing several implications for health policy.


Subject(s)
COVID-19 , Humans , Ecosystem , Health Care Sector , Artificial Intelligence , Technology
19.
J Neural Eng ; 20(2)2023 04 04.
Article in English | MEDLINE | ID: mdl-36893458

ABSTRACT

Objective. The optic nerve is a good location for a visual neuroprosthesis. It can be targeted when a subject cannot receive a retinal prosthesis, and it is less invasive than a cortical implant. The effectiveness of an electrical neuroprosthesis depends on the combination of stimulation parameters, which must be optimized; one optimization strategy is to perform closed-loop stimulation using the evoked cortical response as feedback. However, it is necessary to identify target cortical activation patterns and to associate the cortical activity with the visual stimuli present in the subject's visual field. Visual stimulus decoding should be performed over large areas of the visual cortex, with a method as translational as possible so that the study can later be shifted to human subjects. The aim of this work is to develop an algorithm that meets these requirements and can be leveraged to automatically associate a cortical activation pattern with the visual stimulus that generated it. Approach. Three mice were presented with ten different visual stimuli, and their primary visual cortex response was recorded using wide-field calcium imaging. Our decoding algorithm relies on a convolutional neural network (CNN), trained to classify the visual stimuli from the corresponding wide-field images. Several experiments were performed to identify the best training strategy and to investigate the possibility of generalization. Main results. The best classification accuracy was 75.38% ± 4.77%, obtained by pre-training the CNN on the MNIST digits dataset and fine-tuning it on our dataset. Generalization was possible by pre-training the CNN to classify the Mouse 1 dataset and fine-tuning it on Mouse 2 and Mouse 3, with accuracies of 64.14% ± 10.81% and 51.53% ± 6.48%, respectively. Significance. The combination of wide-field calcium imaging and CNNs can be used to classify the cortical responses to simple visual stimuli and might be a viable alternative to existing decoding methodologies.
It also allows us to consider the cortical activation as reliable feedback in future optic nerve stimulation experiments.
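A minimal sketch of the pretrain-then-fine-tune strategy described above, with a plain softmax classifier and synthetic data standing in for MNIST and the wide-field images; every name, shape and dataset here is an illustrative assumption, not the paper's CNN pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_softmax(X, y, W=None, n_classes=10, lr=0.5, epochs=200):
    """Minimal softmax classifier trained by gradient descent.
    Passing W warm-starts training, mimicking pretrain-then-fine-tune."""
    if W is None:
        W = np.zeros((X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * X.T @ (p - onehot) / len(X)  # cross-entropy gradient step
    return W

# Source task (stand-in for MNIST): class means on distinct axes
n_classes, dim = 10, 20
means = np.eye(n_classes, dim) * 3
y_src = rng.integers(0, n_classes, 500)
X_src = means[y_src] + rng.normal(size=(500, dim))
W_pre = train_softmax(X_src, y_src)

# Target task (stand-in for wide-field responses): related, shifted means
y_tgt = rng.integers(0, n_classes, 100)
X_tgt = means[y_tgt] * 0.8 + rng.normal(size=(100, dim))
W_ft = train_softmax(X_tgt, y_tgt, W=W_pre.copy(), epochs=50)

acc = (np.argmax(X_tgt @ W_ft, axis=1) == y_tgt).mean()
print(f"fine-tuned accuracy: {acc:.2f}")
```

The warm start matters when the target dataset is small, which is exactly the situation with a handful of recorded mice: the pretrained weights already encode a related decision structure that brief fine-tuning adapts.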


Subject(s)
Calcium , Visual Cortex , Humans , Animals , Mice , Neural Networks, Computer , Algorithms , Visual Cortex/physiology , Visual Fields
20.
Comput Biol Med ; 163: 107194, 2023 09.
Article in English | MEDLINE | ID: mdl-37421736

ABSTRACT

BACKGROUND AND OBJECTIVES: Patients suffering from neurological diseases may develop dysarthria, a motor speech disorder affecting the execution of speech. Close, quantitative monitoring of dysarthria evolution is crucial for enabling clinicians to promptly implement patient-management strategies and for maximizing the effectiveness and efficiency of communication functions in terms of restoration, compensation or adjustment. In the clinical assessment of orofacial structures and functions, at rest or during speech and non-speech movements, a qualitative evaluation is usually performed through visual observation. METHODS: To overcome the limitations posed by qualitative assessments, this work presents a store-and-forward self-service telemonitoring system that integrates, within its cloud architecture, a convolutional neural network (CNN) for analyzing video recordings acquired by individuals with dysarthria. This architecture - called facial landmark Mask RCNN - aims at locating facial landmarks as a prior for assessing the orofacial functions related to speech and examining dysarthria evolution in neurological diseases. RESULTS: When tested on the Toronto NeuroFace dataset, a publicly available annotated dataset of video recordings from patients with amyotrophic lateral sclerosis (ALS) and stroke, the proposed CNN achieved a normalized mean error of 1.79 in localizing the facial landmarks. We also tested our system in a real-life scenario on 11 bulbar-onset ALS subjects, obtaining promising outcomes in terms of facial landmark position estimation. DISCUSSION AND CONCLUSIONS: This preliminary study represents a relevant step towards the use of remote tools to support clinicians in monitoring the evolution of dysarthria.
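A common definition of the normalized mean error reported above can be sketched as follows; the normalising distance and toy coordinates are assumptions for illustration, since the abstract does not specify the paper's exact normaliser.

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_dist):
    """NME: mean Euclidean distance between predicted and ground-truth
    landmarks, divided by a normalising distance (e.g. an inter-ocular
    distance). pred, gt: (N, 2) arrays of landmark coordinates."""
    errors = np.linalg.norm(pred - gt, axis=1)  # per-landmark pixel error
    return float(errors.mean() / norm_dist)

# Toy example: three landmarks, two shifted by 1 and 2 pixels
gt = np.array([[10.0, 10.0], [20.0, 10.0], [15.0, 20.0]])
pred = gt + np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
print(normalized_mean_error(pred, gt, norm_dist=10.0))  # → 0.1
```

Normalising makes the error comparable across faces recorded at different distances and resolutions, which is essential for a telemonitoring setting where acquisition conditions vary per patient.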


Subject(s)
Amyotrophic Lateral Sclerosis , Dysarthria , Humans , Dysarthria/diagnosis , Cloud Computing , Speech , Video Recording