Results 1 - 6 of 6
1.
Bioengineering (Basel) ; 10(3)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36978673

ABSTRACT

The SARS-CoV-2 pandemic challenged health systems worldwide, calling for practical, fast, and highly reliable diagnostic instruments to support medical personnel. The disease features a long incubation period and a high contagion rate, causing bilateral multi-focal interstitial pneumonia that often progresses to acute respiratory distress syndrome (ARDS) and has caused hundreds of thousands of casualties worldwide. Guidelines for first-line diagnosis of pneumonia suggest chest X-rays (CXR) for patients exhibiting symptoms; potential alternatives include Computed Tomography (CT) scans and Lung UltraSound (LUS). Deep learning (DL) has proven helpful in diagnosis using CT scans, LUS, and CXR, with CT commonly yielding the most precise results. However, CXR and CT scans present several drawbacks, including high costs, whereas radiation-free LUS imaging requires high expertise and is therefore underutilised by physicians, despite its demonstrated strong correlation with CT scans and its reliability in pneumonia detection, even in the early stages. Here, we present an LUS video-classification approach based on contemporary DL strategies, developed in close collaboration with the Emergency Department (ED) of Fondazione IRCCS Policlinico San Matteo in Pavia. This research addressed the detection of SARS-CoV-2 patterns, ranked according to three severity scales, using a reliable dataset of ultrasound clips from linear and convex probes: 5400 clips from 450 hospitalised subjects. The main contributions of this study are the adoption of a standardised severity ranking scale to evaluate pneumonia, an evaluation that relies on video summarisation through key-frame selection algorithms, and the design and development of a video-classification architecture that emerged as the most promising, whereas the literature primarily concentrates on frame-level pattern recognition. Using advanced techniques such as transfer learning and data augmentation, we achieved an F1 score above 89% across all classes.
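The video-summarisation step mentioned above can be sketched with a minimal key-frame selection heuristic. This is an illustrative greedy frame-differencing approach, not the authors' algorithm (the abstract does not specify one); the `threshold` value and grayscale-frame representation are assumptions.

```python
import numpy as np

def select_key_frames(frames, threshold=12.0):
    """Greedy key-frame selection: keep a frame when its mean absolute
    pixel difference from the last kept frame exceeds `threshold`.
    `frames` is an iterable of equally-sized grayscale numpy arrays."""
    key_frames = []
    last = None
    for idx, frame in enumerate(frames):
        f = frame.astype(np.float32)
        if last is None or np.mean(np.abs(f - last)) > threshold:
            key_frames.append(idx)  # record the index of the kept frame
            last = f
    return key_frames
```

Selecting a handful of representative frames this way reduces a clip to a short summary that a frame- or video-level classifier can process within real-time constraints.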

2.
Sensors (Basel) ; 22(22)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36433516

ABSTRACT

Currently, one of the most common causes of death worldwide is cancer. Innovative methods supporting early and accurate cancer detection are needed to increase patients' recovery rates. Several studies have shown that medical Hyperspectral Imaging (HSI) combined with artificial intelligence algorithms is a powerful tool for cancer detection. Various preprocessing methods are commonly applied to hyperspectral data to improve algorithm performance; however, there is currently no standard for these methods, and no studies have compared them in the medical field so far. In this work, we evaluated different combinations of preprocessing steps, including spatial and spectral smoothing, Min-Max scaling, Standard Normal Variate normalization, and a median spatial smoothing technique, with the goal of improving tumor detection in three HSI databases concerning colorectal, esophagogastric, and brain cancers. Machine learning and deep learning models were used to perform the pixel-wise classification. The results showed that the choice of preprocessing method affects tumor-identification performance. Median Filter preprocessing performed slightly better at identifying colorectal tumors (area under the curve of 0.94), whereas esophagogastric and brain tumors were more accurately identified using Min-Max scaling preprocessing (areas under the curve of 0.93 and 0.92, respectively). However, the Median Filter smooths sharp spectral features, resulting in high variability in classification performance. Therefore, based on these results, obtained with different databases acquired by different HSI instrumentation, the most relevant preprocessing technique identified in this work is Min-Max scaling.
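The two per-spectrum preprocessing techniques named in the abstract, Min-Max scaling and Standard Normal Variate (SNV) normalization, can be sketched as follows; this is a generic illustration (the exact variants used in the study may differ), assuming each row of `spectra` is one pixel's spectrum.

```python
import numpy as np

def min_max_scale(spectra):
    """Scale each pixel spectrum independently to [0, 1].
    `spectra` has shape (n_pixels, n_bands)."""
    mn = spectra.min(axis=1, keepdims=True)
    mx = spectra.max(axis=1, keepdims=True)
    return (spectra - mn) / (mx - mn + 1e-12)  # epsilon avoids division by zero

def snv(spectra):
    """Standard Normal Variate: zero mean, unit variance per spectrum."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / (std + 1e-12)
```

Both operate row-wise, so each pixel is normalized against its own spectral range rather than against the whole image, which is what makes them robust to illumination differences between acquisitions.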


Subject(s)
Artificial Intelligence , Brain Neoplasms , Humans , Databases, Factual , Algorithms , Diagnostic Imaging
3.
Sensors (Basel) ; 22(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36236240

ABSTRACT

Cancer originates from the uncontrolled growth of healthy cells into a mass. Chromophores, such as hemoglobin and melanin, characterize skin spectral properties, allowing lesions to be classified into different etiologies. Hyperspectral imaging systems gather skin-reflected and transmitted light across several wavelength ranges of the electromagnetic spectrum, enabling potential skin-lesion differentiation through machine learning algorithms. Challenged by limited data availability and small inter- and intra-tumoral variability, here we introduce a pipeline based on deep neural networks to diagnose hyperspectral skin-cancer images, targeting a handheld device equipped with a low-power graphical processing unit for routine clinical testing. Enhanced by data augmentation, transfer learning, and hyperparameter tuning, the proposed architectures aim to meet and improve the well-known dermatologist-level detection performance on both benign-malignant and multiclass classification tasks, while diagnosing hyperspectral data under real-time constraints. Experiments show 87% sensitivity and 88% specificity for benign-malignant classification, and specificity above 80% in the multiclass scenario. AUC measurements suggest classification performance above 90% with adequate thresholding. For binary segmentation, we measured skin Dice and IoU scores higher than 90%. We estimated that segmenting the epidermal lesions with the U-Net++ architecture takes at most 1.21 s while consuming 5 W, meeting the imposed time limit. Hence, we can diagnose hyperspectral epidermal data under real-time constraints.
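The Dice and IoU segmentation metrics reported above are standard overlap measures between a predicted mask and a ground-truth mask; a minimal sketch, assuming binary numpy masks of equal shape:

```python
import numpy as np

def dice_iou(pred, target):
    """Dice coefficient and IoU for binary segmentation masks
    (boolean or {0, 1} numpy arrays of the same shape)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()   # overlap area
    union = np.logical_or(pred, target).sum()    # combined area
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-12)
    iou = inter / (union + 1e-12)
    return dice, iou
```

Dice weights the overlap against the sum of the two mask areas, IoU against their union, so Dice is always greater than or equal to IoU for the same pair of masks.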


Subject(s)
Melanoma , Skin Neoplasms , Dermoscopy/methods , Humans , Melanins , Neural Networks, Computer , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology
4.
Sensors (Basel) ; 22(16)2022 Aug 17.
Article in English | MEDLINE | ID: mdl-36015906

ABSTRACT

In recent years, researchers have designed several artificial intelligence solutions for healthcare applications, which have often evolved into functional tools for clinical practice. Deep learning (DL) methods are well suited to processing the large amounts of data acquired by wearable devices, smartphones, and other sensors employed in different medical domains. Conceived to serve as a diagnostic tool and surgical guidance, hyperspectral imaging has emerged as a non-contact, non-ionizing, and label-free technology. However, the lack of large datasets for efficiently training models limits DL applications in the medical field; hence, its usage with hyperspectral images is still at an early stage. We propose a deep convolutional generative adversarial network that generates synthetic hyperspectral images of epidermal lesions, targeting skin-cancer diagnosis and overcoming the challenge of training DL architectures on small datasets. Experimental results show the effectiveness of the proposed framework, which is capable of generating synthetic data to train DL classifiers.


Subject(s)
Artificial Intelligence , Skin Neoplasms , Delivery of Health Care , Humans , Neural Networks, Computer , Skin Neoplasms/diagnosis
5.
Comput Biol Med ; 136: 104742, 2021 09.
Article in English | MEDLINE | ID: mdl-34388462

ABSTRACT

The Covid-19 European outbreak in February 2020 challenged the world's health systems, eliciting an urgent need for effective and highly reliable diagnostic instruments to support medical personnel. Deep learning (DL) has been demonstrated to be useful for diagnosis using both computed tomography (CT) scans and chest X-rays (CXR), with the former typically yielding more accurate results. However, the pivotal role of the CT scan during the pandemic comes with several drawbacks, including high cost and cross-contamination problems. Radiation-free lung ultrasound (LUS) imaging, which requires high expertise and is thus underutilised, has demonstrated a strong correlation with CT scan results and a high reliability in pneumonia detection, even in the early stages. In this study, we developed a system based on modern DL methodologies in close collaboration with the Emergency Department (ED) of Fondazione IRCCS Policlinico San Matteo in Pavia. Using a reliable dataset comprising ultrasound clips from linear and convex probes, totalling 2908 frames from 450 hospitalised patients, we investigated the detection of Covid-19 patterns and their ranking on two severity scales. This study differs from other research projects in its novel approach involving four and seven classes. Patients admitted to the ED underwent 12 LUS examinations in different chest areas, each evaluated according to standardised severity scales. We adopted residual convolutional neural networks (CNNs), transfer learning, and data augmentation techniques. Through methodical hyperparameter tuning, we produced state-of-the-art results, with F1 scores, averaged over the classes considered, exceeding 98%, manifesting stable precision and recall.
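The class-averaged F1 score reported above corresponds to macro-averaging; a minimal sketch of computing it from a multiclass confusion matrix (an illustration of the metric, not the authors' evaluation code):

```python
import numpy as np

def macro_f1(confusion):
    """Macro-averaged F1 from a square confusion matrix
    (rows: true class, columns: predicted class)."""
    confusion = np.asarray(confusion, dtype=float)
    tp = np.diag(confusion)                          # correct predictions per class
    precision = tp / (confusion.sum(axis=0) + 1e-12)  # per predicted class
    recall = tp / (confusion.sum(axis=1) + 1e-12)     # per true class
    f1 = 2 * precision * recall / (precision + recall + 1e-12)
    return f1.mean()  # unweighted average over classes
```

Because the average is unweighted, a high macro F1 across four or seven severity classes implies that precision and recall are stable even for the less frequent classes.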


Subject(s)
COVID-19 , Deep Learning , Pneumonia , Humans , Lung/diagnostic imaging , Pneumonia/diagnostic imaging , Reproducibility of Results , SARS-CoV-2
6.
Diagnostics (Basel) ; 11(5)2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33922829

ABSTRACT

BACKGROUND: COVID-19 is an emerging infectious disease that is heavily challenging health systems worldwide. Admission Arterial Blood Gas (ABG) analysis and Lung Ultrasound (LUS) can be of great help in clinical decision making, especially during the current pandemic and the consequent overcrowding of the Emergency Department (ED). The aim of the study was to demonstrate the capability of the alveolar-to-arterial oxygen difference (AaDO2) to predict the need for subsequent oxygen support and survival in patients with COVID-19 infection, especially in the presence of baseline normal PaO2/FiO2 ratio (P/F) values. METHODS: A cohort of 223 swab-confirmed COVID-19 patients underwent clinical evaluation, blood tests, ABG, and LUS in the ED. The LUS score was derived from 12 ultrasound lung windows. AaDO2 was calculated as AaDO2 = FiO2 × (atmospheric pressure − H2O pressure) − (PaCO2/R) − PaO2. Endpoints were the need for subsequent oxygen support and survival. RESULTS: A close relationship between AaDO2 and P/F and between AaDO2 and LUS score was observed (R2 = 0.88 and R2 = 0.67, respectively; p < 0.001 for both). In the subgroup of patients with P/F between 300 and 400, 94.7% (n = 107) had high AaDO2 values, and 51.4% (n = 55) received oxygen support, with 2 ICU admissions and 10 deaths. According to ROC analysis, AaDO2 > 39.4 had 83.6% sensitivity and 90.5% specificity (AUC 0.936; p < 0.001) in predicting subsequent oxygen support, whereas a LUS score > 6 showed 89.7% sensitivity and 75.0% specificity (AUC 0.896; p < 0.001). Kaplan-Meier curves showed different mortality in the AaDO2 subgroups (p = 0.0025). CONCLUSIONS: LUS and AaDO2 are easy and effective tools that allow bedside risk stratification in patients with COVID-19, especially when P/F values, signs, and symptoms are not indicative of severe lung dysfunction.
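The AaDO2 formula given in the METHODS section can be computed directly; a minimal sketch, where the default barometric pressure (760 mmHg), water-vapour pressure (47 mmHg), and respiratory quotient R = 0.8 are conventional values assumed here, not stated in the abstract.

```python
def aado2(fio2, paco2, pao2, atm_pressure=760.0, h2o_pressure=47.0, r=0.8):
    """Alveolar-to-arterial oxygen difference (mmHg), per the formula in
    the abstract: AaDO2 = FiO2 * (Patm - PH2O) - PaCO2 / R - PaO2.
    fio2 is a fraction (e.g. 0.21 on room air); pressures are in mmHg."""
    return fio2 * (atm_pressure - h2o_pressure) - paco2 / r - pao2
```

On room air with PaCO2 = 40 mmHg and PaO2 = 90 mmHg, this yields an AaDO2 of about 9.7 mmHg, well below the study's 39.4 mmHg cut-off for predicting subsequent oxygen support.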
