ABSTRACT
This paper describes the methodology followed to implement social distancing recommendations in the COVID-19 context along the beaches of the coast of Gipuzkoa (Basque Country, Northern Spain) by means of automated coastal videometry. The coastal videometry network of Gipuzkoa, based on the KostaSystem technology, covers 14 beaches with 12 stations along 50 km of coastline. A beach user detection algorithm based on a machine learning approach was developed, allowing automatic assessment of beach attendance in real time at regional scale. For each beach, a simple occupancy class (low, medium, high, or full) was estimated as a function of the beach user density (BUD), obtained in real time from the images, and the maximum beach carrying capacity (BCC), estimated from the minimum social distance recommended by the authorities. This information was displayed in real time via a web/mobile app and simultaneously sent to the beach managers who controlled beach access. The results showed strong receptivity from beach users (more than 50,000 app downloads) and that real-time information on beach occupation can help in short-term/daily beach management. In the longer term, the analysis of this information provides the data needed for beach carrying capacity management and can help the authorities control beach occupation and determine maximum capacities.
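As a rough illustration of the occupancy classification described above, the following Python sketch maps a real-time beach user count to one of the four classes; the 25/50/90% thresholds, the 2 m distance and the circular area-per-user assumption are illustrative choices, not the operational KostaSystem parameters.

```python
import math

def carrying_capacity(usable_area_m2: float, min_distance_m: float = 2.0) -> float:
    """Maximum number of users if each one is allotted a circle of radius
    min_distance_m / 2, so that neighbouring users stay min_distance_m apart.
    (Assumed packing model, for illustration only.)"""
    area_per_user = math.pi * (min_distance_m / 2.0) ** 2
    return usable_area_m2 / area_per_user

def occupancy_label(user_count: float, capacity: float) -> str:
    """Map the current beach user count to one of the four occupancy classes.
    The 25/50/90 % thresholds are illustrative assumptions."""
    ratio = user_count / capacity
    if ratio < 0.25:
        return "low"
    if ratio < 0.50:
        return "medium"
    if ratio < 0.90:
        return "high"
    return "full"

bcc = carrying_capacity(usable_area_m2=12_000, min_distance_m=2.0)
print(occupancy_label(user_count=1500, capacity=bcc))
```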
ABSTRACT
The automatic detection of pulse during out-of-hospital cardiac arrest (OHCA) is necessary for the early recognition of the arrest and the detection of return of spontaneous circulation (end of the arrest). The only signal available in every defibrillator and valid for the detection of pulse is the electrocardiogram (ECG). In this study we propose two deep neural network (DNN) architectures to detect pulse using short ECG segments (5 s), i.e., to classify the rhythm into pulseless electrical activity (PEA) or pulse-generating rhythm (PR). A total of 3914 5-s ECG segments, 2372 PR and 1542 PEA, were extracted from 279 OHCA episodes. Data were partitioned patient-wise into training (80%) and test (20%) sets. The first DNN architecture was a fully convolutional neural network, and the second architecture added a recurrent layer to learn temporal dependencies. Both DNN architectures were tuned using Bayesian optimization, and the results for the test set were compared to state-of-the-art PR/PEA discrimination algorithms based on machine learning and hand-crafted features. The PR/PEA classifiers were evaluated in terms of sensitivity (Se) for PR, specificity (Sp) for PEA, and balanced accuracy (BAC), the average of Se and Sp. The Se/Sp/BAC of the DNN architectures were 94.1%/92.9%/93.5% for the first one, and 95.5%/91.6%/93.5% for the second one. Both architectures improved on state-of-the-art methods by more than 1.5 BAC points.
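A minimal Keras sketch of the two architecture families described above (a fully convolutional classifier and a variant with a recurrent layer) is given below; the filter counts, kernel sizes and the assumed 250 Hz sampling rate are placeholders, since the actual hyperparameters were selected by Bayesian optimization.

```python
import tensorflow as tf

SAMPLES = 5 * 250  # 5-s ECG segment at an assumed 250 Hz sampling rate

def build_classifier(recurrent: bool = False) -> tf.keras.Model:
    """Binary PR/PEA classifier; recurrent=True adds an LSTM layer for
    temporal dependencies, as in the second architecture."""
    inputs = tf.keras.Input(shape=(SAMPLES, 1))
    x = inputs
    for filters in (16, 32, 64):  # illustrative filter counts
        x = tf.keras.layers.Conv1D(filters, kernel_size=7, padding="same",
                                   activation="relu")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.MaxPooling1D(pool_size=4)(x)
    if recurrent:
        x = tf.keras.layers.LSTM(32)(x)
    else:
        x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # PR vs PEA
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```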
ABSTRACT
Colorectal cancer is one of the leading causes of death worldwide, but, fortunately, early detection greatly increases survival rates, with the adenoma detection rate being a surrogate marker for colonoscopy quality. Artificial intelligence and deep learning (DL) methods have been applied with great success to improve polyp detection and localization and, therefore, the adenoma detection rate. In this regard, a comparison with clinical experts is required to prove the added value of such systems. Nevertheless, there is no standardized comparison in a laboratory setting before their clinical validation. The ClinExpPICCOLO dataset comprises 65 unedited endoscopic images that represent the clinical setting. They include white light imaging and narrow band imaging, with one third of the images containing a lesion but, unlike other public datasets, the lesions do not appear well centered in the image. Together with the dataset, an expert clinical performance baseline has been established from the performance of 146 gastroenterologists, who were asked to locate the lesions in the selected images. Results show statistically significant differences between experience groups. Expert gastroenterologists' accuracy was 77.74, while sensitivity and specificity were 86.47 and 74.33, respectively. These values can be taken as minimum values a DL method should reach before performing a clinical trial in the hospital setting.
ABSTRACT
Plant fungal diseases are one of the most important causes of crop yield losses. Therefore, plant disease identification algorithms have been seen as a useful tool to detect them at early stages and mitigate their effects. Although deep-learning based algorithms can achieve high detection accuracies, they require large and manually annotated image datasets, which are not always available, especially for rare and new diseases. This study focuses on the development of a plant disease detection algorithm and strategy requiring few plant images (few-shot learning). We extend previous work by using a novel, challenging dataset containing more than 100,000 images. This dataset includes images of leaves, panicles and stems of five different crops (barley, corn, rape seed, rice, and wheat) for a total of 17 different diseases, where each disease is shown at different disease stages. In this study, we propose a deep metric learning based method to extract latent space representations from plant diseases with just a few images by means of a Siamese network and a triplet loss function. This enhances previous methods that require a support dataset containing a high number of annotated images to perform metric learning and few-shot classification. The proposed method was compared against a traditional network trained with the cross-entropy loss function. Exhaustive experiments were performed to validate and measure the benefits of metric learning techniques over classical methods. Results show that the features extracted by the metric learning based approach present better discriminative and clustering properties. Davies-Bouldin index and Silhouette score values show that the triplet loss network improves the clustering properties with respect to the categorical cross-entropy loss. Overall, the triplet loss approach improves the DB index by 22.7% and the Silhouette score by 166.7% compared to the categorical cross-entropy loss model. Moreover, the F-score obtained from the Siamese network with the triplet loss is better than that of classical approaches when there are few images for training, with a 6% improvement in the mean F-score. Siamese networks with triplet loss improve the ability to learn different plant diseases using few images of each class. These networks based on metric learning techniques improve clustering and classification results over traditional categorical cross-entropy loss networks for plant disease identification.
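The following sketch illustrates the metric-learning ingredients named above, a shared (Siamese) embedding network and the triplet loss; the backbone, embedding size and margin are illustrative assumptions rather than the configuration used in the study.

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin: float = 0.2):
    """Pull anchor/positive (same disease) together and push anchor/negative
    (different disease) apart by at least `margin` (assumed value)."""
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(d_pos - d_neg + margin, 0.0))

def embedding_net(input_shape=(224, 224, 3), embedding_dim: int = 128) -> tf.keras.Model:
    """Shared embedding branch applied to each image of a triplet
    (backbone and embedding size are placeholders)."""
    backbone = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                                 input_shape=input_shape,
                                                 pooling="avg")
    out = tf.keras.layers.Dense(embedding_dim)(backbone.output)
    out = tf.keras.layers.Lambda(lambda v: tf.math.l2_normalize(v, axis=-1))(out)
    return tf.keras.Model(backbone.input, out)
```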
ABSTRACT
Colorectal cancer has one of the highest incidence rates among cancers worldwide. Colonoscopy relies on histopathological analysis of hematoxylin-eosin (H&E) images of the removed tissue. Novel techniques such as multi-photon microscopy (MPM) show promising results for performing real-time optical biopsies. However, clinicians are not used to this imaging modality, and the correlation between MPM and H&E information is not clear. The objective of this paper is to describe and make publicly available an extensive dataset of fully co-registered H&E and MPM images that allows the research community to analyze the relationship between MPM and H&E histopathological images and the effect of the semantic gap that prevents clinicians from correctly diagnosing MPM images. The dataset provides fully scanned tissue images at 10x optical magnification (0.5 µm/px) from 50 samples of lesions obtained by colonoscopies and colectomies. The diagnostic capabilities of two-photon fluorescence (TPF) and H&E images were compared. Additionally, TPF tiles were virtually stained into H&E-like images by means of a deep-learning model. A panel of 5 expert pathologists classified images from the different modalities into three classes (healthy, adenoma/hyperplastic, and adenocarcinoma). Results showed that the pathologists' performance on MPM images was 65% of their H&E performance, while the virtual staining method reached 90%. MPM imaging can provide appropriate information for diagnosing colorectal cancer without the need for H&E staining; however, the existing semantic gap between modalities needs to be addressed.
ABSTRACT
BACKGROUND: Alzheimer's disease is a degenerative dementing disorder that starts with mild memory impairment and progresses to a total loss of mental and physical faculties. The sooner the diagnosis is made, the better for the patient, as preventive actions and treatment can be started earlier. Although tests such as the Mini-Mental State Examination are commonly used for early identification, diagnosis relies on magnetic resonance imaging (MRI) brain analysis. METHODS: Public initiatives such as the OASIS (Open Access Series of Imaging Studies) collection provide neuroimaging datasets openly available for research purposes. In this work, a new method based on deep learning and image processing techniques for MRI-based Alzheimer's diagnosis is proposed and compared with previous works in the literature. RESULTS: Our method achieves a balanced accuracy (BAC) of up to 0.93 for image-based automated diagnosis of the disease, and a BAC of 0.88 for establishing the disease stage (healthy, very mild, and severe). CONCLUSIONS: The results obtained surpass state-of-the-art proposals using the OASIS collection. This demonstrates that deep learning-based strategies are an effective tool for building a robust solution for MRI-based Alzheimer's-assisted diagnosis.
ABSTRACT
BACKGROUND: Colorectal cancer has a high incidence rate worldwide, with over 1.8 million new cases and 880,792 deaths in 2018. Fortunately, early detection significantly increases the survival rate, reaching a cure rate of 90% when the disease is diagnosed at a localized stage. Colonoscopy is the gold standard technique for the detection and removal of colorectal lesions with the potential to evolve into cancer. When polyps are found in a patient, the current procedure is their complete removal. However, in this process, gastroenterologists cannot ensure complete resection and clean margins, which are confirmed by histopathological analysis of the removed tissue performed in the laboratory. AIMS: In this paper, we demonstrate the capability of multiphoton microscopy (MPM) technology to provide imaging biomarkers that can be extracted by deep learning techniques to identify malignant neoplastic colon lesions and distinguish them from healthy, hyperplastic, or benign neoplastic tissue, without the need for histopathological staining. MATERIALS AND METHODS: To this end, we present a novel public MPM dataset containing 14,712 images obtained from 42 patients and grouped into 2 classes. A convolutional neural network is trained on this dataset, and a spatially coherent prediction scheme is applied for performance improvement. RESULTS: We obtained a sensitivity of 0.8228 ± 0.1575 and a specificity of 0.9114 ± 0.0814 for detecting malignant neoplastic lesions. We also validated this approach for estimating the self-confidence of the network in its own predictions, obtaining a mean sensitivity of 0.8697 and a mean specificity of 0.9524 with 18.67% of the images classified as uncertain. CONCLUSIONS: This work lays the foundations for performing in vivo optical colon biopsies by combining this novel imaging technology with deep learning algorithms, hence avoiding unnecessary polyp resection and allowing in situ diagnostic assessment.
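The exact self-confidence mechanism is not detailed in the abstract; the sketch below shows one plausible reading, in which predictions whose probability lies close to the decision boundary are deferred as uncertain. The 0.75 threshold is an assumption made only for illustration.

```python
import numpy as np

def confident_predictions(probs: np.ndarray, threshold: float = 0.75):
    """probs: predicted probability of the malignant class per image.
    Images whose confidence (distance from the 0.5 boundary) falls below
    `threshold` are deferred as uncertain. Threshold is an assumed value."""
    confidence = np.maximum(probs, 1.0 - probs)
    certain = confidence >= threshold
    labels = (probs >= 0.5).astype(int)  # 1 = malignant neoplastic lesion
    return labels[certain], certain

probs = np.array([0.97, 0.55, 0.12, 0.48, 0.88])  # toy example
labels, certain_mask = confident_predictions(probs)
print(f"{(~certain_mask).mean():.0%} of images deferred as uncertain")
```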
ABSTRACT
BACKGROUND: Deep learning diagnostic algorithms are achieving results comparable to those of human experts in a wide variety of tasks, but they still require a huge amount of well-annotated data for training, which is often not affordable. Metric learning techniques have allowed a reduction in the required amount of annotated data, enabling few-shot learning over deep learning architectures. AIMS AND OBJECTIVES: In this work, we analyze state-of-the-art loss functions such as triplet loss, contrastive loss, and multi-class N-pair loss for the visual embedding extraction of hematoxylin and eosin (H&E) microscopy images, and we propose a novel constellation loss function that takes advantage of the visual distances of the embeddings of the negative samples, thus performing a regularization that increases the quality of the extracted embeddings. MATERIALS AND METHODS: To this end, we employed the public H&E imaging dataset from the University Medical Center Mannheim (Germany), which contains tissue samples from low-grade and high-grade primary tumors of digitalized colorectal cancer tissue slides. These samples are divided into eight different textures (1. tumour epithelium, 2. simple stroma, 3. complex stroma, 4. immune cells, 5. debris and mucus, 6. mucosal glands, 7. adipose tissue, and 8. background). The dataset was divided randomly into train and test splits, and the training split was used to train a classifier to distinguish among the different textures with just 20 training images. The process was repeated 10 times for each loss function. Performance was compared both for cluster compactness and for classification accuracy in separating the aforementioned textures. RESULTS: Our results show that the proposed loss function outperforms the other methods by obtaining more compact clusters (Davies-Bouldin: 1.41 ± 0.08, Silhouette: 0.37 ± 0.02) and better classification capabilities (accuracy: 85.0 ± 0.6) over H&E microscopy images. We demonstrate that the proposed constellation loss can be successfully used in the medical domain in situations of data scarcity.
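The cluster-compactness comparison reported above (Davies-Bouldin and Silhouette over the extracted embeddings) can be reproduced with standard scikit-learn calls; the random embeddings below are placeholders for real network outputs.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score, silhouette_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(160, 64))   # placeholder: 8 textures x 20 images
labels = np.repeat(np.arange(8), 20)      # texture class of each embedding

# Davies-Bouldin: lower is better; Silhouette: higher is better.
print("Davies-Bouldin:", davies_bouldin_score(embeddings, labels))
print("Silhouette:    ", silhouette_score(embeddings, labels))
```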
ABSTRACT
PURPOSE: Data augmentation is a common technique to overcome the lack of large annotated databases, a usual situation when applying deep learning to medical imaging problems. Nevertheless, there is no consensus on which transformations to apply for a particular field. This work aims at identifying the effect of different transformations on polyp segmentation using deep learning. METHODS: A set of transformations and ranges has been selected, considering image-based (width and height shift, rotation, shear, zooming, horizontal and vertical flip, and elastic deformation), pixel-based (changes in brightness and contrast) and application-based (specular lights and blurry frames) transformations. A model has been trained under the same conditions without data augmentation (baseline) and for each of the transformations and ranges, using CVC-EndoSceneStill and Kvasir-SEG independently. Statistical analysis is performed to compare the baseline performance against the results of each range of each transformation on the same test set for each dataset. RESULTS: This basic method identifies the most adequate transformations for each dataset. For CVC-EndoSceneStill, changes in brightness and contrast significantly improve the model performance. In contrast, Kvasir-SEG benefits to a greater extent from the image-based transformations, especially rotation and shear. Augmentation with synthetic specular lights also improves performance. CONCLUSION: Despite being infrequently used, pixel-based transformations show great potential to improve polyp segmentation in CVC-EndoSceneStill. On the other hand, image-based transformations are more suitable for Kvasir-SEG. Application-based transformations behave similarly in both datasets. The polyp area, brightness, and contrast of each dataset influence these differences.
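For illustration, a possible augmentation pipeline covering the three transformation families studied above could be composed with the albumentations library as sketched below; the ranges are arbitrary examples, not the exact ranges evaluated in the study, and synthetic specular lights would need a custom transform not shown here.

```python
import albumentations as A

augment = A.Compose([
    # image-based: shifts, rotation, zoom, flips, elastic deformation
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=45, p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ElasticTransform(p=0.3),
    # pixel-based: brightness and contrast changes
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    # application-based: blurry frames (specular lights require a custom transform)
    A.GaussianBlur(p=0.2),
])

# For segmentation, image and polyp mask are transformed jointly:
# out = augment(image=image, mask=mask)
# image_aug, mask_aug = out["image"], out["mask"]
```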
Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Intestinal Polyps/surgery; Surgery, Computer-Assisted; Databases, Factual; Humans; Intestinal Polyps/diagnostic imaging
ABSTRACT
Colorectal cancer has a high incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold standard procedure for the diagnosis and removal of colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization, and segmentation. Through a systematic search, 35 works were retrieved. This systematic review provides an analysis of these methods, stating the advantages and disadvantages of the different categories used; describes seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. As for detection and localization tasks, the most used metric for reporting is the recall, while Intersection over Union is widely used in segmentation. One of the major concerns is the difficulty of a fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated, and publicly available database, which also includes the most convenient metrics for reporting results. Finally, it is also important to highlight that future efforts should focus on proving the clinical value of deep learning based methods by increasing the adenoma detection rate.
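As a small aside on the segmentation metric mentioned above, Intersection over Union for binary polyp masks can be computed as in this short sketch.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between a predicted and a ground-truth binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union
```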
Subjects
Colonic Polyps; Colorectal Neoplasms; Deep Learning; Colonic Polyps/diagnostic imaging; Colonoscopy; Colorectal Neoplasms/diagnosis; Early Detection of Cancer; Humans; Reproducibility of Results
ABSTRACT
Pulse detection during out-of-hospital cardiac arrest remains challenging for both novice and expert rescuers because current methods are inaccurate and time-consuming. There is still a need to develop automatic methods for pulse detection, where the most challenging scenario is the discrimination between pulse-generating rhythms (PR, pulse) and pulseless electrical activity (PEA, no pulse). Thoracic impedance (TI) acquired through defibrillation pads has proven useful for detecting pulse, as it shows small fluctuations with every heartbeat. In this study we analyse the use of deep learning techniques to detect pulse using only the TI signal. The proposed neural network, composed of convolutional and recurrent layers, outperformed state-of-the-art methods and achieved a balanced accuracy of 90% for segments as short as 3 s.
Subjects
Electric Impedance; Neural Networks, Computer; Out-of-Hospital Cardiac Arrest/diagnosis; Pulse; Cardiopulmonary Resuscitation; Humans
ABSTRACT
Early defibrillation by an automated external defibrillator (AED) is key to the survival of out-of-hospital cardiac arrest (OHCA) patients. ECG feature extraction and machine learning have been successfully used to detect ventricular fibrillation (VF) in AED shock decision algorithms. Recently, deep learning architectures based on 1D convolutional neural networks (CNN) have been proposed for this task. This study introduces a deep learning architecture based on 1D-CNN layers and a long short-term memory (LSTM) network for the detection of VF. Two datasets were used: one from public repositories of Holter recordings captured at the onset of the arrhythmia, and a second from OHCA patients obtained minutes after the onset of the arrest. Data were partitioned patient-wise into training (80%), used to design the classifiers, and test (20%), used to report the results. The proposed architecture was compared to 1D-CNN-only deep learners and to a classical approach based on VF-detection features and a support vector machine (SVM) classifier. The algorithms were evaluated in terms of balanced accuracy (BAC), the unweighted mean of the sensitivity (Se) and specificity (Sp). The BAC, Se, and Sp of the architecture for 4-s ECG segments were 99.3%, 99.7%, and 98.9% for the public data, and 98.0%, 99.2%, and 96.7% for the OHCA data. The proposed architecture outperformed all other classifiers by at least 0.3 BAC points in the public data and by 2.2 points in the OHCA data, and it met the 95% Sp and 90% Se requirements of the American Heart Association in both datasets for segment lengths as short as 3 s. This is, to the best of our knowledge, the most accurate VF detection algorithm to date, especially on OHCA data, and it would enable an accurate shock/no-shock diagnosis in a very short time.
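For reference, the evaluation protocol described above (Se for VF, Sp for non-shockable rhythms, and BAC as their unweighted mean) corresponds to the following computation; the example labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])  # 1 = VF, 0 = non-shockable rhythm
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1])  # classifier decisions (toy example)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # Se for VF
specificity = tn / (tn + fp)   # Sp for non-shockable rhythms
bac = (sensitivity + specificity) / 2
print(f"Se={sensitivity:.3f}  Sp={specificity:.3f}  BAC={bac:.3f}")
```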
Subjects
Deep Learning; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Ventricular Fibrillation/diagnosis; Algorithms; Databases, Factual/statistics & numerical data; Defibrillators/statistics & numerical data; Diagnosis, Computer-Assisted/statistics & numerical data; Electric Countershock/methods; Electric Countershock/statistics & numerical data; Electrocardiography/statistics & numerical data; Electrocardiography, Ambulatory/statistics & numerical data; Humans; Memory, Short-Term; Out-of-Hospital Cardiac Arrest/diagnosis; Out-of-Hospital Cardiac Arrest/therapy; Signal Processing, Computer-Assisted; Support Vector Machine
ABSTRACT
Biopsies for diagnosis can sometimes be replaced by non-invasive imaging techniques such as CT and MRI. Surgeons require accurate and efficient methods that allow proper segmentation of the organs in order to ensure the most reliable intervention planning. Automated liver segmentation is a difficult and open problem, where CT has been more widely explored than MRI. MRI liver segmentation is a challenge due to the presence of characteristic artifacts, such as partial volumes, noise, and low contrast. In this paper, we present a novel method for automatic liver segmentation in multichannel MRI. The proposed method consists of the minimization of a 3D active surface by means of the dual approach to the variational formulation of the underlying problem. This active surface evolves over a probability map based on a new compact descriptor comprising spatial and multi-sequence information, which is further modeled by means of a liver statistical model. The proposed 3D active surface approach naturally integrates volumetric regularization into the statistical model. The advantages of the compact visual descriptor, together with the proposed approach, result in a fast and accurate 3D segmentation method. The method was tested on 18 healthy liver studies, and the results were compared to a gold standard created by expert radiologists. Comparisons with other state-of-the-art approaches are provided by means of nine well-established quality metrics. The obtained results improve on these methodologies, achieving a Dice Similarity Coefficient of 98.59.
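The Dice Similarity Coefficient used above to compare the automatic segmentation with the expert gold standard can be computed on binary 3D volumes as in this short sketch.

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, gold: np.ndarray) -> float:
    """Dice Similarity Coefficient between a binary segmentation volume and a
    binary gold-standard volume of the same shape."""
    seg = seg.astype(bool)
    gold = gold.astype(bool)
    denom = seg.sum() + gold.sum()
    if denom == 0:
        return 1.0  # both volumes empty
    return 2.0 * np.logical_and(seg, gold).sum() / denom
```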
Subjects
Liver/diagnostic imaging; Magnetic Resonance Imaging/methods; Humans; Surface Properties
ABSTRACT
Nail polish has traditionally been assumed to absorb the light emitted by pulse oximeters and to interfere with the detection and measurement of oxygenated hemoglobin. In a systematic review of the literature, we aimed to assess the influence of nail polish on the measurement of oxygen saturation by pulse oximetry (SpO2). A search protocol for online databases (MEDLINE, Embase, Web of Science, Scopus, Cumulative Index to Nursing and Allied Health Literature, and IBECS [the Spanish health sciences index]) was established to find clinical trials or observational studies published between January 1999 and February 2014. Twelve nonrandomized clinical trials were found, ten of them in healthy volunteers. Of the remaining two studies, one was in critical patients undergoing mechanical ventilation and the other in patients with stable chronic obstructive pulmonary disease. One study recreated the low oxygen levels of high altitudes, while the others were done in normal atmospheric conditions. Differences were found depending on the pulse oximeter model and the type of nail polish. Nail polish was associated with a statistically significant decrease in SpO2 for at least one color in all but two studies. However, the differences were within the standard error (±2.0%) of the pulse oximeters used. The authors of the studies all concluded that although nail polish might change SpO2 readings significantly, the variations are not clinically significant.