Results 1 - 20 of 26
1.
J Microsc ; 293(1): 38-58, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38053244

ABSTRACT

Here, we present a comprehensive holography-based system designed for detecting microparticles through microscopic holographic projections of water samples. This system is designed for researchers who may be unfamiliar with holographic technology but are engaged in microparticle research, particularly in the field of water analysis. Additionally, our innovative system can be deployed for environmental monitoring as a component of an autonomous sailboat robot. Our system's primary application is for large-scale classification of diverse microplastics that are prevalent in water bodies worldwide. This paper provides a step-by-step guide for constructing our system and outlines its entire processing pipeline, including hologram acquisition for image reconstruction.
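For readers new to digital holography, the snippet below sketches the numerical refocusing step that such a pipeline relies on, using the standard angular spectrum method in NumPy; the wavelength, pixel pitch, and propagation distance are illustrative placeholders and not values from the paper.

```python
import numpy as np

def angular_spectrum_propagate(hologram, wavelength, pixel_pitch, distance):
    """Numerically propagate a recorded hologram by `distance` (angular spectrum method)."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    # Spatial frequency grids (cycles per meter).
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.where(arg > 0,
                      np.exp(1j * k * distance * np.sqrt(np.maximum(arg, 0.0))),
                      0.0)
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)

# Illustrative usage with a synthetic hologram (placeholder optical parameters).
holo = np.random.rand(512, 512)          # stand-in for a recorded intensity hologram
field = angular_spectrum_propagate(holo, wavelength=532e-9, pixel_pitch=1.85e-6, distance=5e-3)
amplitude, phase = np.abs(field), np.angle(field)
```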

2.
Sensors (Basel) ; 24(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38475092

ABSTRACT

COVID-19 analysis from medical imaging is an important task that has been intensively studied in recent years due to the spread of the COVID-19 pandemic. In fact, medical imaging has often been used as a complementary or main tool to recognize infected persons. On the other hand, medical imaging can provide more details about COVID-19 infection, including its severity and spread, which makes it possible to evaluate the infection and follow up on the patient's state. CT scans are the most informative tool for COVID-19 infection, where the evaluation of COVID-19 infection is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. The goal of the Per-COVID-19 challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data. In addition, participants had to cope with many challenges, including those related to COVID-19 infection complexity and cross-dataset scenarios. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. Details of the competition data, challenges, and evaluation metrics are presented. The best-performing approaches and their results are described and discussed.
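As an illustration of framing infection quantification as a regression task, the sketch below re-heads a standard CNN backbone (recent torchvision assumed) to output a single percentage and computes MAE/RMSE; the backbone choice, output scaling, and tensor shapes are assumptions, not the challenge baselines.

```python
import torch
import torch.nn as nn
from torchvision import models

class InfectionRegressor(nn.Module):
    """CNN backbone with a single regression output for the infection percentage (0-100)."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)      # any CNN backbone works here
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        # Sigmoid keeps the prediction in [0, 1]; scale to a percentage afterwards.
        return torch.sigmoid(self.backbone(x)) * 100.0

model = InfectionRegressor()
slices = torch.randn(4, 3, 224, 224)                       # stand-in batch of CT slices
targets = torch.tensor([[12.5], [0.0], [47.0], [80.0]])    # annotated infection percentages
preds = model(slices)
mae = torch.mean(torch.abs(preds - targets))
rmse = torch.sqrt(torch.mean((preds - targets) ** 2))
print(f"MAE={mae.item():.2f}  RMSE={rmse.item():.2f}")
```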


Subject(s)
COVID-19, Pandemics, Humans, Benchmarking, Radionuclide Imaging, X-Ray Computed Tomography
3.
Sensors (Basel) ; 23(9)2023 May 08.
Article in English | MEDLINE | ID: mdl-37177764

ABSTRACT

Developing computer-aided approaches for cancer diagnosis and grading is currently in increasing demand: such tools could overcome intra- and inter-observer inconsistency, speed up the screening process, increase early diagnosis, and improve the accuracy and consistency of treatment-planning processes. The third most common cancer worldwide and the second most common in women is colorectal cancer (CRC). Grading CRC is a key task in planning appropriate treatments and estimating the response to them. Unfortunately, it has not yet been fully demonstrated how the most advanced models and methodologies of machine learning can impact this crucial task. This paper systematically investigates the use of advanced deep models (convolutional neural networks and transformer architectures) to improve colon carcinoma detection and grading from histological images. To the best of our knowledge, this is the first attempt at using transformer architectures and ensemble strategies for exploiting deep learning paradigms for automatic colon cancer diagnosis. Results on the largest publicly available dataset demonstrated a substantial improvement with respect to the leading state-of-the-art methods. In particular, by exploiting a transformer architecture, it was possible to observe a 3% increase in accuracy in the detection task (two-class problem) and up to a 4% improvement in the grading task (three-class problem) by also integrating an ensemble strategy.


Subject(s)
Carcinoma, Colonic Neoplasms, Deep Learning, Humans, Female, Early Detection of Cancer, Colonic Neoplasms/diagnosis
4.
Sensors (Basel) ; 23(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36772733

ABSTRACT

Alzheimer's disease (AD) is the most common form of dementia. Computer-aided diagnosis (CAD) can help in the early detection of associated cognitive impairment. The aim of this work is to improve the automatic detection of dementia in MRI brain data. For this purpose, we used an established pipeline that includes the registration, slicing, and classification steps. The contribution of this research was to investigate for the first time, to our knowledge, three current and promising deep convolutional models (ResNet, DenseNet, and EfficientNet) and two transformer-based architectures (MAE and DeiT) for mapping input images to clinical diagnosis. To allow a fair comparison, the experiments were performed on two publicly available datasets (ADNI and OASIS) using multiple benchmarks obtained by changing the number of slices per subject extracted from the available 3D voxels. The experiments showed that very deep ResNet and DenseNet models performed better than the shallow ResNet and VGG versions tested in the literature. It was also found that transformer architectures, and DeiT in particular, produced the best classification results and were more robust to the noise added by increasing the number of slices. A significant improvement in accuracy (up to 7%) was achieved compared to the leading state-of-the-art approaches, paving the way for the use of CAD approaches in real-world applications.
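The registration-slicing-classification pipeline described above can be pictured with the toy sketch below, which extracts axial slices from a 3D volume, classifies each with a DeiT backbone from the timm library, and majority-votes at the subject level; the model name, slicing axis, slice count, and preprocessing are assumptions, not the paper's exact setup.

```python
import numpy as np
import timm
import torch

# Pretrained-style DeiT backbone re-headed for a binary (demented / non-demented) decision.
model = timm.create_model("deit_base_patch16_224", pretrained=False, num_classes=2)
model.eval()

def classify_subject(volume: np.ndarray, n_slices: int = 16) -> int:
    """Pick evenly spaced slices from a registered 3D volume, classify each, majority-vote."""
    idx = np.linspace(0, volume.shape[0] - 1, n_slices).astype(int)
    votes = []
    for i in idx:
        sl = volume[i]
        sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)          # min-max normalise
        x = torch.tensor(sl, dtype=torch.float32).unsqueeze(0)       # (1, H, W)
        x = x.repeat(3, 1, 1).unsqueeze(0)                           # fake RGB, add batch dim
        x = torch.nn.functional.interpolate(x, size=(224, 224))      # DeiT input size
        with torch.no_grad():
            votes.append(model(x).argmax(dim=1).item())
    return int(np.round(np.mean(votes)))                             # majority vote

volume = np.random.rand(120, 160, 160)     # stand-in for a registered MRI volume
print(classify_subject(volume))
```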


Subject(s)
Alzheimer Disease, Deep Learning, Humans, Alzheimer Disease/diagnostic imaging, Neural Networks (Computer), Magnetic Resonance Imaging/methods, Brain/diagnostic imaging
5.
Environ Res ; 204(Pt D): 112348, 2022 03.
Article in English | MEDLINE | ID: mdl-34767822

ABSTRACT

Since the start of the COVID-19 pandemic, many studies have investigated the correlation between climate variables such as air quality, humidity, and temperature and the lethality of COVID-19 around the world. In this work we investigate the use of climate variables as additional features to train a data-driven multivariate forecast model to predict the short-term expected number of COVID-19 deaths in Brazilian states and major cities. The main idea is that adding these climate features as inputs to the training of data-driven models improves the predictive performance when compared to equivalent single-input models. We use a stacked LSTM as the network architecture for both the multivariate and univariate models. We compare both approaches by training forecast models for the COVID-19 deaths time series of the city of São Paulo. In addition, we present a preliminary analysis based on K-means clustering of AQI curves. This clustering paves the way for transfer learning: once a new locality is added to the task, forecasting can start from a model associated with the cluster whose AQI curves are most similar. The experiments show that the best multivariate model is more skilled than the best standard data-driven univariate model that we could find, using as evaluation metrics the average fitting error, the average forecast error, and the profile of the accumulated deaths for the forecast. These results suggest that adding further informative features as input to a multivariate approach could improve the quality of the prediction models.
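A minimal sketch of the univariate-versus-multivariate comparison is given below: the same stacked LSTM forecaster is fed either the deaths series alone or the deaths series concatenated with climate features. Hidden size, window length, and feature count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class StackedLSTMForecaster(nn.Module):
    """Two stacked LSTM layers followed by a linear head that predicts the next value."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # forecast from the last time step

# Univariate baseline: deaths only.  Multivariate: deaths plus climate features
# (e.g., AQI, humidity, temperature) stacked along the feature dimension.
window = 14
univariate = StackedLSTMForecaster(n_features=1)
multivariate = StackedLSTMForecaster(n_features=4)

deaths = torch.randn(8, window, 1)                 # stand-in mini-batch of scaled series
climate = torch.randn(8, window, 3)
print(univariate(deaths).shape,
      multivariate(torch.cat([deaths, climate], dim=-1)).shape)
```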


Subject(s)
Air Pollution, COVID-19, Air Pollution/analysis, Brazil, Humans, Humidity, Pandemics, SARS-CoV-2, Temperature
6.
Sensors (Basel) ; 22(3)2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35161612

ABSTRACT

Neurodevelopmental disorders (NDD) are impairments of the growth and development of the brain and/or central nervous system. In the light of clinical findings on the early diagnosis of NDD, and prompted by recent advances in hardware and software technologies, several researchers have tried to introduce automatic systems to analyse infants' movements, even in cribs. Traditional technologies for automatic baby motion analysis leverage contact sensors. Alternatively, remotely acquired video data (e.g., RGB or depth) can be used, with or without active/passive markers positioned on the body. Markerless approaches are easier to set up and maintain (without any human intervention) and they work well on non-collaborative users, making them the most suitable technologies for clinical applications involving children. On the other hand, they require complex computational strategies for extracting knowledge from data and thus strongly depend on advances in computer vision and machine learning, which are among the most rapidly expanding areas of research. As a consequence, markerless video-based analysis of children's movements for NDD has also been expanding rapidly but, to the best of our knowledge, there is not yet a survey paper providing a broad overview of how recent scientific developments have impacted it. This paper tries to fill this gap, also listing specifically designed data acquisition tools and publicly available datasets. Besides, it gives a glimpse of the most promising techniques in computer vision, machine learning and pattern recognition that could be profitably exploited for children's motion analysis in videos.


Subject(s)
Machine Learning, Nervous System Diseases, Child, Humans, Motion, Movement, Software
7.
Sensors (Basel) ; 21(17)2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34502769

ABSTRACT

Since the appearance of the COVID-19 pandemic (at the end of 2019, Wuhan, China), the recognition of COVID-19 with medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained from the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (Cap) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan into normal, COVID-19, or Cap classes. Our approach achieved a good result on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, Cap, and normal, respectively. On the other hand, our approach placed fifth on the three test datasets of the SPGC COVID-19 challenge, where it achieved the best result for COVID-19 sensitivity. In addition, it placed second on two of the three testing sets.
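The two-stage idea (slice-level CNN outputs fed to a scan-level gradient-boosted classifier) can be sketched as below. The aggregation statistics, feature layout, and XGBoost hyperparameters are assumptions chosen only to illustrate the pattern.

```python
import numpy as np
from xgboost import XGBClassifier

# Stage 1 (assumed already done): a CNN assigns each CT slice a probability vector over
# {normal, COVID-19, Cap}.  Stage 2: aggregate slice-level outputs into one scan-level
# feature vector and classify the whole scan with XGBoost.
def scan_features(slice_probs: np.ndarray) -> np.ndarray:
    """slice_probs: (n_slices, 3) -> fixed-length descriptor for the whole CT scan."""
    return np.concatenate([slice_probs.mean(0), slice_probs.max(0), slice_probs.std(0)])

rng = np.random.default_rng(0)
X = np.stack([scan_features(rng.dirichlet(np.ones(3), size=rng.integers(40, 80)))
              for _ in range(60)])                       # 60 synthetic scans
y = rng.integers(0, 3, size=60)                          # synthetic scan-level labels

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))
```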


Subject(s)
COVID-19, Deep Learning, Humans, Pandemics, SARS-CoV-2, X-Ray Computed Tomography
8.
Sensors (Basel) ; 21(5)2021 Mar 03.
Article in English | MEDLINE | ID: mdl-33802428

ABSTRACT

The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts that have been made in this field since the appearance of COVID-19 (2019), the field still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, all the works that have been carried out in the field are separate; there are no unified data, classes, and evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class COVID-19 dataset and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach which outperforms the individual deep learning architectures and shows promising results on both databases. In particular, the proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, with accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively, and overall recognition accuracies of 75.23% and 81.0%. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons.
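One common way to build an ensemble of CNNs is to average the softmax outputs of independently trained backbones, as in the minimal sketch below; the specific backbones, class count, and averaging rule are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Three backbones assumed trained on the same X-ray classes; at test time their
# softmax outputs are averaged to form the ensemble prediction.
n_classes = 3
resnet50 = models.resnet50(weights=None)
resnet50.fc = torch.nn.Linear(resnet50.fc.in_features, n_classes)
densenet = models.densenet161(weights=None)
densenet.classifier = torch.nn.Linear(densenet.classifier.in_features, n_classes)
resnet18 = models.resnet18(weights=None)
resnet18.fc = torch.nn.Linear(resnet18.fc.in_features, n_classes)
backbones = [resnet50, densenet, resnet18]

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        probs = [F.softmax(m.eval()(x), dim=1) for m in backbones]
    return torch.stack(probs).mean(0).argmax(dim=1)

x = torch.randn(2, 3, 224, 224)        # stand-in X-ray batch
print(ensemble_predict(x))
```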


Subject(s)
COVID-19/diagnostic imaging, Deep Learning, Neural Networks (Computer), Thoracic Radiography, Algorithms, Humans, X-Rays
9.
Sensors (Basel) ; 20(13)2020 Jul 03.
Article in English | MEDLINE | ID: mdl-32635375

ABSTRACT

The automatic detection of eye positions, their temporal consistency, and their mapping into a line of sight in the real world (to find where a person is looking) is reported in the scientific literature as gaze tracking. This has become a very hot topic in the field of computer vision during the last decades, with a surprising and continuously growing number of application fields. A long journey has been made from the first pioneering works, and the continuous search for more accurate solutions has been further boosted in the last decade, when deep neural networks revolutionized the whole machine learning area, gaze tracking included. In this arena, it is increasingly useful to have survey/review articles that collect the most relevant works, lay out the pros and cons of existing techniques, and introduce a precise taxonomy. Such manuscripts allow researchers and technicians to choose the best way to move towards their application or scientific goals. The literature contains holistic and specifically technological survey documents (even if not up to date), but, unfortunately, there is no overview discussing how the great advancements in computer vision have impacted gaze tracking. Thus, this work represents an attempt to fill this gap, also introducing a wider point of view that leads to a new taxonomy (extending the consolidated ones) by considering gaze tracking as a broader task that aims at estimating the gaze target from different perspectives: from the eye of the beholder (first-person view), from an external camera framing the beholder, from a third-person view looking at the scene in which the beholder is placed, and from an external view independent of the beholder.


Subject(s)
Eye Movements, Eye-Tracking Technology/instrumentation, Eye, Ocular Fixation, Computers, Humans, Neural Networks (Computer)
10.
Sensors (Basel) ; 20(21)2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171757

ABSTRACT

Diatoms are among the dominant phytoplankters in marine and freshwater habitats, and important biomarkers of water quality, making their identification and classification one of the current challenges for environmental monitoring. To date, the taxonomy of the species populating a water column is still determined by marine biologists on the basis of their own experience. On the other hand, deep learning is recognized as the elective technique for solving image classification problems. However, a large amount of training data is usually needed, thus requiring the synthetic enlargement of the dataset through data augmentation. In the case of microalgae, the large variety of species that populate marine environments makes it arduous to perform an exhaustive training that considers all the possible classes. However, commercial test slides containing one diatom element per class fixed in between two glasses are available on the market. These are usually prepared by expert diatomists for taxonomy purposes, thus constituting libraries of the populations that can be found in oceans. Here we show that such test slides are very useful for training accurate deep Convolutional Neural Networks (CNNs). We demonstrate the successful classification of diatoms based on a proper ensemble of CNNs and a fully augmented dataset, i.e., one created starting from a single image per class, available from a commercial glass slide containing 50 fixed species in a dry setting. This approach avoids the time-consuming steps of water sampling and labeling by skilled marine biologists. To accomplish this goal, we exploit the holographic imaging modality, which gives access to quantitative phase-contrast maps and allows flexible a posteriori refocusing thanks to its intrinsic 3D imaging capability. The network model is then validated by using holographic recordings of live diatoms imaged in water samples, i.e., in their natural wet environmental conditions.
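The "fully augmented dataset" idea, i.e., turning one exemplar image per class into many training samples, can be sketched with standard torchvision transforms as below; the specific transform list and image sizes are assumptions, not the augmentations used in the paper.

```python
import torch
from torchvision import transforms
from PIL import Image

# Heavy augmentation turns one exemplar per diatom class into many training samples.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=180),        # diatoms have no preferred orientation
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
])

def augmented_batch(exemplar: Image.Image, n: int = 64) -> torch.Tensor:
    """Create n augmented views of a single class exemplar."""
    return torch.stack([augment(exemplar) for _ in range(n)])

exemplar = Image.new("L", (512, 512))              # stand-in for one slide image per class
batch = augmented_batch(exemplar.convert("RGB"))
print(batch.shape)                                 # (64, 3, 224, 224)
```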


Subject(s)
Diatoms/classification, Holography, Machine Learning, Microscopy, Neural Networks (Computer)
11.
Sensors (Basel) ; 18(11)2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453518

ABSTRACT

In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with the evaluations performed by psychologists on the same group of ASD children makes evident how the quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.


Subject(s)
Face/physiology, Facial Expression, Neural Networks (Computer), Adolescent, Algorithms, Autism Spectrum Disorder/diagnosis, Child, Emotions/physiology, Female, Humans, Male
12.
Sensors (Basel) ; 14(5): 8363-79, 2014 May 12.
Article in English | MEDLINE | ID: mdl-24824369

ABSTRACT

This paper investigates the possibility of accurately detecting and tracking human gaze by using an unconstrained and noninvasive approach based on head pose information extracted by an RGB-D device. The main advantages of the proposed solution are that it can operate in a totally unconstrained environment, it does not require any initial calibration and it can work in real time. These features make it suitable for assisting humans in everyday life (e.g., remote device control) or in specific actions (e.g., rehabilitation), and in general in all those applications where it is not possible to ask for user cooperation (e.g., when users with neurological impairments are involved). To evaluate gaze estimation accuracy, the proposed approach has been extensively tested and the results are compared with the leading methods in the state of the art, which, in general, make use of strong constraints on people's movements, invasive/additional hardware and supervised pattern recognition modules. Experimental tests demonstrated that, in most cases, the errors in gaze estimation are comparable to those of state-of-the-art methods, although the proposed approach works without additional constraints, calibration or supervised learning.


Subject(s)
Eye Movements/physiology, Ocular Fixation/physiology, Head Movements/physiology, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Ambulatory Monitoring/methods, Self-Help Devices, Colorimetry/instrumentation, Colorimetry/methods, Equipment Design, Equipment Failure Analysis, Feasibility Studies, Humans, Computer-Assisted Image Interpretation/instrumentation, Three-Dimensional Imaging/instrumentation, Ambulatory Monitoring/instrumentation, Automated Pattern Recognition/methods, Posture/physiology, Reproducibility of Results, Sensitivity and Specificity
13.
Sensors (Basel) ; 14(9): 17786-806, 2014 Sep 24.
Article in English | MEDLINE | ID: mdl-25254304

ABSTRACT

In this paper, an artificial olfactory system (Electronic Nose) that mimics the biological olfactory system is introduced. The device consists of a Large-Scale Chemical Sensor Array (16,384 sensors, made of 24 different kinds of conducting polymer materials) that supplies data to software modules, which perform advanced data processing. In particular, the paper concentrates on the software components, which consist, first, of a crucial step that normalizes the heterogeneous sensor data and reduces their inherent noise. The cleaned data are then supplied as input to a data reduction procedure that extracts the most informative and discriminant directions in order to obtain an efficient representation in a lower-dimensional space, where it is easier to find a robust mapping between the observed outputs and the characteristics of the odors presented to the device. Experimental qualitative proofs of the validity of the procedure are given by analyzing data acquired for two different pure analytes and their binary mixtures. Moreover, a classification task is performed in order to explore the possibility of automatically recognizing pure compounds and predicting binary mixture concentrations.
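A normalize-reduce-classify chain of this kind can be prototyped in a few lines of scikit-learn, as sketched below; PCA and logistic regression are generic stand-ins for the paper's specific data-reduction and classification steps, and the data and labels are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Normalise the heterogeneous sensor responses, project them onto a few informative
# directions, and classify the compound in the reduced space.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16384))       # stand-in responses of a 16,384-sensor array
y = rng.integers(0, 2, size=200)        # two pure analytes (synthetic labels)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.score(X, y))
```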


Subject(s)
Biosensing Techniques, Odorants/analysis, Olfactory Bulb, Humans, Polymers/analysis, Polymers/classification
14.
Comput Biol Med ; 176: 108590, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38763066

ABSTRACT

Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complicated diseases increase and the number of cases rises, the role of machine-based imaging analysis has become indispensable. It serves as both a tool and an assistant to medical experts, providing valuable insights and guidance. A particularly challenging task in this area is lesion segmentation, which is difficult even for experienced radiologists. The complexity of this task highlights the urgent need for robust machine learning approaches to support medical staff. In response, we present our novel solution: the D-TrAttUnet architecture. This framework is based on the observation that different diseases often target specific organs. Our architecture includes an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The dual-decoder configuration uses two identical decoders, each with attention gates. This allows the model to simultaneously segment lesions and organs and to integrate their segmentation losses. To validate our approach, we performed evaluations on the Covid-19 and Bone Metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder on the segmentation of glands and nuclei. The results confirmed the superiority of our approach, especially in Covid-19 infections and the segmentation of bone metastases. In addition, the hybrid encoder showed exceptional performance in the segmentation of glands and nuclei, solidifying its role in modern medical image analysis.
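The dual-decoder idea, in miniature, is a shared encoder feeding two decoders whose losses are added, as in the toy sketch below; this deliberately omits the Transformer-CNN encoder and attention gates of D-TrAttUnet and only illustrates the loss-integration principle.

```python
import torch
import torch.nn as nn

class TinyDualDecoderNet(nn.Module):
    """Shared encoder, one decoder head for the lesion mask, one for the organ mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.lesion_decoder = nn.Conv2d(16, 1, 1)
        self.organ_decoder = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        f = self.encoder(x)
        return self.lesion_decoder(f), self.organ_decoder(f)

net = TinyDualDecoderNet()
bce = nn.BCEWithLogitsLoss()
ct = torch.randn(2, 1, 128, 128)                                 # stand-in CT slices
lesion_gt = torch.randint(0, 2, (2, 1, 128, 128)).float()
organ_gt = torch.randint(0, 2, (2, 1, 128, 128)).float()
lesion_pred, organ_pred = net(ct)
loss = bce(lesion_pred, lesion_gt) + bce(organ_pred, organ_gt)   # integrated objective
loss.backward()
```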


Subject(s)
Neural Networks (Computer), Humans, COVID-19/diagnostic imaging, SARS-CoV-2, Machine Learning, Computer-Assisted Image Processing/methods, Computer-Assisted Image Interpretation/methods
15.
Med Image Anal ; 86: 102797, 2023 05.
Article in English | MEDLINE | ID: mdl-36966605

ABSTRACT

Since the emergence of the Covid-19 pandemic in late 2019, medical imaging has been widely used to analyze this disease. Indeed, CT scans of the lungs can help diagnose, detect, and quantify Covid-19 infection. In this paper, we address the segmentation of Covid-19 infection from CT scans. To improve the performance of the Att-Unet architecture and maximize the use of the Attention Gate, we propose the PAtt-Unet and DAtt-Unet architectures. PAtt-Unet aims to exploit input pyramids to preserve spatial awareness in all of the encoder layers. On the other hand, DAtt-Unet is designed to guide the segmentation of Covid-19 infection inside the lung lobes. We also propose to combine these two architectures into a single one, which we refer to as PDAtt-Unet. To overcome the blurry segmentation of the boundary pixels of Covid-19 infection, we propose a hybrid loss function. The proposed architectures were tested on four datasets with two evaluation scenarios (intra- and cross-dataset). Experimental results showed that both PAtt-Unet and DAtt-Unet improve the performance of Att-Unet in segmenting Covid-19 infections. Moreover, the combined architecture PDAtt-Unet led to further improvement. To compare with other methods, three baseline segmentation architectures (Unet, Unet++, and Att-Unet) and three state-of-the-art architectures (InfNet, SCOATNet, and nCoVSegNet) were tested. The comparison showed the superiority of the proposed PDAtt-Unet trained with the proposed hybrid loss (PDEAtt-Unet) over all other methods. Moreover, PDEAtt-Unet is able to overcome various challenges in segmenting Covid-19 infections across the four datasets and two evaluation scenarios.


Subject(s)
COVID-19, Pandemics, Humans, X-Ray Computed Tomography, Computer-Assisted Image Processing
16.
Article in English | MEDLINE | ID: mdl-36981646

ABSTRACT

The epidemiology of COVID-19 presented major shifts during the pandemic period. Factors such as the most common symptoms and severity of infection, the circulation of different variants, the preparedness of health services, and control efforts based on pharmaceutical and non-pharmaceutical interventions played important roles in the disease incidence. This constant evolution requires the continuous mapping and assessment of epidemiological features based on time-series forecasting. Nonetheless, it is necessary to identify the events, patterns, and actions that potentially affected daily COVID-19 cases. In this work, we analyzed several databases, including information on social mobility, epidemiological reports, and mass population testing, to identify patterns of reported cases and events that may indicate changes in COVID-19 behavior in the city of Araraquara, Brazil. In our analysis, we used a mathematical approach based on the fast Fourier transform (FFT) to map possible events, and machine learning approaches such as Seasonal Auto-regressive Integrated Moving Average (ARIMA) and neural networks (NNs) for data interpretation and temporal prospecting. Our results showed a root-mean-square error (RMSE) of about 5 (more precisely, a 4.55 error over 71 cases for 20 March 2021 and a 5.57 error over 106 cases for 3 June 2021). These results demonstrate that the FFT is a useful tool for supporting the development of the best prevention and control measures for COVID-19.
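To make the FFT step concrete, the sketch below detrends a synthetic daily-case series, takes its spectrum, and reads off the dominant period (here the weekly reporting cycle), together with the RMSE metric used for forecast evaluation; the series and detrending choice are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np

# FFT of the detrended daily-case series highlights dominant periodic components;
# peaks appearing or vanishing over time can flag changes in the epidemic's behaviour.
rng = np.random.default_rng(42)
days = np.arange(180)
cases = 100 + 0.5 * days + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 5, days.size)

detrended = cases - np.polyval(np.polyfit(days, cases, 1), days)   # remove linear trend
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(days.size, d=1.0)                          # cycles per day
dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]         # skip the DC component
print(f"dominant period = {dominant_period:.1f} days")

# Forecast-quality metric reported in the paper: root-mean-square error.
def rmse(pred, true):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2)))
```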


Subject(s)
COVID-19, Humans, COVID-19/epidemiology, Statistical Models, Brazil/epidemiology, Neural Networks (Computer), Pandemics, Forecasting
17.
J Imaging ; 7(3)2021 Mar 09.
Article in English | MEDLINE | ID: mdl-34460707

ABSTRACT

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between different cancer grades. The development of Whole Slide Images (WSIs) has provided the required data for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination. In addition, two classifiers (SVM and NN) are used to classify the texture features into distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.

18.
J Imaging ; 7(9)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34564115

ABSTRACT

COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic. In fact, many methods have been used to recognize COVID-19 infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scans, and Computed Tomography scans (CT scans). In addition to the recognition of COVID-19 infection, CT scans can provide more important information about the evolution of this disease and its severity. With the extensive number of COVID-19 infections, estimating the infection percentage can help intensive care units free up resuscitation beds for critical cases and follow other protocols for less severe cases. In this paper, we introduce a COVID-19 percentage estimation dataset from CT scans, where the labeling process was accomplished by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResneXt-50, Densenet-161, and Inception-v3. For the three CNN architectures, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained on X-ray data). The evaluated approaches achieved promising results on the estimation of COVID-19 infection. Inception-v3 using the Dynamic Huber loss function and models pretrained on X-ray data achieved the best slice-level results: 0.9365, 5.10, and 9.25 for the Pearson Correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. At the subject level, the same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively. These results prove that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient's state.
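The sketch below shows a plain Huber loss with an adjustable delta plus the three reported metrics (PC, MAE, RMSE); how the "Dynamic" variant schedules delta during training is not specified in the abstract and is not reproduced here.

```python
import numpy as np
import torch
import torch.nn as nn

# Plain Huber loss with an adjustable delta; the paper's "Dynamic Huber" presumably
# changes delta during training, a detail not reproduced in this sketch.
def huber(pred, target, delta=10.0):
    return nn.functional.huber_loss(pred, target, delta=delta)

# Slice-level evaluation metrics reported in the paper.
def metrics(pred, true):
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    pc = np.corrcoef(pred, true)[0, 1]                       # Pearson correlation
    mae = np.abs(pred - true).mean()
    rmse = np.sqrt(((pred - true) ** 2).mean())
    return pc, mae, rmse

pred = torch.tensor([12.0, 30.0, 55.0, 80.0])                # toy percentage predictions
true = torch.tensor([10.0, 35.0, 50.0, 85.0])
print(huber(pred, true).item(), metrics(pred.numpy(), true.numpy()))
```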

19.
Front Psychol ; 12: 678052, 2021.
Article in English | MEDLINE | ID: mdl-34366997

ABSTRACT

Several studies have found a delay in the development of facial emotion recognition and expression in children with an autism spectrum condition (ASC). Several interventions have been designed to help children fill this gap. Most of them adopt technological devices (i.e., robots, computers, and avatars) as social mediators and reported evidence of improvement. Few interventions have aimed at promoting both emotion recognition and expression abilities and, among these, most have focused on emotion recognition. Moreover, a crucial point is the generalization of the abilities acquired during treatment to naturalistic interactions. This study aimed to evaluate the effectiveness of two technology-based interventions focused on the expression of basic emotions, comparing a robot-based type of training with a "hybrid" computer-based one. Furthermore, we explored the engagement of the hybrid technological device introduced in the study as an intermediate step to facilitate the generalization of the acquired competencies in naturalistic settings. A two-group pre-post-test design was applied to a sample of 12 children (M = 9.33; ds = 2.19) with autism. The children were included in one of the two groups: group 1 received a robot-based type of training (n = 6); and group 2 received a computer-based type of training (n = 6). Pre- and post-intervention evaluations (i.e., time) of the recognition and production of four basic emotions (happiness, sadness, fear, and anger) were performed. Non-parametric ANOVAs found significant time effects between pre- and post-interventions on the ability to recognize sadness [t (1) = 7.35, p = 0.006; pre: M (ds) = 4.58 (0.51); post: M (ds) = 5], and to express happiness [t (1) = 5.72, p = 0.016; pre: M (ds) = 3.25 (1.81); post: M (ds) = 4.25 (1.76)], and sadness [t (1) = 10.89, p < 0; pre: M (ds) = 1.5 (1.32); post: M (ds) = 3.42 (1.78)]. The group*time interactions were significant for fear [t (1) = 1.019, p = 0.03] and anger expression [t (1) = 1.039, p = 0.03]. However, Mann-Whitney comparisons did not show significant differences between robot-based and computer-based training. Finally, no difference was found in the levels of engagement when comparing the two groups in terms of the number of voice prompts given during the interventions. Although the results are preliminary and should be interpreted with caution, this study suggests that the two types of technology-based training, one mediated via a humanoid robot and the other via a pre-settled video of a peer, perform similarly in promoting facial recognition and expression of basic emotions in children with an ASC. The findings represent a first step towards generalizing the abilities acquired in a laboratory-trained situation to naturalistic interactions.

20.
Article in English | MEDLINE | ID: mdl-32349259

ABSTRACT

Epidemiological figures of the SARS-CoV-2 epidemic in Italy are higher than those observed in China. Our objective was to model the SARS-CoV-2 outbreak progression in Italian regions vs. Lombardy to assess the epidemic's progression. Our setting was Italy, and especially Lombardy, which is experiencing a heavy burden of SARS-CoV-2 infections. The peak of new daily cases of the epidemic was reached on the 29th, while it was delayed in Central and Southern Italian regions compared to Northern ones. In our models, we estimated the basic reproduction number (R0), which represents the average number of people that can be infected by a person who has already acquired the infection, both by fitting the exponential growth rate of the infection across a 1-month period and by using day-by-day assessments based on single observations. We used the susceptible-exposed-infected-removed (SEIR) compartment model to predict the spreading of the pandemic in Italy. The two methods provide values in agreement, although the first method, based on the exponential fit, should provide a better estimation, as it is computed on the entire time series. Taking into account the growth rate of the infection across a 1-month period, each infected person in Lombardy has infected, on average, 4 other people (3.6 based on data of April 23rd), compared to a value of R0 = 2.68 reported for the Chinese city of Wuhan. According to our model, Piedmont, Veneto, Emilia Romagna, Tuscany and Marche will reach an R0 value of up to 3.5. The R0 was 3.11 for Lazio and 3.14 for the Campania region, the latter showing the highest value among the Southern Italian regions, followed by Apulia (3.11), Sicily (2.99), Abruzzo (3.0), Calabria (2.84), Basilicata (2.66), and Molise (2.6). The R0 value has decreased in Lombardy and the Northern regions, while it has increased in Central and Southern regions. The expected peak of the SEIR model is set at the end of March at the national level, with Southern Italian regions reaching the peak in the first days of April. Regarding the strengths and limitations of this study, our model is based on assumptions that might not exactly correspond to the evolution of the epidemic. What we know about the SARS-CoV-2 epidemic is based on Chinese data that seem to be different from those from Italy; Lombardy is experiencing an evolution of the epidemic that seems unique inside Italy and Europe, probably due to demographic and environmental factors.
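A compact sketch of the two ingredients of such an analysis is given below: numerical integration of an SEIR model and estimation of R0 from the early exponential growth rate via the standard relation R0 = (1 + r/sigma)(1 + r/gamma) for exponentially distributed latent and infectious periods. All parameter values and the synthetic case series are illustrative assumptions, not the study's estimates.

```python
import numpy as np
from scipy.integrate import odeint

# SEIR compartment model; parameters below are illustrative, not the paper's estimates.
def seir(y, t, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N, sigma, gamma = 10_000_000, 1 / 5.2, 1 / 7.0       # incubation ~5.2 d, infectious ~7 d
R0 = 3.6
beta = R0 * gamma
t = np.linspace(0, 180, 181)
sol = odeint(seir, (N - 100, 0, 100, 0), t, args=(beta, sigma, gamma, N))
peak_day = int(t[np.argmax(sol[:, 2])])              # day with the most infectious people
print(f"epidemic peak around day {peak_day}")

# R0 estimated from the early exponential growth rate r of the case series:
# fit log(cases) ~ r*t, then R0 = (1 + r/sigma) * (1 + r/gamma) for an SEIR model.
days = np.arange(30)
cases = 100 * np.exp(0.18 * days)                    # synthetic early-growth series
r = np.polyfit(days, np.log(cases), 1)[0]
print("estimated R0:", (1 + r / sigma) * (1 + r / gamma))
```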


Subject(s)
Coronavirus Infections/epidemiology, Coronavirus, Disease Outbreaks, Pandemics, Viral Pneumonia/epidemiology, Basic Reproduction Number, Betacoronavirus, COVID-19, China/epidemiology, Coronavirus Infections/transmission, Humans, Italy/epidemiology, Viral Pneumonia/transmission, SARS-CoV-2, Severe Acute Respiratory Syndrome/epidemiology, Sicily/epidemiology