Results 1 - 8 of 8
1.
Sensors (Basel); 23(9), 2023 May 08.
Article in English | MEDLINE | ID: mdl-37177764

ABSTRACT

Developing computer-aided approaches for cancer diagnosis and grading is in increasing demand: such approaches could overcome intra- and inter-observer inconsistency, speed up the screening process, increase early diagnosis rates, and improve the accuracy and consistency of treatment planning. Colorectal cancer (CRC) is the third most common cancer worldwide and the second most common in women. Grading CRC is a key task in planning appropriate treatments and estimating the response to them. Unfortunately, it has not yet been fully demonstrated how the most advanced machine learning models and methodologies can impact this crucial task. This paper systematically investigates the use of advanced deep models (convolutional neural networks and transformer architectures) to improve colon carcinoma detection and grading from histological images. To the best of our knowledge, this is the first attempt to use transformer architectures and ensemble strategies for automatic colon cancer diagnosis. Results on the largest publicly available dataset showed a substantial improvement over the leading state-of-the-art methods. In particular, a transformer architecture yielded a 3% increase in accuracy on the detection task (two-class problem), and integrating an ensemble strategy brought up to a 4% improvement on the grading task (three-class problem).
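The abstract does not detail the ensemble strategy; as an illustrative sketch only (the model names and logit values below are hypothetical), a common late-fusion scheme averages the softmax class probabilities of a CNN and a transformer before taking the argmax:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    # Late fusion: average per-model class probabilities,
    # then take the argmax as the ensemble decision.
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    return probs.argmax(axis=-1)

# Hypothetical logits from a CNN and a transformer for 3 images,
# with 3 grading classes each.
cnn_logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.8, 0.4], [0.3, 0.2, 2.5]])
vit_logits = np.array([[1.5, 0.9, 0.2], [0.1, 2.2, 0.3], [0.4, 0.1, 2.1]])
print(ensemble_predict([cnn_logits, vit_logits]))  # one label per image
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which is one reason soft-voting ensembles often edge out their best single member.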


Subject(s)
Carcinoma , Colonic Neoplasms , Deep Learning , Humans , Female , Early Detection of Cancer , Colonic Neoplasms/diagnosis
2.
Sensors (Basel); 23(3), 2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36772733

ABSTRACT

Alzheimer's disease (AD) is the most common form of dementia. Computer-aided diagnosis (CAD) can help in the early detection of associated cognitive impairment. The aim of this work is to improve the automatic detection of dementia in MRI brain data. For this purpose, we used an established pipeline that includes the registration, slicing, and classification steps. The contribution of this research was to investigate, for the first time to our knowledge, three current and promising deep convolutional models (ResNet, DenseNet, and EfficientNet) and two transformer-based architectures (MAE and DeiT) for mapping input images to clinical diagnosis. To allow a fair comparison, the experiments were performed on two publicly available datasets (ADNI and OASIS) using multiple benchmarks obtained by changing the number of slices per subject extracted from the available 3D volumes. The experiments showed that very deep ResNet and DenseNet models performed better than the shallow ResNet and VGG versions tested in the literature. It was also found that transformer architectures, and DeiT in particular, produced the best classification results and were more robust to the noise added by increasing the number of slices. A significant improvement in accuracy (up to 7%) was achieved compared to the leading state-of-the-art approaches, paving the way for the use of CAD approaches in real-world applications.
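As a rough illustration of the slicing step described above (not the authors' code; the 20-80% depth window used to skip near-empty boundary slices is an assumption), extracting a fixed number of axial slices per subject from a registered 3D volume might look like:

```python
import numpy as np

def extract_axial_slices(volume, n_slices):
    # Pick n_slices evenly spaced axial slices, restricted to the
    # central 60% of the volume depth where brain tissue dominates.
    depth = volume.shape[0]
    lo, hi = int(0.2 * depth), int(0.8 * depth)
    idx = np.linspace(lo, hi - 1, n_slices).round().astype(int)
    return volume[idx]  # shape: (n_slices, H, W)

vol = np.zeros((100, 64, 64))       # hypothetical registered MRI volume
slices = extract_axial_slices(vol, 8)
print(slices.shape)                 # (8, 64, 64)
```

Varying `n_slices` reproduces the kind of benchmark sweep the abstract describes: more slices give the 2D classifier more views of each subject, but also add noisier off-center sections.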


Subject(s)
Alzheimer Disease , Deep Learning , Humans , Alzheimer Disease/diagnostic imaging , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging
3.
Sensors (Basel); 22(3), 2022 Jan 24.
Article in English | MEDLINE | ID: mdl-35161612

ABSTRACT

Neurodevelopmental disorders (NDD) are impairments of the growth and development of the brain and/or central nervous system. In light of clinical findings on the early diagnosis of NDD, and prompted by recent advances in hardware and software technologies, several researchers have tried to introduce automatic systems to analyse babies' movements, even in cribs. Traditional technologies for automatic baby motion analysis leverage contact sensors. Alternatively, remotely acquired video data (e.g., RGB or depth) can be used, with or without active/passive markers positioned on the body. Markerless approaches are easier to set up and maintain (requiring no human intervention) and work well with non-collaborative users, making them the most suitable technologies for clinical applications involving children. On the other hand, they require complex computational strategies for extracting knowledge from data, and therefore depend strongly on advances in computer vision and machine learning, which are among the fastest-growing areas of research. As a consequence, markerless video-based analysis of movements in children with NDD has also been expanding rapidly but, to the best of our knowledge, no survey paper yet provides a broad overview of how recent scientific developments have impacted it. This paper aims to fill this gap, also listing specifically designed data acquisition tools and publicly available datasets. In addition, it gives a glimpse of the most promising techniques in computer vision, machine learning, and pattern recognition that could be profitably exploited for the video-based analysis of children's motion.


Subject(s)
Machine Learning , Nervous System Diseases , Child , Humans , Motion , Movement , Software
4.
Sensors (Basel); 20(21), 2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171757

ABSTRACT

Diatoms are among the dominant phytoplankters in marine and freshwater habitats and important biomarkers of water quality, making their identification and classification one of the current challenges for environmental monitoring. To date, the taxonomy of the species populating a water column is still determined by marine biologists on the basis of their own experience. On the other hand, deep learning is recognized as the elective technique for solving image classification problems. However, a large amount of training data is usually needed, thus requiring synthetic enlargement of the dataset through data augmentation. In the case of microalgae, the large variety of species populating marine environments makes it arduous to perform an exhaustive training that considers all possible classes. However, commercial test slides containing one diatom element per class, fixed between two glasses, are available on the market. These are usually prepared by expert diatomists for taxonomy purposes, thus constituting libraries of the populations that can be found in the oceans. Here we show that such test slides are very useful for training accurate deep Convolutional Neural Networks (CNNs). We demonstrate the successful classification of diatoms based on a proper CNN ensemble and a fully augmented dataset, i.e., one created starting from a single image per class available from a commercial glass slide containing 50 fixed species in a dry setting. This approach avoids the time-consuming steps of water sampling and labeling by skilled marine biologists. To accomplish this goal, we exploit the holographic imaging modality, which provides quantitative phase-contrast maps and flexible a posteriori refocusing thanks to its intrinsic 3D imaging capability. The network model is then validated using holographic recordings of live diatoms imaged in water samples, i.e., in their natural wet environmental condition.
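The single-image augmentation idea can be sketched as follows (a toy version under stated assumptions, not the paper's actual pipeline, which works on holographic phase maps): each class exemplar is expanded into a synthetic training batch via random flips, right-angle rotations, and mild noise:

```python
import numpy as np

def augment_from_single_image(img, n_samples, rng):
    # Build a synthetic training set from one exemplar: random
    # horizontal flips, 90-degree rotations, and additive Gaussian
    # noise (assumes a square image normalised to [0, 1]).
    out = []
    for _ in range(n_samples):
        a = img.copy()
        if rng.random() < 0.5:
            a = np.fliplr(a)
        a = np.rot90(a, k=int(rng.integers(0, 4)))
        a = a + rng.normal(0.0, 0.02, a.shape)   # mild intensity noise
        out.append(np.clip(a, 0.0, 1.0))
    return np.stack(out)

rng = np.random.default_rng(0)
exemplar = rng.random((64, 64))                  # one image of one species
batch = augment_from_single_image(exemplar, 32, rng)
print(batch.shape)  # (32, 64, 64)
```

Real pipelines would add elastic deformations and brightness jitter as well; the point is that one labelled exemplar per class can seed an arbitrarily large training set.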


Subject(s)
Diatoms/classification , Holography , Machine Learning , Microscopy , Neural Networks, Computer
5.
Sensors (Basel); 18(11), 2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453518

ABSTRACT

In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera in unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and then both temporal and geometrical relationships, learned by Convolutional Neural Networks, are exploited to regularize the gathered estimates. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized and contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline can improve the analysis of facial expressions produced by ASD children. A comparison of the system's outputs with the evaluations performed by psychologists on the same group of ASD children makes evident how the performed quantitative analysis of children's abilities helps to go beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are affected by human limitations in observing and understanding multi-cue behaviors such as facial expressions.
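As a hedged sketch of the personalized non-emotional baseline idea (the action-unit values and the z-score normalization below are illustrative assumptions, not the paper's exact statistical model): per-AU statistics are estimated from a child's own neutral frames, and new frames are expressed as deviations from that personal reference.

```python
import numpy as np

def neutral_baseline(neutral_frames):
    # Per-AU mean and std estimated from the child's own
    # non-emotional (neutral) frames; the epsilon avoids
    # division by zero for perfectly constant AUs.
    mu = neutral_frames.mean(axis=0)
    sd = neutral_frames.std(axis=0) + 1e-8
    return mu, sd

def normalized_intensity(frame_aus, mu, sd):
    # Express AU intensities as deviations from the personal
    # baseline, so subtle voluntary activations stand out even
    # for children with atypical resting faces.
    return (frame_aus - mu) / sd

# Hypothetical AU intensities: 3 neutral frames x 2 action units.
neutral = np.array([[0.10, 0.20], [0.30, 0.20], [0.20, 0.20]])
mu, sd = neutral_baseline(neutral)
z = normalized_intensity(np.array([0.50, 0.20]), mu, sd)
print(z[0] > 3.0, abs(z[1]) < 1e-6)  # True True
```

A per-child baseline makes the same raw AU intensity mean different things for different children, which is exactly what a population-level threshold cannot capture.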


Subject(s)
Face/physiology , Facial Expression , Neural Networks, Computer , Adolescent , Algorithms , Autism Spectrum Disorder/diagnosis , Child , Emotions/physiology , Female , Humans , Male
6.
Foods; 11(24), 2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36553705

ABSTRACT

The lentil (Lens culinaris Medik.) is one of the major pulse crops cultivated worldwide. However, in recent decades, lentil cultivation has decreased in many areas of the Mediterranean countries due to low yields, new lifestyles, and changed eating habits. Thus, many landraces and local varieties have disappeared, and local farmers remain the only custodians of the treasure of lentil genetic resources. Recently, the lentil has been rediscovered to meet the needs of more sustainable agriculture and food systems. Here, we propose an image analysis approach that, besides being rapid and non-destructive, can characterize seed size grading and seed coat morphology. The results indicated that image analysis can give much more detailed and precise descriptions of grain size and shape characteristics than can practically be achieved by manual quality assessment. Lentil size measurements, combined with seed coat descriptors and the color attributes of the grains, allowed us to develop an algorithm able to identify 64 red lentil genotypes collected at ICARDA, with an accuracy approaching 98% for seed size grading and close to 93% for the classification of seed coat morphology.
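A minimal sketch of the kind of size/shape descriptors such an image analysis could compute from a segmented seed mask (the descriptors chosen here are illustrative assumptions, not the paper's actual feature set):

```python
import numpy as np

def seed_descriptors(mask):
    # Basic size/shape descriptors from a binary seed mask:
    # area (pixel count), equivalent circular diameter, and
    # bounding-box aspect ratio as simple grading features.
    area = int(mask.sum())
    eq_diam = 2.0 * np.sqrt(area / np.pi)
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return {"area": area, "eq_diameter": eq_diam, "aspect": w / h}

# Hypothetical 10x10 mask with a 4x6 rectangular "seed".
m = np.zeros((10, 10), dtype=bool)
m[3:7, 2:8] = True
d = seed_descriptors(m)
print(d["area"], round(d["aspect"], 2))  # 24 1.5
```

With a known pixel-to-millimetre scale, such descriptors translate directly into physical size classes, which is what makes image-based grading more repeatable than manual sieving.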

7.
Front Psychol; 12: 678052, 2021.
Article in English | MEDLINE | ID: mdl-34366997

ABSTRACT

Several studies have found a delay in the development of facial emotion recognition and expression in children with an autism spectrum condition (ASC). Several interventions have been designed to help children fill this gap. Most of them adopt technological devices (i.e., robots, computers, and avatars) as social mediators, and they have reported evidence of improvement. Few interventions have aimed at promoting both emotion recognition and expression abilities and, among these, most have focused on emotion recognition. Moreover, a crucial point is the generalization of the abilities acquired during treatment to naturalistic interactions. This study aimed to evaluate the effectiveness of two technology-based interventions focused on the expression of basic emotions, comparing robot-based training with a "hybrid" computer-based one. Furthermore, we explored the engagement elicited by the hybrid technological device introduced in the study as an intermediate step to facilitate the generalization of the acquired competencies to naturalistic settings. A two-group pre-post-test design was applied to a sample of 12 children with autism (M = 9.33; ds = 2.19). The children were included in one of two groups: group 1 received robot-based training (n = 6), and group 2 received computer-based training (n = 6). Pre- and post-intervention evaluations (i.e., time) of the recognition and expression of four basic emotions (happiness, sadness, fear, and anger) were performed. Non-parametric ANOVAs found significant time effects between pre- and post-intervention on the ability to recognize sadness [t (1) = 7.35, p = 0.006; pre: M (ds) = 4.58 (0.51); post: M (ds) = 5], and to express happiness [t (1) = 5.72, p = 0.016; pre: M (ds) = 3.25 (1.81); post: M (ds) = 4.25 (1.76)] and sadness [t (1) = 10.89, p < 0; pre: M (ds) = 1.5 (1.32); post: M (ds) = 3.42 (1.78)]. The group*time interactions were significant for fear [t (1) = 1.019, p = 0.03] and anger expression [t (1) = 1.039, p = 0.03]. However, Mann-Whitney comparisons did not show significant differences between robot-based and computer-based training. Finally, no difference was found in the levels of engagement between the two groups in terms of the number of voice prompts given during the interventions. Although the results are preliminary and should be interpreted with caution, this study suggests that the two types of technology-based training, one mediated by a humanoid robot and the other by a pre-recorded video of a peer, perform similarly in promoting the recognition and expression of basic emotions in children with an ASC. The findings represent a first step toward generalizing abilities acquired in a laboratory-training situation to naturalistic interactions.

8.
Springerplus; 4: 645, 2015.
Article in English | MEDLINE | ID: mdl-26543779

ABSTRACT

Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive technology applications such as human-robot interaction, where robust emotional awareness is a key point in best accomplishing the assistive task. This paper proposes a comprehensive study on the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, showing how this powerful technique can be effectively exploited for this purpose. In particular, the paper highlights that a proper setting of the HOG parameters can make this descriptor one of the most suitable for characterizing the peculiarities of facial expressions. A large experimental session, divided into three phases, was carried out using a consolidated algorithmic pipeline. The first phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits; to this end, a successful comparison with the most commonly used FER frameworks was carried out. In the second phase, different publicly available facial datasets were used to test the system on images acquired in different conditions (e.g., image resolution, lighting conditions, etc.). As a final phase, a test on continuous data streams was carried out on-line in order to validate the system in real-world operating conditions simulating real-time human-machine interaction.
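To make the HOG idea concrete, the orientation histogram of a single cell can be computed from image gradients as below (a from-scratch toy version with 9 unsigned bins; the paper's parameter choices are not reproduced):

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    # Orientation histogram for one HOG cell: gradient magnitudes
    # are accumulated into unsigned-orientation bins spanning
    # 0..180 degrees, then L2-normalised.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (np.linalg.norm(hist) + 1e-8)

# A vertical step edge: the gradient is horizontal (0 degrees),
# so the energy lands in the first orientation bin.
cell = np.zeros((8, 8)); cell[:, 4:] = 1.0
h = hog_cell_histogram(cell)
print(h.argmax())  # 0
```

Tuning the bin count and cell/block sizes is precisely the "proper setting of the HOG parameters" the paper investigates: coarser bins are more robust to noise, finer bins capture subtler expression-induced edge orientations.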
