Results 1 - 4 of 4
1.
IEEE J Biomed Health Inform ; 28(2): 870-880, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38019619

ABSTRACT

Obstetrics and gynecology (OB/GYN) are areas of medicine that specialize in the care of women during pregnancy and childbirth and in the diagnosis of diseases of the female reproductive system. Ultrasound scanning has become ubiquitous in these branches of medicine, as breast or fetal ultrasound images can guide the sonographer through the diagnosis. However, ultrasound scan images are resource-intensive to annotate and are often unavailable for training purposes for confidentiality reasons, which explains why deep learning methods are still not as commonly used for OB/GYN tasks as for other computer vision tasks. To tackle this lack of training data for deep neural networks, we propose Prior-Guided Attribution (PGA), a novel method that takes advantage of prior spatial information during training by guiding part of the model's attribution towards these salient areas. Furthermore, we introduce a novel prior allocation strategy that takes several spatial priors into account at the same time while leaving the model enough degrees of freedom to learn relevant features by itself. The proposed method uses the additional information only during training; it is not needed at inference. After validating the different elements of the method, as well as its genericity, on a facial analysis problem, we demonstrate that PGA consistently outperforms existing baselines on two OB/GYN ultrasound imaging tasks: breast cancer detection and scan plane detection with segmentation prior maps.
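As a rough illustration of the guidance idea, the sketch below adds an auxiliary term to the training loss that penalizes attribution mass falling outside the prior regions. It is a minimal PyTorch sketch under stated assumptions (a simple gradient-based input attribution and a binary prior mask); the names pga_loss and lambda_prior are illustrative, not the authors' code, and the published attribution and prior allocation strategy are more elaborate.

import torch
import torch.nn.functional as F

def pga_loss(model, x, y, prior_mask, lambda_prior=0.1):
    """Cross-entropy plus a penalty on attribution falling outside the prior.

    Illustrative sketch only (not the published PGA method).
    x: input batch (B, C, H, W); y: class labels (B,);
    prior_mask: (B, 1, H, W) binary map of salient regions (1 = salient).
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Simple gradient-based attribution: |d score / d input|, summed over channels.
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, x, create_graph=True)[0]
    attribution = grads.abs().sum(dim=1, keepdim=True)  # (B, 1, H, W)
    attribution = attribution / (attribution.sum(dim=(2, 3), keepdim=True) + 1e-8)
    # Penalize the share of attribution mass outside the prior regions.
    outside = (attribution * (1.0 - prior_mask)).sum(dim=(2, 3)).mean()
    return ce + lambda_prior * outside

Note that, consistent with the abstract, the prior mask enters only this training loss; inference uses the plain model(x) forward pass.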


Subjects
Gynecology; Internship and Residency; Obstetrics; Humans; Pregnancy; Male; Female; Gynecology/education; Obstetrics/education; Breast; Neural Networks, Computer
2.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3664-3676, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35653454

ABSTRACT

Pruning Deep Neural Networks (DNNs) is a prominent field of study aimed at accelerating inference. In this paper, we introduce RED++, a novel data-free pruning protocol. Requiring only a trained neural network, and not being specific to any particular DNN architecture, it exploits an adaptive data-free scalar hashing that exposes redundancies among neuron weight values. We study the theoretical and empirical guarantees on the preservation of accuracy under the hashing, as well as the expected pruning ratio resulting from exploiting said redundancies. We propose a novel data-free pruning technique for DNN layers that removes input-wise redundant operations. This algorithm is straightforward, parallelizable, and offers a novel perspective on DNN pruning by shifting the burden of large computation to efficient memory access and allocation. We provide theoretical guarantees on the performance of RED++ and empirically demonstrate its superiority over other data-free pruning methods and its competitiveness with data-driven ones on ResNets, MobileNets, and EfficientNets.
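The hashing step can be pictured as bucketing near-equal weight values so that duplicate neurons become exactly redundant and can be merged. The NumPy sketch below is an illustrative simplification, not the published RED++ algorithm; hash_weights and merge_redundant_rows are hypothetical names, and the real method additionally handles input-wise redundancies and re-wires downstream layers.

import numpy as np

def hash_weights(w, step=1e-2):
    """Bucket scalar weights so near-equal values collapse to one representative."""
    return np.round(w / step) * step

def merge_redundant_rows(w):
    """Drop duplicate output rows after hashing; return the kept rows plus a map
    from each original neuron to the kept row that stands for it."""
    hashed = hash_weights(w)
    unique_rows, inverse = np.unique(hashed, axis=0, return_inverse=True)
    return unique_rows, inverse  # a downstream layer would re-wire via `inverse`

w = np.random.randn(64, 128).astype(np.float32)
pruned, mapping = merge_redundant_rows(w)
print(w.shape, "->", pruned.shape)

The design intuition matches the abstract: the expensive part becomes finding and merging identical rows (memory access and allocation), not recomputing activations.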

3.
Mol Autism ; 11(1): 5, 2020.
Article in English | MEDLINE | ID: mdl-31956394

ABSTRACT

Background: Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD).

Methods: We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice and asked them to perform two experimental tasks producing FEs with emotional valence. FEs were explored through judges' ratings and through random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs.

Results: Using multivariate models that included other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. In addition, when we explored how the RF classifiers performed, we found that the classification tasks, except for sadness, were highly accurate, and that the classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when RF classifiers were tested on children with ASD, anger was often confused with happiness.

Limitations: The sample size of the ASD group was smaller than that of the TD group; we used several control calculations to compensate for this limitation.

Conclusion: Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs by children with ASD carries more ambiguity.
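As an illustration of the classification setup described in Methods, the sketch below trains a random forest on landmark-derived features with scikit-learn. The data here are synthetic stand-ins for the real per-frame geometric and appearance features, and all names are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# 49 landmarks x (x, y) coordinates per frame -> 98 geometric features.
X = rng.normal(size=(200, 98))
y = rng.integers(0, 4, size=200)  # e.g. joy / anger / sadness / neutral

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances hint at how many landmarks the classifier relies on,
# echoing the finding that ASD classification needed more landmarks.
clf.fit(X, y)
print("top feature indices:", np.argsort(clf.feature_importances_)[-5:])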


Subjects
Autism Spectrum Disorder/psychology; Facial Expression; Child; Emotions; Female; Humans; Male
4.
Front Psychol ; 9: 446, 2018.
Article in English | MEDLINE | ID: mdl-29670561

ABSTRACT

The production of facial expressions (FEs) is an important skill that allows children to share and adapt emotions with their relatives and peers during social interactions. These skills are impaired in children with Autism Spectrum Disorder, yet the way in which typically developing children master the production of FEs has still not been clearly assessed. This study aimed to explore factors that could influence the production of FEs in childhood, such as age, gender, emotion subtype (sadness, anger, joy, and neutral), elicitation task (on request, imitation), area of recruitment (French Riviera and Paris area), and emotion multimodality. A total of 157 children aged 6-11 years were enrolled in Nice and Paris, France. We asked them to produce FEs in two different tasks: imitation with an avatar model and production on request without a model. Results from a multivariate analysis revealed that (1) children performed better with age; (2) positive emotions were easier to produce than negative ones; (3) children produced better FEs on request than in imitation; and (4) Riviera children performed better than Parisian children, suggesting regional influences on emotion production. We conclude that facial emotion production is a complex developmental process influenced by several factors that need to be acknowledged in future research.
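For readers unfamiliar with the statistical setup, a multivariate analysis of this kind can be sketched as a factor model over the covariates listed above. The snippet below uses statsmodels on synthetic data purely as an illustration; it is not the study's actual model, data, or ratings.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 157  # study sample size; the ratings below are synthetic
df = pd.DataFrame({
    "rating": rng.normal(5, 1.5, n),  # judged FE production quality (synthetic)
    "age": rng.integers(6, 12, n),
    "gender": rng.choice(["F", "M"], n),
    "emotion": rng.choice(["joy", "anger", "sadness", "neutral"], n),
    "task": rng.choice(["request", "imitation"], n),
    "site": rng.choice(["Nice", "Paris"], n),
})
# Fit an OLS model with all factors entered jointly, as in a multivariate analysis.
model = smf.ols("rating ~ age + gender + emotion + task + site", data=df).fit()
print(model.summary())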
