Results 1 - 10 of 10
1.
IEEE J Biomed Health Inform ; 28(2): 870-880, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38019619

ABSTRACT

Obstetrics and gynecology (OB/GYN) are areas of medicine that specialize in the care of women during pregnancy and childbirth and in the diagnosis of diseases of the female reproductive system. Ultrasound scanning has become ubiquitous in these branches of medicine, as breast and fetal ultrasound images can guide the sonographer toward a diagnosis. However, ultrasound images are resource-intensive to annotate and are often unavailable for training purposes for confidentiality reasons, which explains why deep learning methods are still not as commonly used for OB/GYN tasks as for other computer vision tasks. To address this lack of training data, we propose Prior-Guided Attribution (PGA), a novel method that takes advantage of prior spatial information during training by guiding part of the model's attribution toward these salient areas. Furthermore, we introduce a novel prior allocation strategy that takes several spatial priors into account simultaneously while leaving the model enough degrees of freedom to learn relevant features on its own. The proposed method uses the additional information only during training; it is not needed at inference. After validating the individual elements of the method as well as its genericity on a facial analysis problem, we demonstrate that PGA consistently outperforms existing baselines on two ultrasound imaging OB/GYN tasks: breast cancer detection and scan plane detection with segmentation prior maps.
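The abstract does not give the loss formulation, but the idea of guiding part of the attribution toward prior salient areas can be sketched as an auxiliary penalty. Everything below (the function name, the Grad-CAM-style non-negative attribution map, the binary prior mask) is an illustrative assumption, not the paper's actual PGA objective:

```python
import numpy as np

def attribution_guidance_loss(attribution, prior_mask, eps=1e-8):
    """Penalize attribution mass that falls outside the prior salient area.

    attribution: non-negative (H, W) attribution map (e.g. Grad-CAM-like).
    prior_mask:  binary (H, W) map marking the prior's salient pixels.
    Returns a scalar in [0, 1]: near 0 when all attribution lies inside
    the prior, 1 when none of it does.
    """
    total = attribution.sum() + eps
    inside = (attribution * prior_mask).sum()
    return float(1.0 - inside / total)

# Toy check: attribution concentrated inside the prior gives a near-zero
# penalty; attribution entirely outside gives the maximal penalty.
attr = np.zeros((4, 4)); attr[1, 1] = 1.0
mask_in = np.zeros((4, 4)); mask_in[1, 1] = 1.0
mask_out = np.zeros((4, 4)); mask_out[3, 3] = 1.0
loss_in = attribution_guidance_loss(attr, mask_in)
loss_out = attribution_guidance_loss(attr, mask_out)
```

Such a term would be added to the task loss only during training, which is consistent with the abstract's claim that the prior is not needed at inference.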


Subjects
Gynecology , Internship and Residency , Obstetrics , Humans , Pregnancy , Male , Female , Gynecology/education , Obstetrics/education , Breast , Neural Networks, Computer
2.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3664-3676, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35653454

ABSTRACT

Pruning deep neural networks (DNNs) is a prominent field of study aimed at accelerating inference. In this paper, we introduce RED++, a novel data-free pruning protocol. Requiring only a trained network, and not specific to any particular DNN architecture, it exploits an adaptive data-free scalar hashing that exposes redundancies among neuron weight values. We study theoretical and empirical guarantees on the preservation of accuracy under the hashing, as well as the expected pruning ratio resulting from the exploitation of these redundancies. We propose a novel data-free pruning technique for DNN layers that removes input-wise redundant operations. The algorithm is straightforward and parallelizable, and offers a novel perspective on DNN pruning by shifting the burden of large computation to efficient memory access and allocation. We provide theoretical guarantees on the performance of RED++ and empirically demonstrate its superiority over other data-free pruning methods and its competitiveness with data-driven ones on ResNets, MobileNets, and EfficientNets.
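As a rough illustration of the redundancy idea (not the actual RED++ protocol, whose adaptive hashing is described in the paper), one can hash neuron weight vectors after scalar rounding and merge those that collide; the fixed-precision rounding here is a hypothetical stand-in for the adaptive scalar hashing:

```python
import numpy as np

def merge_redundant_neurons(W, decimals=2):
    """Data-free redundancy-removal sketch for one dense layer.

    W: (out_features, in_features) weight matrix. Neurons whose weight
    vectors collide after scalar rounding are merged into a single
    representative neuron. Returns (W_pruned, counts), where counts[i]
    is how many original neurons map to pruned neuron i.
    """
    hashed = np.round(W, decimals=decimals)
    reps, counts, seen = [], [], {}
    for row in hashed:
        key = row.tobytes()          # hash of the rounded weight vector
        if key in seen:
            counts[seen[key]] += 1   # redundant neuron: merge, don't keep
        else:
            seen[key] = len(reps)
            reps.append(row)
            counts.append(1)
    return np.array(reps), counts

# Two neurons become identical after rounding and are merged.
W = np.array([[0.501, 1.0],
              [0.499, 1.0],
              [2.0,   3.0]])
W_pruned, counts = merge_redundant_neurons(W, decimals=1)
```

In a real network the merged layer's outputs would be re-wired into the next layer, which is where the shift from computation to memory access mentioned in the abstract comes in.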

3.
J Med Internet Res ; 24(4): e35465, 2022 04 20.
Article in English | MEDLINE | ID: mdl-35297766

ABSTRACT

BACKGROUND: The applications of artificial intelligence (AI) processes have grown significantly across all medical disciplines in recent decades. Two main types of AI have been applied in medicine: symbolic AI (eg, knowledge bases and ontologies) and nonsymbolic AI (eg, machine learning and artificial neural networks). Consequently, AI has also been applied across most obstetrics and gynecology (OB/GYN) domains, including general obstetrics, gynecology surgery, fetal ultrasound, and assisted reproductive medicine, among others. OBJECTIVE: The aim of this study was to provide a systematic review establishing the actual contributions of AI reported in OB/GYN discipline journals. METHODS: The PubMed database was searched for citations indexed with "artificial intelligence" and at least one of the following Medical Subject Headings (MeSH) terms between January 1, 2000, and April 30, 2020: "obstetrics"; "gynecology"; "reproductive techniques, assisted"; or "pregnancy." All publications in OB/GYN core discipline journals were considered. The selection of journals was based on disciplines defined in Web of Science. Publications were excluded if no AI process was used in the study; review, editorial, and commentary articles were also excluded. The study analysis comprised (1) classification of publications into OB/GYN domains, (2) description of AI methods, (3) description of AI algorithms, (4) description of data sets, (5) description of AI contributions, and (6) description of the validation of the AI process. RESULTS: The PubMed search retrieved 579 citations, and 66 publications met the selection criteria. All OB/GYN subdomains were covered: obstetrics (41%, 27/66), gynecology (3%, 2/66), assisted reproductive medicine (33%, 22/66), early pregnancy (2%, 1/66), and fetal medicine (21%, 14/66). Both machine learning methods (39/66) and knowledge base methods (25/66) were represented. Machine learning used imaging, numerical, and clinical data sets; knowledge base methods used mostly omics data sets. The actual contributions of AI were method/algorithm development (53%, 35/66), hypothesis generation (42%, 28/66), or software development (3%, 2/66). Validation was performed on a single data set (86%, 57/66), and no external validation was reported. We observed a general rising trend in publications related to AI in OB/GYN over the last two decades. Most of these publications (82%, 54/66) remain outside the scope of the usual OB/GYN journals. CONCLUSIONS: OB/GYN discipline journals mostly report preliminary work (eg, proof-of-concept algorithms or methods) in AI applied to the discipline, and clinical validation remains an unmet prerequisite. Improvement driven by new AI research guidelines is expected. However, these guidelines cover only part of the AI approaches (nonsymbolic) reported in this review; hence, updates should be considered.


Subjects
Gynecology , Obstetrics , Periodicals as Topic , Artificial Intelligence , Female , Humans , Pregnancy
4.
Proc ACM Int Conf Multimodal Interact ; 2020: 874-875, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33274351

ABSTRACT

The goal of the Face and Gesture Analysis for Health Informatics workshop is to share and discuss achievements as well as challenges in using computer vision and machine learning for automatic human behavior analysis and modeling in clinical research and healthcare applications. The workshop aims to promote current research and support the growth of multidisciplinary collaborations to advance this groundbreaking research. The meeting gathers scientists working in related areas of computer vision and machine learning, multimodal signal processing and fusion, human-centered computing, behavioral sensing, assistive technologies, and medical tutoring systems for healthcare applications and medicine.

5.
IEEE Trans Neural Syst Rehabil Eng ; 28(8): 1731-1741, 2020 08.
Article in English | MEDLINE | ID: mdl-32746295

ABSTRACT

Next-generation prosthetics will rely heavily on myoelectric "pattern recognition" (PR) control approaches to improve their users' dexterity. One major factor in the successful functioning of these approaches lies in training amputees and in their understanding of how these prostheses work. We therefore propose an intuitive pattern similarity biofeedback that can easily be used to train amputees, allowing them to optimize their muscular contractions and improve their control performance. Experiments were conducted on twenty able-bodied participants and one transradial amputee. Their performance in controlling an interface through a myoelectric PR algorithm was evaluated before and after a short automatic user training session: ten participants used the proposed visual biofeedback, and the other ten used a generic PR algorithm output feedback. Participants trained with the proposed biofeedback increased their classification score for the retrained gesture (by 39.4%) without affecting overall classification performance (which improved by 10.2%), avoiding the over-training and increased false-positive rate observed in the control group. Additional analysis indicates a clear change in contraction strategy only in the group that used the proposed biofeedback. These preliminary results highlight the potential of this method, which focuses not on over-optimizing the pattern recognition algorithm or physically training the users, but on providing them with simple and intuitive information to adapt or change their motor strategies and resolve some misclassification issues.
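A hedged sketch of what a "pattern similarity" score could look like: cosine similarity between a live EMG feature vector and a stored template of the target gesture. For non-negative amplitude features this lies in [0, 1], so it can be displayed directly as a biofeedback gauge (1 = the contraction matches the trained pattern). The function and feature choice are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pattern_similarity(features, template, eps=1e-12):
    """Cosine similarity between a live EMG feature vector and the
    stored template of the target gesture."""
    num = float(np.dot(features, template))
    den = float(np.linalg.norm(features) * np.linalg.norm(template)) + eps
    return num / den

# A contraction identical to the template scores ~1; a completely
# different activation pattern scores 0.
template = np.array([1.0, 0.0, 2.0])
score_same = pattern_similarity(template, template)
score_diff = pattern_similarity(np.array([1.0, 0.0, 0.0]),
                                np.array([0.0, 1.0, 0.0]))
```

The appeal of such a score for training is that it tells the user *how close* a contraction is to the target pattern, rather than only which class the PR algorithm output.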


Subjects
Amputees , Artificial Limbs , Biofeedback, Psychology , Electromyography , Humans , Pattern Recognition, Automated
6.
Transl Psychiatry ; 10(1): 54, 2020 02 03.
Article in English | MEDLINE | ID: mdl-32066713

ABSTRACT

Automated behavior analysis is a promising tool for overcoming current assessment limitations in psychiatry. At 9 months of age, we recorded 32 infants with West syndrome (WS) and 19 typically developing (TD) controls during a standardized mother-infant interaction. We computed infant hand movements (HM), speech turn-taking of both partners (vocalization, pause, silence, overlap), and motherese. We then assessed whether multimodal social signals and interactional synchrony at 9 months could predict outcomes (autism spectrum disorder (ASD) and intellectual disability (ID)) of infants with WS at 4 years. At follow-up, 10 infants had developed ASD/ID (WS+). The best machine learning classifier reached 76.47% accuracy classifying WS vs. TD and 81.25% accuracy classifying WS+ vs. WS-. The 10 best features for distinguishing WS+ from WS- combined infant vocalization and HM features with vocalization synchrony features. These data indicate that behavioral and interaction imaging was able to predict ASD/ID in high-risk children with WS.


Subjects
Autism Spectrum Disorder , Autistic Disorder , Intellectual Disability , Spasms, Infantile , Child , Humans , Infant , Speech
7.
Mol Autism ; 11(1): 5, 2020.
Article in English | MEDLINE | ID: mdl-31956394

ABSTRACT

Background: Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD). Methods: We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice to perform two experimental tasks producing FEs with emotional valence. FEs were explored through human judges' ratings and through random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs. Results: Using multivariate models including other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. In addition, when we explored how the RF classifiers performed, we found that the classification tasks, except for sadness, were highly accurate, and that the classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when RF classifiers were tested on children with ASD, anger was often confused with happiness. Limitations: The sample size of the ASD group was smaller than that of the TD group; we tried to compensate for this limitation through several control calculations. Conclusion: Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs in children with ASD carries more ambiguity.
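To make the "geometric features" concrete: one common choice (an assumption here, not necessarily the paper's exact feature set) is the vector of pairwise Euclidean distances between landmarks, which for the 49 landmarks used in the study would yield 49 x 48 / 2 = 1176 features per frame:

```python
import numpy as np
from itertools import combinations

def geometric_features(landmarks):
    """Pairwise inter-landmark distances as simple geometric features.

    landmarks: (N, 2) array of (x, y) facial landmark coordinates.
    Returns a 1-D vector of length N*(N-1)/2, the kind of geometry-based
    input a random forest classifier can consume directly.
    """
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# A 3-4-5 right triangle of landmarks gives the three expected distances.
pts = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 4.0]])
feats = geometric_features(pts)
```

Distance features of this kind are invariant to translation, which is one reason geometry-based descriptors pair well with tree ensembles such as random forests.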


Subjects
Autism Spectrum Disorder/psychology , Facial Expression , Child , Emotions , Female , Humans , Male
8.
Front Psychol ; 9: 446, 2018.
Article in English | MEDLINE | ID: mdl-29670561

ABSTRACT

The production of facial expressions (FEs) is an important skill that allows children to share and adapt emotions with their relatives and peers during social interactions. These skills are impaired in children with autism spectrum disorder. However, the way typical children develop and master the production of FEs has not yet been clearly assessed. This study aimed to explore factors that could influence the production of FEs in childhood, such as age, gender, emotion subtype (sadness, anger, joy, and neutral), elicitation task (on request, imitation), area of recruitment (French Riviera and Paris), and emotion multimodality. A total of 157 children aged 6-11 years were enrolled in Nice and Paris, France. We asked them to produce FEs in two different tasks: imitation with an avatar model and production on request without a model. Results from a multivariate analysis revealed that (1) children performed better with age; (2) positive emotions were easier to produce than negative ones; (3) children produced better FEs on request (as opposed to imitation); and (4) Riviera children performed better than Parisian children, suggesting regional influences on emotion production. We conclude that facial emotion production is a complex developmental process influenced by several factors that need to be acknowledged in future research.

9.
Front Psychol ; 9: 83, 2018.
Article in English | MEDLINE | ID: mdl-29515472

ABSTRACT

Highlights
- The kinematics of hand movements (spatial use, curvature, acceleration, and velocity) of infants with their mothers in an interactive setting are significantly associated with age in cohorts of typical and at-risk infants.
- Hand movements differ significantly at 5-6 months of age, depending on the context: relating either with an object or with a person.
- Environmental and developmental factors shape the developmental trajectories of hand movements in different cohorts: environment for infants with VIMs; stage of development for premature infants and those with West syndrome; and both factors for infants with orality disorders.
- The curvature of hand movements specifically reflects atypical development in infants with West syndrome when developmental age is considered.

We aimed to discriminate between typical and atypical developmental trajectory patterns of at-risk infants in an interactive setting in this observational, longitudinal study, under the assumption that hand movements (HM) reflect preverbal communication and its disorders. We examined the developmental trajectories of HM in five cohorts of at-risk infants and one control cohort, followed from ages 2 to 10 months: 25 with West syndrome (WS), 13 with preterm birth (PB), 16 with orality disorder (OD), 14 with visually impaired mothers (VIM), 7 with early hospitalization (EH), and 19 typically developing infants (TD). Video-recorded data were collected in three different structured interactive contexts. Descriptors of hand motion were used to examine the extent to which HM were associated with age and cohort. We obtained four principal results: (i) the kinematics of HM (spatial use, curvature, acceleration, and velocity) were significantly associated with age in all cohorts; (ii) HM significantly differed at 5-6 months of age in TD infants, depending on the context; (iii) environmental and developmental factors shaped the developmental trajectories of HM in different cohorts: environment for VIM, development for PB and WS, and both factors for OD; and (iv) the curvature of HM showed atypical development in WS infants when developmental age was considered. These findings support the importance of using the kinematics of HM to identify very early developmental disorders in an interactive context, which would allow early prevention and intervention for at-risk infants.

10.
Neural Netw ; 22(5-6): 748-56, 2009.
Article in English | MEDLINE | ID: mdl-19616404

ABSTRACT

The head pose estimation problem is well known to be a challenging task in computer vision and a useful tool for several applications involving human-computer interaction. The problem can be stated as a regression task where the input is an image and the output is the pan and tilt angles. Finding the optimal regression is hard because of the high dimensionality of the input (the number of image pixels) and the large variety of morphologies and illumination conditions. We propose a new method combining a boosting strategy for feature selection with a neural network for the regression. The candidate features are a very large set of Haar-like wavelets, which are well known to be well suited to face image processing. For the feature selection, we introduce a new Fuzzy Functional Criterion (FFC), which evaluates the link between a feature and the output without estimating the joint probability density function, as mutual information requires. The boosting strategy uses this criterion at each step: features are evaluated by the FFC using weights on examples computed from the error produced by the neural network trained at the previous step. Tests are carried out on the commonly used Pointing 04 database and compared with three state-of-the-art methods. We also evaluate the accuracy of the estimation on FacePix, a database with high angular resolution. Our method compares favorably to a convolutional neural network, which is well known to incorporate feature extraction in its first layers.
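Haar-like wavelet responses of the kind used as candidate features are classically computed in constant time from an integral image (summed-area table). The following minimal sketch (illustrative only, not the paper's exact feature pool) evaluates a two-rectangle feature as the difference between the sums of its left and right halves:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: cumulative sums over rows, then columns."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(img, r0, c0, r1, c1):
    """Two-rectangle Haar-like response: left half minus right half."""
    ii = integral_image(img)
    cm = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, cm) - rect_sum(ii, r0, cm, r1, c1)

# Bright left half, dark right half: a strong positive response.
img = np.ones((4, 4)); img[:, 2:] = 0.0
resp = haar_two_rect(img, 0, 0, 4, 4)
```

Because every rectangle sum costs four lookups regardless of its size, a boosting loop can afford to score a very large pool of such candidate features at each step, which is what makes the FFC-driven selection described above practical.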


Subjects
Neural Networks, Computer , Regression Analysis , Algorithms , Databases, Factual , Face , Fuzzy Logic , Humans , Nonlinear Dynamics , Photic Stimulation , Probability