1.
Eur Radiol ; 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39134745

ABSTRACT

OBJECTIVES: The interpretation of mammograms requires many years of training and experience. Currently, training in mammography, like the rest of diagnostic radiology, is through institutional libraries, books, and experience accumulated over time. We explore whether artificial intelligence (AI)-generated images can help in simulation education and result in measurable improvement in the performance of residents in training. METHODS: We developed a generative adversarial network (GAN) capable of generating mammography images with varying characteristics, such as size and density, and created a tool with which a user could control these characteristics. The tool allowed the user (a radiology resident) to realistically insert cancers within different regions of the mammogram. We then provided this tool to residents in training. Residents were randomized into a practice group and a non-practice group, and the difference in performance before and after practice with the tool (in comparison to no intervention in the non-practice group) was assessed. RESULTS: Fifty residents participated in the study: 27 underwent simulation training and 23 did not. After simulation training, residents showed a significant improvement in sensitivity (7.43%, p = 0.03), negative predictive value (5.05%, p = 0.008), and accuracy (6.49%, p = 0.01) in the detection of cancer on mammograms. CONCLUSION: Our study shows the value of simulation training in diagnostic radiology and explores the potential of generative AI to enable such simulation training. CLINICAL RELEVANCE STATEMENT: Using generative artificial intelligence, simulation training modules can be developed that help residents in training by providing them with a visual impression of a variety of different cases.
KEY POINTS: Generative networks can produce diagnostic images with specific characteristics, potentially useful for training residents. Training with generated images improved residents' mammographic diagnostic abilities. A game-like interface that exploits these networks can yield improved performance over a short training period.
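The controllable generation described above can be sketched as a conditional generator: conditioning variables (here, hypothetical lesion-size and breast-density knobs) are concatenated with the latent vector so that a user-facing tool can steer them. This is an illustrative numpy sketch with invented shapes and untrained weights, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16
COND_DIM = 2        # hypothetical [lesion_size, breast_density], scaled to [0, 1]
IMG_SIDE = 8        # toy resolution; real mammograms are far larger

# Fixed random weights stand in for a trained generator network.
W = rng.normal(0, 0.1, size=(LATENT_DIM + COND_DIM, IMG_SIDE * IMG_SIDE))

def generate(z, cond):
    """Map latent noise + conditioning vector to a synthetic image in [-1, 1]."""
    x = np.concatenate([z, cond]) @ W
    return np.tanh(x).reshape(IMG_SIDE, IMG_SIDE)

# Changing only the conditioning vector steers the output, which is what
# lets a training tool expose "size" and "density" sliders to the user.
img = generate(rng.normal(size=LATENT_DIM), np.array([0.7, 0.3]))
print(img.shape)    # (8, 8)
```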

2.
Eur Radiol ; 31(8): 6039-6048, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33471219

ABSTRACT

OBJECTIVES: To study whether a trained convolutional neural network (CNN) can assist radiologists in differentiating coronavirus disease (COVID)-positive from COVID-negative patients on chest X-ray (CXR), through an ambispective clinical study. To identify subgroups of patients where artificial intelligence (AI) can be of particular value, and to analyse which imaging features may have contributed to the performance of the AI by means of visualisation techniques. METHODS: CXRs of 487 patients were classified into four categories (normal, classical COVID, indeterminate, and non-COVID) by consensus opinion of 2 radiologists. CXRs classified as "normal" or "indeterminate" were then analysed by the AI, and the final categorisation was guided by the prediction of the network. Precision and recall of the radiologist alone and of the radiologist assisted by AI were calculated against reverse transcriptase-polymerase chain reaction (RT-PCR) as the gold standard. Attention maps of the CNN were analysed to understand which regions of the CXR were important to the AI algorithm in making a prediction. RESULTS: With AI assistance, the precision of radiologists improved from 65.9 to 81.9% and recall improved from 17.5 to 71.75%. AI showed 92% accuracy in classifying "normal" CXRs into COVID or non-COVID. Analysis of attention maps revealed attention on the cardiac shadow in these "normal" radiographs. CONCLUSION: This study shows how deployment of an AI algorithm can complement a human expert in the determination of COVID status. Analysis of the detected features suggests possible subtle cardiac changes, laying the ground for further investigative studies. KEY POINTS: • Through an ambispective clinical study, we show how assistance from an AI algorithm can improve the recall (sensitivity) and precision (positive predictive value) of radiologists in assessing CXRs for possible COVID, in comparison to RT-PCR.
• We show that AI achieves its best results on images classified as "normal" by radiologists. We conjecture that possible subtle cardiac changes in the CXR, imperceptible to the human eye, may have contributed to this prediction. • The reported results may pave the way for a human-computer collaboration in which the expert, with some help from the AI algorithm, achieves higher accuracy in predicting COVID status on CXR than previously thought possible when considering either alone.
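The attention-map analysis above is commonly implemented with a Grad-CAM-style computation (the abstract does not name the exact method): each convolutional feature map is weighted by the mean of its gradient, the weighted maps are summed, and the result is rectified and normalised. A minimal numpy sketch on synthetic feature maps:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_cam(feature_maps, gradients):
    """Weight each feature map by its mean gradient, sum, rectify, normalise."""
    weights = gradients.mean(axis=(1, 2))                   # alpha_k per map
    cam = (weights[:, None, None] * feature_maps).sum(axis=0)
    cam = np.maximum(cam, 0)                                # ReLU
    return cam / cam.max() if cam.max() > 0 else cam        # scale to [0, 1]

# Toy conv features (K maps of H x W) and their gradients w.r.t. the score.
fmaps = rng.random((32, 7, 7))
grads = rng.normal(size=(32, 7, 7))
cam = grad_cam(fmaps, grads)
print(cam.shape)    # (7, 7)
```

Upsampled to the input resolution, such a map highlights regions like the cardiac shadow mentioned in the abstract.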


Subjects
Artificial Intelligence, COVID-19, Humans, Thoracic Radiography, SARS-CoV-2, X-Ray Computed Tomography, X-Rays
3.
Emotion ; 24(2): 495-505, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37561517

ABSTRACT

People readily and automatically process facial emotion and identity, and it has been reported that these cues are processed both dependently and independently. However, the question of identity-independent encoding of emotions has only been examined using posed, often exaggerated, expressions of emotion that do not account for the substantial individual differences in emotion recognition. In this study, we ask whether people's unique beliefs about how emotions should be reflected in facial expressions depend on the identity of the face. To do this, we employed a genetic algorithm in which participants created facial expressions to represent different emotions. Participants generated facial expressions of anger, fear, happiness, and sadness on two different identities. Facial features were controlled by manipulating a set of weights, allowing us to probe the exact positions of faces in a high-dimensional expression space. We found that, for angry, fearful, and happy expressions (but not sad), participants created facial expressions for each identity in a similar region of this space that was unique to the participant. However, using a machine learning algorithm that examined the positions of faces in expression space, we also found systematic differences between the two identities' expressions across participants. This suggests that participants' beliefs about how an emotion should be reflected in a facial expression are unique to them and identity-independent, although there are also some systematic differences between the facial expressions of the two identities that are common across all individuals. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
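A genetic algorithm of the kind described, evolving a population of facial-feature weight vectors toward a participant's internal ideal, can be sketched as follows. Here the participant's judgment is stood in for by distance to a hidden target vector; population size, mutation scale, and dimensionality are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 10   # number of facial-feature weights (illustrative)
POP = 40
target = rng.uniform(-1, 1, DIM)   # stands in for the participant's ideal

def fitness(pop):
    # Higher fitness = closer to the hidden ideal expression.
    return -np.linalg.norm(pop - target, axis=1)

def step(pop):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-POP // 2:]]          # keep the better half
    pairs = rng.integers(0, len(parents), (POP, 2))
    mask = rng.random((POP, DIM)) < 0.5               # uniform crossover
    children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    children = children + rng.normal(0, 0.05, children.shape)  # mutation
    children[0] = parents[-1]                         # elitism: keep the best
    return children

pop = rng.uniform(-1, 1, (POP, DIM))
start_best = fitness(pop).max()
for _ in range(30):
    pop = step(pop)
print(fitness(pop).max() > start_best)   # True
```

In the study, selection is driven by the participant's choices rather than an explicit fitness function, which is what makes the endpoint reflect their personal beliefs.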


Subjects
Emotions, Facial Recognition, Humans, Anger, Happiness, Fear, Sadness, Facial Expression
4.
Commun Psychol ; 2(1): 62, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-39242751

ABSTRACT

Humans can use the facial expressions of another person to infer their emotional state, although how this process occurs remains unknown. Here we hypothesize the existence of perceptive fields within expression space, analogous to the feature-tuned receptive fields of early visual cortex. We developed genetic algorithms to explore a multidimensional space of possible expressions and identify those that individuals associated with different emotions. We then defined perceptive fields as probabilistic maps within expression space, and found that they could predict the emotions that individuals infer from expressions presented in a separate task. We found profound individual variability in the size, location, and specificity of these fields, and that individuals with more similar perceptive fields had more similar interpretations of the emotion communicated by an expression, suggesting possible channels for social communication. Modelling perceptive fields therefore provides a predictive framework for understanding how individuals infer emotions from facial expressions.
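A perceptive field as a probabilistic map can be illustrated with a simple kernel density estimate over the expression-space points a participant produced for each emotion; a held-out expression is then assigned the emotion whose field gives it the highest density. The 2-D space, point clouds, and bandwidth below are invented; the paper's actual density model is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each emotion's perceptive field: the cloud of expression-space points a
# participant associated with that emotion (toy 2-D space, invented centres).
fields = {
    "happy": rng.normal([1.0, 1.0], 0.3, (20, 2)),
    "sad":   rng.normal([-1.0, -1.0], 0.3, (20, 2)),
}

def density(points, x, bandwidth=0.4):
    """Gaussian kernel density of point x under one field."""
    d2 = ((points - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean()

def infer_emotion(x):
    """Predict the emotion inferred from expression x: highest field density."""
    return max(fields, key=lambda e: density(fields[e], x))

print(infer_emotion(np.array([0.9, 1.1])))   # happy
```

Field size and location vary per participant, which is how the model captures the individual variability the abstract reports.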

5.
Article in English | MEDLINE | ID: mdl-36449592

ABSTRACT

In the task incremental learning problem, deep learning models suffer from catastrophic forgetting of previously seen classes/tasks as they are trained on new classes/tasks. The problem becomes even harder when some of the test classes do not belong to the training class set, i.e., the task incremental generalized zero-shot learning problem. We propose a novel approach to the task incremental learning problem for both the non-zero-shot and zero-shot settings. Our approach, called Rectification-based Knowledge Retention (RKR), applies weight rectifications and affine transformations to adapt the model to any task. During testing, our approach can use the task label information (task-aware) to quickly adapt the network to that task. We also extend the approach to make it task-agnostic, so that it works even when task label information is not available during testing. Specifically, given a continuum of test data, our approach predicts the task and quickly adapts the network to the predicted task. We show experimentally that our approach achieves state-of-the-art results on several benchmark datasets for both non-zero-shot and zero-shot task incremental learning.
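The task-conditional adaptation in RKR (weight rectifications plus affine transformations) can be sketched for a single linear layer: a frozen shared weight matrix receives a small per-task additive correction, and the activations receive a per-task scale and shift. Shapes and parameter values here are invented for illustration; the actual RKR parameterisation is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shared backbone weight, frozen across tasks (toy 8 -> 8 linear layer).
W_base = rng.normal(0, 0.1, (8, 8))

# Per-task parameters: a cheap additive weight rectification plus an affine
# (scale, shift) on the activations. Only these small pieces are task-specific,
# so adding a task does not overwrite previously learned tasks.
tasks = {
    t: {"dW": rng.normal(0, 0.01, (8, 8)),
        "scale": rng.uniform(0.9, 1.1, 8),
        "shift": rng.normal(0, 0.01, 8)}
    for t in range(3)
}

def forward(x, task_id):
    p = tasks[task_id]
    h = x @ (W_base + p["dW"])           # rectified weights
    return p["scale"] * h + p["shift"]   # task-specific affine transform

x = rng.normal(size=8)
out0, out1 = forward(x, 0), forward(x, 1)
print(out0.shape)    # (8,) -- same input, different output per task
```

In the task-agnostic setting, a task predictor would first choose `task_id` from the test data before this adaptation is applied.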

6.
IEEE Trans Image Process ; 30: 1910-1924, 2021.
Article in English | MEDLINE | ID: mdl-33417544

ABSTRACT

Understanding and explaining deep learning models is an imperative task. Towards this, we propose a method that obtains gradient-based certainty estimates that also provide visual attention maps; in particular, we address the visual question answering task. We incorporate modern probabilistic deep learning methods, which we further improve by using gradients to refine these estimates. This has two benefits: (a) improved certainty estimates that correlate better with misclassified samples, and (b) improved attention maps that provide state-of-the-art results in terms of correlation with human attention regions. The improved attention maps yield consistent gains across various visual question answering methods. The proposed technique can therefore be thought of as a tool for obtaining improved certainty estimates and explanations for deep learning models. We provide detailed empirical analysis on all standard visual question answering benchmarks and comparison with state-of-the-art methods.
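A gradient-based certainty signal of the general kind described can be illustrated on a toy linear classifier: the (numerical) gradient of the predictive entropy with respect to the input gives a scalar that can flag low-certainty samples. This is a simplified stand-in for the paper's probabilistic estimates, with invented weights and dimensions.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(0, 1.0, (5, 3))   # toy classifier: 5 features -> 3 answers

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(x):
    """Predictive entropy of the classifier at input x (higher = less certain)."""
    p = softmax(W.T @ x)
    return float(-(p * np.log(p + 1e-12)).sum())

def entropy_grad_norm(x, eps=1e-5):
    """Norm of the numerical gradient of entropy w.r.t. the input:
    a simple gradient-based (un)certainty signal."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (entropy(x + d) - entropy(x - d)) / (2 * eps)
    return float(np.linalg.norm(g))

x = rng.normal(size=5)
print(entropy(x) >= 0, entropy_grad_norm(x) >= 0)   # True True
```

In the paper's setting the same gradients also produce the attention maps; here only the scalar certainty side is sketched.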


Subjects
Artificial Intelligence, Computer-Assisted Image Processing/methods, Algorithms, Bayes Theorem, Humans, Uncertainty