Results 1 - 6 of 6
1.
J Neurosci; 39(33): 6513-6525, 2019 Aug 14.
Article in English | MEDLINE | ID: mdl-31196934

ABSTRACT

Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here, we explore one such perceptual phenomenon, perceiving animacy, and use the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were better explained by object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, the appearance of an object interfered with proper object identification, such as failing to signal that a cow mug is a mug. The preference in VTC to represent a lookalike as animate was present even when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to represent objects when visual appearance is dissociated from animacy, probably due to preferential processing of visual features typical of animate objects.

SIGNIFICANCE STATEMENT: How does the brain represent objects that we perceive around us? Recent advances in artificial intelligence have suggested that object categorization and its neural correlates have now been approximated by neural networks. Here, we show that neural networks can predict animacy according to human behavior but do not explain visual cortex representations. In ventral occipitotemporal cortex, neural activity patterns were strongly biased toward object appearance, to the extent that objects with visual features resembling animals were represented close to real animals and separated from other objects of the same category. This organization that privileges animals and their features over objects might be the result of learning history and evolutionary constraints.
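The brain-model comparison described here is typically run as representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) per system, then correlate the RDMs. Below is a minimal sketch of that analysis; the arrays `vtc_patterns` and `dnn_features` are random placeholders standing in for the per-condition fMRI patterns and network activations, not the study's actual data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical inputs: one row per stimulus condition
# (animate, lookalike, and inanimate objects).
vtc_patterns = np.random.rand(27, 500)   # fMRI voxel patterns (conditions x voxels)
dnn_features = np.random.rand(27, 4096)  # network activations for the same images

# RDMs: 1 - Pearson correlation between every pair of condition patterns,
# returned as condensed (upper-triangle) vectors.
vtc_rdm = pdist(vtc_patterns, metric="correlation")
dnn_rdm = pdist(dnn_features, metric="correlation")

# Second-order comparison: how well does the network's representational
# geometry predict the cortical one? Spearman is standard because RDM
# entries are compared on rank order.
rho, p = spearmanr(vtc_rdm, dnn_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.4f}")
```

The same second-order comparison works with behavioral dissimilarity judgments in place of the network RDM.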


Subjects
Neural Networks, Computer; Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Visual Pathways/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male
2.
PLoS Comput Biol; 14(10): e1006557, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30365485

ABSTRACT

Recent studies suggest that deep convolutional neural network (CNN) models show higher representational similarity with macaque inferior temporal (IT) cortical responses, human ventral stream fMRI activations, and human object recognition than any other existing object recognition model. These studies employed natural images of objects. A long research tradition has employed abstract shapes to probe the selectivity of IT neurons. If CNN models provide a realistic model of IT responses, they should capture the IT selectivity for such shapes. Here, we compare the activations of CNN units to a stimulus set of 2D regular and irregular shapes with the response selectivity of macaque IT neurons and with human similarity judgments. The shape set consisted of regular shapes that differed in nonaccidental properties, and irregular, asymmetrical shapes with curved or straight boundaries. We found that deep CNNs (AlexNet, VGG-16, and VGG-19) trained to classify natural images show response modulations to these shapes similar to those of IT neurons. Untrained CNNs with the same architecture as the trained CNNs, but with random weights, showed weaker similarity than the CNNs trained for classification. The difference between the trained and untrained CNNs emerged at the deep convolutional layers, where the similarity between the shape-related response modulations of IT neurons and the trained CNNs was high. Unlike IT neurons, human similarity judgments of the same shapes correlated best with the last layers of the trained CNNs. In particular, these deepest layers showed an enhanced sensitivity for straight versus curved irregular shapes, similar to that shown in human shape judgments. In conclusion, representations of abstract shape similarity are highly comparable between macaque IT neurons and the deep convolutional layers of CNNs trained to classify natural images, while human shape similarity judgments correlate better with the deepest layers.
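Extracting layer-wise CNN activations for a stimulus set, the basic ingredient of this comparison, can be sketched with PyTorch forward hooks. This is an illustrative reconstruction, not the paper's pipeline; the stimulus batch below is a random placeholder.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

# Placeholder batch standing in for the 2D shape stimuli,
# preprocessed to VGG input size (N x 3 x 224 x 224).
shape_images = torch.rand(64, 3, 224, 224)

# Trained network (downloads ImageNet weights on first use);
# weights=None would give the untrained, random-weight control
# contrasted with the trained network in the abstract.
net = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()

# Capture the activations of the last convolutional layer with a hook.
acts = {}
net.features[28].register_forward_hook(
    lambda module, inp, out: acts.update(conv5_3=out.flatten(start_dim=1))
)
with torch.no_grad():
    net(shape_images)

# Each column is one unit's response profile over the shape set; these
# profiles can then be correlated with a neuron's response modulations
# to the same shapes to quantify model-neuron similarity.
unit_responses = acts["conv5_3"].numpy()  # (n_images, n_units)
```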


Subjects
Neural Networks, Computer; Neurons/physiology; Temporal Lobe/physiology; Adult; Algorithms; Animals; Computational Biology; Humans; Macaca; Magnetic Resonance Imaging; Male; Temporal Lobe/cytology
3.
Ecol Inform; 75: 102037, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37397435

ABSTRACT

Context: Sticky trap catches of agricultural pests can be used for early hotspot detection, identification, and estimation of pest presence in greenhouses or in the field. However, manual procedures to produce and analyze catch results require substantial time and effort. As a result, much research has gone into creating efficient techniques for remotely monitoring possible infestations. A considerable number of these studies use Artificial Intelligence (AI) to analyze the acquired data and focus on performance metrics for various model architectures. Less emphasis, however, has been devoted to testing the trained models to investigate how well they would perform under practical, in-field conditions.
Objective: In this study, we showcase an automatic and reliable computational method for monitoring insects in witloof chicory fields, while shifting the focus to the challenges of compiling and using a realistic insect image dataset containing insects that share taxonomic levels.
Methods: To achieve this, we collected, imaged, and annotated 731 sticky plates (containing 74,616 bounding boxes) to train a YOLOv5 object detection model, concentrating on two pest insects (chicory leaf-miners and woolly aphids) and their two predatory counterparts (ichneumon wasps and grass flies). To better estimate the object detection model's actual field performance, it was validated by splitting our image data at the sticky-plate level, as sketched below.
Results and conclusions: According to the experimental findings, the average mAP score over all dataset classes was 0.76. For the pest species and their corresponding predators, high mAP values of 0.73 and 0.86 were obtained. Additionally, the model accurately predicted the presence of pests when presented with unseen sticky plate images from the test set.
Significance: The findings of this research demonstrate the feasibility of AI-powered pest monitoring in the field for real-world applications and provide opportunities for implementing pest monitoring in witloof chicory fields with minimal human intervention.
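The plate-level split mentioned under Methods is what makes the performance estimate realistic: all crops of one physical plate must land on the same side of the split, so the test set measures generalization to unseen plates rather than to unseen crops of already-seen plates. A minimal sketch with scikit-learn's GroupShuffleSplit, using hypothetical file names and plate IDs:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical annotation table: one row per image, with the ID of
# the physical sticky plate it came from.
image_paths = np.array([f"img_{i:05d}.jpg" for i in range(3000)])
plate_ids = np.random.randint(0, 731, size=3000)  # 731 plates, as in the study

# Group-aware split: no plate contributes images to both sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(image_paths, groups=plate_ids))

print(f"{len(train_idx)} train images, {len(test_idx)} test images")
# Sanity check: train and test plates are disjoint.
assert not set(plate_ids[train_idx]) & set(plate_ids[test_idx])
```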

4.
Front Plant Sci; 13: 812506, 2022.
Article in English | MEDLINE | ID: mdl-35720527

ABSTRACT

The spotted wing Drosophila (SWD), Drosophila suzukii, is a significant invasive pest of berries and soft-skinned fruits that causes major economic losses in fruit production worldwide. Automatic identification and monitoring strategies would allow the emergence of this pest to be detected at an early stage and its impact minimized. The small size of Drosophila suzukii and similar flying insects makes it difficult to identify them using camera systems. Therefore, an optical sensor recording wingbeats was investigated in this study. We trained convolutional neural network (CNN) classifiers to distinguish D. suzukii from one of its closest relatives, Drosophila melanogaster, based on their wingbeat patterns recorded by the optical sensor. In addition to the raw wingbeat time signals, we used their frequency (power spectral density) and time-frequency (spectrogram) representations. A strict validation procedure was followed to estimate the models' performance under field conditions. First, we validated each model on wingbeat data collected under the same conditions, using different insect populations for training and testing. Next, we evaluated their robustness on a second, independent dataset acquired under more variable environmental conditions. The best performing model, named "InceptionFly," was trained on wingbeat time signals. It discriminated between our two target insects with a balanced accuracy of 92.1% on the test set and 91.7% on the second independent dataset. This paves the way towards early, automated detection of D. suzukii infestation in fruit orchards.
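The frequency and time-frequency representations mentioned above can be computed with standard signal-processing routines. A minimal sketch using SciPy, with a synthetic trace standing in for a real optical-sensor recording; the 8 kHz sampling rate and 220 Hz wingbeat tone are assumptions for illustration:

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a wingbeat recording: a noisy tone.
fs = 8000                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)
wingbeat = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(t.size)

# Frequency representation: power spectral density via Welch's method.
freqs, psd = signal.welch(wingbeat, fs=fs, nperseg=512)

# Time-frequency representation: spectrogram. Either array can be fed
# to a CNN classifier in place of the raw time signal.
f, times, spec = signal.spectrogram(wingbeat, fs=fs, nperseg=256, noverlap=128)
spec_db = 10 * np.log10(spec + 1e-12)  # log power, a CNN-friendly input
```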

5.
eNeuro; 4(3), 2017.
Article in English | MEDLINE | ID: mdl-28660250

ABSTRACT

Functional MRI studies in primates have demonstrated cortical regions that are strongly activated by visual images of bodies. The presence of such body patches in macaques allows characterization of the stimulus selectivity of their single neurons. Middle superior temporal sulcus body (MSB) patch neurons showed similar stimulus selectivity for natural, shaded, and textured images as for their silhouettes, suggesting that shape is an important determinant of MSB responses. Here, we examined and modeled the shape selectivity of single MSB neurons. We measured the responses of single MSB neurons to a variety of shapes, which produced a wide range of responses. We used an adaptive stimulus sampling procedure, selecting and modifying shapes based on the responses of the neuron. Forty percent of the shapes that produced the maximal response were rated by humans as animal-like, but the top shape of many MSB neurons was not judged as resembling a body. We fitted the shape selectivity of MSB neurons with a model that parameterizes shapes in terms of the curvature and orientation of contour segments, with a pixel-based model, and with layers of units of convolutional neural networks (CNNs). The deep convolutional layers of CNNs provided the best goodness-of-fit, explaining a median of 77% of the explainable variance in the neurons' responses. The goodness-of-fit increased along the hierarchy of convolutional layers but was lower for the fully connected layers. Together with demonstrating the successful modeling of single-unit shape selectivity with deep CNNs, the data suggest that semantic or category knowledge plays only a minor role in determining the shape selectivity of single MSB neurons.
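Fitting single-neuron shape selectivity with CNN layer activations is, in essence, a cross-validated regression from unit activations to firing rates. A minimal sketch under that reading, with random placeholder data; note that the paper normalizes explained variance by each neuron's explainable (noise-corrected) variance, which requires repeat trials and is omitted here:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

# Hypothetical data: CNN-layer activations for each shape stimulus
# and one neuron's firing rate for the same shapes.
layer_feats = np.random.rand(300, 1024)  # (n_shapes, n_units)
neuron_resp = np.random.rand(300)        # spike rates per shape

# Map CNN features to the neuron's responses with ridge regression;
# cross-validated predictions keep the fit quality honest.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
pred = cross_val_predict(model, layer_feats, neuron_resp, cv=5)

# Fraction of response variance explained by the layer.
r2 = 1 - np.var(neuron_resp - pred) / np.var(neuron_resp)
print(f"cross-validated explained variance: {r2:.2f}")
```

Repeating the fit per layer and taking the median across neurons gives the layer-wise goodness-of-fit profile the abstract describes.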


Subjects
Form Perception/physiology; Models, Neurological; Neurons/physiology; Temporal Lobe/physiology; Action Potentials; Animals; Eye Movement Measurements; Humans; Macaca mulatta; Magnetic Resonance Imaging; Male; Microelectrodes; Photic Stimulation; Psychophysics
6.
Front Hum Neurosci; 11: 402, 2017.
Article in English | MEDLINE | ID: mdl-28824405

ABSTRACT

According to a recent study, semantic similarity between concrete entities correlates with the similarity of activity patterns in the left middle intraparietal sulcus (IPS) during category naming. We examined whether this effect replicates under passive viewing conditions, the potential role of visuoperceptual similarity, how the effect is situated relative to regions previously implicated in visuospatial attention, and how it compares to effects of object identity and location. Forty-six subjects participated. Subjects passively viewed pictures from two categories, musical instruments and vehicles. Semantic similarity between entities was estimated from a concept-feature matrix obtained in more than 1,000 subjects. Visuoperceptual similarity was modeled with the HMAX model, the AlexNet deep convolutional network, and subjective visuoperceptual similarity ratings. Among the IPS regions examined, only the left middle IPS showed a semantic similarity effect. The effect was significant in hIP1, hIP2, and hIP3. Visuoperceptual similarity did not correlate with the similarity of activity patterns in the left middle IPS. The semantic similarity effect in the left middle IPS was significantly stronger than in the right middle IPS, and also stronger than in the left or right posterior IPS. The semantic similarity effect was similar to that seen in the angular gyrus. Object identity effects were much more widespread across nearly all parietal areas examined. Location effects were relatively specific to the posterior IPS and area 7 bilaterally. To conclude, the current findings replicate the semantic similarity effect in the left middle IPS under passive viewing conditions and demonstrate its anatomical specificity within a cytoarchitectonic reference frame. We propose that the semantic similarity effect in the left middle IPS reflects the transient uploading of semantic representations into working memory.
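Testing whether semantic similarity explains the neural pattern geometry beyond what a visuoperceptual model accounts for amounts to a partial correlation between dissimilarity matrices. A sketch of one way to do this, with random placeholders for the concept-feature matrix, the IPS activity patterns, and the perceptual-model features (e.g., an AlexNet layer):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import rankdata, pearsonr

# Hypothetical inputs: one row per pictured entity.
concept_features = (np.random.rand(24, 2000) > 0.95).astype(float)
ips_patterns = np.random.rand(24, 300)
percept_features = np.random.rand(24, 4096)

# Dissimilarity matrices: semantic distance from the feature norms,
# neural and perceptual distances from pattern correlations.
semantic_rdm = pdist(concept_features, metric="cosine")
neural_rdm = pdist(ips_patterns, metric="correlation")
percept_rdm = pdist(percept_features, metric="correlation")

def residualize(y, x):
    """Rank-transform, then remove the linear contribution of x from y."""
    xr, yr = rankdata(x), rankdata(y)
    slope, intercept = np.polyfit(xr, yr, 1)
    return yr - (slope * xr + intercept)

# Partial rank correlation: does semantic similarity explain the neural
# geometry after partialling out the perceptual model?
r, p = pearsonr(residualize(neural_rdm, percept_rdm),
                residualize(semantic_rdm, percept_rdm))
print(f"semantic effect controlling for perceptual model: r={r:.3f}")
```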
