Results 1 - 4 of 4
1.
J Med Imaging (Bellingham) ; 11(4): 045501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988989

ABSTRACT

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133). Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
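The ΔAUC values reported above can be estimated nonparametrically from per-trial confidence ratings: the AUC equals the probability that a randomly chosen signal-present rating exceeds a randomly chosen signal-absent rating (the Mann-Whitney statistic). A minimal stdlib sketch, with illustrative function and variable names that are not taken from the study:

```python
def auc(signal_ratings, noise_ratings):
    # Mann-Whitney estimate of AUC: the probability that a rating from a
    # signal-present trial exceeds one from a signal-absent trial,
    # counting ties as half.
    pairs = [(s > n) + 0.5 * (s == n)
             for s in signal_ratings for n in noise_ratings]
    return sum(pairs) / len(pairs)

def delta_auc(sig_cade, noise_cade, sig_plain, noise_plain):
    # CADe benefit: AUC with CADe support minus AUC without it.
    return auc(sig_cade, noise_cade) - auc(sig_plain, noise_plain)
```

Significance for a ΔAUC of this kind is typically assessed with a permutation or bootstrap test over trials, which is omitted here for brevity.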

2.
Commun Biol ; 4(1): 768, 2021 Jun 22.
Article in English | MEDLINE | ID: mdl-34158579

ABSTRACT

To optimize visual search, humans attend to objects with the expected size of the sought target relative to its surrounding scene (object-scene scale consistency). We investigate how the human brain responds to variations in object-scene scale consistency. We use functional magnetic resonance imaging and a voxel-wise feature encoding model to estimate tuning to different object/scene properties. We find that regions involved in scene processing (transverse occipital sulcus) and spatial attention (intraparietal sulcus) have the strongest responsiveness and selectivity to object-scene scale consistency: reduced activity to mis-scaled objects (either unusually smaller or larger). The findings show how and where the brain incorporates object-scene size relationships in the processing of scenes. The response properties of these brain areas might explain why during visual search humans often miss objects that are salient but at atypical sizes relative to the surrounding scene.


Subject(s)
Occipital Lobe/physiology, Parietal Lobe/physiology, Visual Perception/physiology, Adult, Female, Humans, Male
3.
Curr Biol ; 31(5): 1099-1106.e5, 2021 Mar 08.
Article in English | MEDLINE | ID: mdl-33472051

ABSTRACT

Advances in 3D imaging technology are transforming how radiologists search for cancer1,2 and how security officers scrutinize baggage for dangerous objects.3 These new 3D technologies often improve search over 2D images4,5 but vastly increase the image data. Here, we investigate 3D search for targets of various sizes in filtered noise and digital breast phantoms. For a Bayesian ideal observer optimally processing the filtered noise and a convolutional neural network processing the digital breast phantoms, search with 3D image stacks increases target information and improves accuracy over search with 2D images. In contrast, 3D search by humans leads to high miss rates for small targets easily detected in 2D search, but not for larger targets more visible in the visual periphery. Analyses of human eye movements, perceptual judgments, and a computational model with a foveated visual system suggest that human errors can be explained by the interaction among a target's peripheral visibility, eye movement under-exploration of the 3D images, and a perceived overestimation of the explored area. Instructing observers to extend the search reduces small-target misses by 75% without increasing false positives. Results with twelve radiologists confirm that even medical professionals reading realistic breast phantoms have high miss rates for small targets in 3D search. Thus, under-exploration represents a fundamental limitation to the efficacy with which humans search in 3D image stacks and miss targets with these prevalent image technologies.
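The under-exploration account above rests on quantifying how much of an image fell within the useful field of view (UFOV) of the observer's fixations. A minimal stdlib sketch of one common coverage measure, assuming a circular UFOV of fixed radius (the function name, grid resolution, and radius parameter are illustrative, not from the study):

```python
import math

def coverage(fixations, width, height, ufov_radius, grid=50):
    # Fraction of image locations lying within the useful field of view
    # (a circle of ufov_radius) of at least one fixation; a proxy for
    # how thoroughly the observer explored the image.
    covered = total = 0
    for i in range(grid):
        for j in range(grid):
            x = (i + 0.5) * width / grid
            y = (j + 0.5) * height / grid
            total += 1
            if any(math.hypot(x - fx, y - fy) <= ufov_radius
                   for fx, fy in fixations):
                covered += 1
    return covered / total
```

For 3D stacks the same idea extends across slices; the study's finding is that observers' actual coverage falls well short of the coverage they believe they achieved.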


Subject(s)
Imaging, Three-Dimensional, Neural Networks, Computer, Bayes Theorem, Eye Movements, Humans, Phantoms, Imaging
4.
Article in English | MEDLINE | ID: mdl-32435081

ABSTRACT

With the advent of powerful convolutional neural networks (CNNs), recent studies have extended early applications of neural networks to imaging tasks, making CNNs a potential new tool for assessing medical image quality. Here, we compare a CNN to model observers in a search task for two possible signals (a simulated mass and a smaller simulated microcalcification) embedded in filtered noise and single slices of Digital Breast Tomosynthesis (DBT) virtual phantoms. For the case of the filtered noise, we show how a CNN can approximate the ideal observer for a search task, achieving a statistical efficiency of 0.77 for the microcalcification and 0.78 for the mass. For search in single slices of DBT phantoms, we show that Channelized Hotelling Observer (CHO) performance is degraded by false positives related to anatomic variations, resulting in detection accuracy below human observer performance. In contrast, the CNN learns to identify and discount the backgrounds, and achieves performance comparable to that of human observers and superior to model observers (Proportion Correct for the microcalcification: CNN = 0.96; Humans = 0.98; CHO = 0.84; Proportion Correct for the mass: CNN = 0.98; Humans = 0.83; CHO = 0.51). Together, our results provide an important evaluation of CNN methods by benchmarking their performance against human and model observers in complex search tasks.
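Statistical efficiency, as reported above, is conventionally defined as the squared ratio of an observer's detectability index d' to that of the ideal observer. A minimal stdlib sketch under the simplifying assumption of a two-alternative forced-choice (2AFC) mapping from proportion correct to d' (the study's actual search task would require a different mapping, so the numbers here are not meant to reproduce the 0.77/0.78 values):

```python
from statistics import NormalDist

def d_prime_2afc(pc):
    # d' from proportion correct in a 2AFC task: d' = sqrt(2) * Phi^-1(PC),
    # where Phi^-1 is the inverse standard-normal CDF.
    return 2 ** 0.5 * NormalDist().inv_cdf(pc)

def efficiency(pc_observer, pc_ideal):
    # Statistical efficiency: squared ratio of the observer's d' to the
    # ideal observer's d'; 1.0 means the observer uses all the
    # information the ideal observer does.
    return (d_prime_2afc(pc_observer) / d_prime_2afc(pc_ideal)) ** 2
```

An efficiency below 1.0 quantifies how much of the available signal information the observer fails to exploit relative to the ideal.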
