Results 1 - 4 of 4
1.
BMC Med Imaging; 22(1): 46, 2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35296262

ABSTRACT

BACKGROUND: Artificial intelligence, particularly deep learning (DL) models, can provide reliable results for automated cardiothoracic ratio (CTR) measurement on chest X-ray (CXR) images. In everyday clinical use, however, this technology is usually implemented in a non-automated (AI-assisted) capacity because it still requires approval from radiologists. We investigated the performance and efficiency of our recently proposed models for the AI-assisted method intended for clinical practice. METHODS: We validated four proposed DL models (AlbuNet, SegNet, VGG-11, and VGG-16) to find the best model for clinical implementation using a dataset of 7517 CXR images from manual operations. These models were investigated in single-model and combined-model modes to find the model with the highest percentage of results that the user could accept without further interaction (excellent grade), and with measurement variation within ± 1.8% of the human-operating range. The best model from the validation study was then tested on an evaluation dataset of 9386 CXR images using the AI-assisted method with two radiologists to measure the yield of excellent grade results, observer variation, and operating time. A Bland-Altman plot with coefficient of variation (CV) was employed to evaluate agreement between measurements. RESULTS: VGG-16 gave the highest excellent grade result (68.9%) of any single-model mode, with a CV comparable to manual operation (2.12% vs 2.13%). No DL model produced a failure-grade result. The AlbuNet + VGG-11 combined-model mode yielded excellent grades in 82.7% of images and a CV of 1.36%. On the evaluation dataset, the AlbuNet + VGG-11 model produced excellent grade results in 77.8% of images, a CV of 1.55%, and reduced CTR measurement time almost ten-fold (1.07 ± 2.62 s vs 10.6 ± 1.5 s) compared with manual operation. CONCLUSION: Due to its excellent accuracy and speed, the AlbuNet + VGG-11 model could be clinically implemented to assist radiologists with CTR measurement.
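As a hedged illustration of the measurement described above (not the authors' published pipeline), the sketch below shows how a cardiothoracic ratio could be computed from binary heart and thorax segmentation masks such as a segmentation network might produce, together with an illustrative acceptance check inspired by the ± 1.8% criterion. All function names, the mask format, and the tolerance handling are assumptions.

```python
# Hypothetical sketch: computing the cardiothoracic ratio (CTR) from binary
# heart and thorax segmentation masks. Mask conventions and the acceptance
# threshold below are illustrative assumptions, not the authors' method.
import numpy as np

def widest_horizontal_extent(mask: np.ndarray) -> float:
    """Return the widest left-to-right extent (in pixels) of a binary mask."""
    cols = np.any(mask > 0, axis=0)      # columns containing the structure
    idx = np.flatnonzero(cols)
    if idx.size == 0:
        return 0.0
    return float(idx[-1] - idx[0] + 1)

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = maximal horizontal cardiac width / maximal horizontal thoracic width."""
    heart_w = widest_horizontal_extent(heart_mask)
    thorax_w = widest_horizontal_extent(thorax_mask)
    return heart_w / thorax_w if thorax_w > 0 else float("nan")

# An AI-assisted workflow could auto-flag predictions that fall within a small
# relative tolerance of a reference value, loosely mirroring the paper's
# "excellent grade" idea; the 1.8% figure is reused here only for illustration.
def within_tolerance(ctr_ai: float, ctr_reference: float, tol: float = 0.018) -> bool:
    return abs(ctr_ai - ctr_reference) / ctr_reference <= tol
```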


Subjects
Artificial Intelligence , Thorax , Humans , Observer Variation , Radiologists
2.
BMC Med Imaging; 21(1): 95, 2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34098887

ABSTRACT

BACKGROUND: Artificial Intelligence (AI) is a promising tool for cardiothoracic ratio (CTR) measurement that has been technically validated but not clinically evaluated on a large dataset. We observed and validated AI and manual methods for CTR measurement using a large dataset and investigated the clinical utility of the AI method. METHODS: Five thousand normal chest X-rays and 2,517 cardiomegaly images with CTR values were analyzed using manual, AI-assisted, and AI-only methods. The AI-only method obtained CTR values from a VGG-16 U-Net model. In-house software was used to aid the manual and AI-assisted measurements and to record operating time. Intra- and inter-observer experiments were performed on the manual and AI-assisted methods, and the averages were used in a method variation study. In the AI-assisted method, AI outcomes were graded as excellent (accepted by both users independently), good (required adjustment), or poor (failed outcome). A Bland-Altman plot with coefficient of variation (CV) and the coefficient of determination (R-squared) were used to evaluate agreement and correlation between measurements. Finally, the performance of a cardiomegaly classification test was evaluated using a CTR cutoff at the standard value (0.5), the optimum, and maximum sensitivity. RESULTS: Manual CTR measurements on cardiomegaly data were comparable to previous radiologist reports (CV of 2.13% vs 2.04%). The observer and method variations from the AI-only method were about three times higher than from the manual method (CV of 5.78% vs 2.13%). AI assistance resulted in 40% excellent, 56% good, and 4% poor grading. AI assistance significantly improved agreement on inter-observer measurement compared with the manual method (CV: 1.72% vs 2.13%; bias: -0.61% vs -1.62%) and was faster to perform (2.2 ± 2.4 s vs 10.6 ± 1.5 s). R-squared and the classification test were not reliable indicators of whether the AI-only method could replace manual operation. CONCLUSIONS: AI alone is not yet suitable to replace manual operation due to its high variation, but it is useful for assisting the radiologist because it can reduce observer variation and operating time. Measurement agreement should be used to compare AI and manual methods, rather than R-squared or classification performance tests.
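For readers unfamiliar with the agreement statistics used in this study, the following minimal sketch computes Bland-Altman bias, 95% limits of agreement, and a coefficient of variation for two paired sets of CTR measurements. The CV definition shown here (SD of the paired differences relative to the overall mean) is one common choice and is an assumption, as are the variable names.

```python
# Minimal sketch of a Bland-Altman style agreement analysis between two sets
# of paired CTR measurements (e.g., manual vs AI-assisted). The CV definition
# used here is an assumption, not necessarily the one used in the paper.
import numpy as np

def bland_altman_stats(method_a: np.ndarray, method_b: np.ndarray):
    """Return bias, 95% limits of agreement, and a coefficient of variation (%)."""
    diff = method_a - method_b
    bias = diff.mean()                          # mean difference between methods
    loa = 1.96 * diff.std(ddof=1)               # half-width of 95% limits of agreement
    cv = 100.0 * diff.std(ddof=1) / ((method_a + method_b) / 2.0).mean()
    return bias, (bias - loa, bias + loa), cv

# Example with synthetic data (illustrative only):
# rng = np.random.default_rng(0)
# manual = rng.normal(0.50, 0.05, size=200)
# assisted = manual + rng.normal(0.0, 0.01, size=200)
# print(bland_altman_stats(manual, assisted))
```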


Subjects
Artificial Intelligence , Cardiomegaly/diagnostic imaging , Thoracic Cavity/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Bias , Deep Learning , Female , Humans , Male , Middle Aged , Observer Variation , Radiography, Thoracic/statistics & numerical data , Young Adult
3.
Neuron; 93(6): 1504-1517.e4, 2017 Mar 22.
Article in English | MEDLINE | ID: mdl-28334612

ABSTRACT

Decision making involves a dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model closely reproduces key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks reveals that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of these trajectories are governed by the network's self-organized connectivity, defining a "neural landscape" consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks.
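A minimal sketch of the kind of recurrent network that could be trained on a DMC task is shown below. It is not the authors' model: the layer sizes, task timing, and training objective are assumptions, and the input is assumed to encode a sample direction, a delay, and a test direction, with the output reporting match versus non-match.

```python
# Illustrative sketch (not the published model): a vanilla recurrent network
# for a delayed match-to-category (DMC) trial. Sizes and timing are assumptions.
import torch
import torch.nn as nn

class DMCNet(nn.Module):
    def __init__(self, n_in: int = 32, n_rec: int = 128, n_out: int = 2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_rec, nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(n_rec, n_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_in); states: the population trajectory over time
        states, _ = self.rnn(x)
        return self.readout(states)             # decision logits at every time step

# Training-loop sketch: supervise only the decision epoch, so the network must
# hold the sample category across the delay (epoch indices are hypothetical).
# model = DMCNet()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# logits = model(inputs)[:, decision_steps, :].mean(dim=1)
# loss = nn.functional.cross_entropy(logits, match_labels)
# loss.backward(); opt.step()
```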


Subjects
Decision Making/physiology , Neural Networks, Computer , Parietal Lobe/physiology , Prefrontal Cortex/physiology , Animals , Macaca mulatta , Male , Models, Neurological , Neurons/physiology
4.
Nat Commun; 6: 6454, 2015 Mar 11.
Article in English | MEDLINE | ID: mdl-25759251

ABSTRACT

The ability to categorize stimuli into discrete behaviourally relevant groups is an essential cognitive function. To elucidate the neural mechanisms underlying categorization, we constructed a cortical circuit model that is capable of learning a motion categorization task through reward-dependent plasticity. Here we show that stable category representations develop in neurons intermediate to sensory and decision layers if they exhibit choice-correlated activity fluctuations (choice probability). In the model, choice probability and task-specific interneuronal correlations emerge from plasticity of top-down projections from decision neurons. Specific model predictions are confirmed by analysis of single-neuron activity from the monkey parietal cortex, which reveals a mixture of directional and categorical tuning, and a positive correlation between category selectivity and choice probability. Beyond demonstrating a circuit mechanism for categorization, the present work suggests a key role of plastic top-down feedback in simultaneously shaping both neural tuning and correlated neural variability.
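The sketch below illustrates a generic reward-modulated (three-factor) Hebbian update on top-down weights from decision units to intermediate units, in the spirit of the plasticity described above. It is not the published learning rule; the rate variables, learning rate, and reward baseline are all assumptions.

```python
# Hedged sketch of reward-modulated Hebbian plasticity on top-down weights
# from decision units to intermediate units. This is a generic three-factor
# rule for illustration, not the model's exact update.
import numpy as np

def reward_modulated_update(w_topdown: np.ndarray,
                            r_decision: np.ndarray,
                            r_intermediate: np.ndarray,
                            reward: float,
                            reward_baseline: float,
                            lr: float = 1e-3) -> np.ndarray:
    """Three-factor rule: delta_w ~ lr * (reward - baseline) * post * pre."""
    modulator = reward - reward_baseline                 # reward prediction error
    hebbian = np.outer(r_intermediate, r_decision)       # post (intermediate) x pre (decision)
    return w_topdown + lr * modulator * hebbian

# Over trials, top-down weights from the chosen decision pool are strengthened
# after rewarded choices, which can induce choice-correlated activity
# fluctuations (choice probability) in the intermediate layer.
```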


Subjects
Cognition/physiology , Discrimination Learning/physiology , Neurons/physiology , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Animals , Choice Behavior/physiology , Electrodes, Implanted , Feedback, Sensory/physiology , Macaca mulatta/physiology , Models, Neurological , Neurons/cytology , Reward