1 - 4 of 4
1.
Sci Rep; 14(1): 10378, 2024 May 6.
Article En | MEDLINE | ID: mdl-38710715

Across the world, the officially reported number of COVID-19 deaths is likely an undercount. Establishing true mortality is key to improving data transparency and strengthening public health systems to tackle future disease outbreaks. In this study, we estimated excess deaths during the COVID-19 pandemic in the Pune region of India. Excess deaths are defined as the number of additional deaths relative to those expected from pre-COVID-19-pandemic trends. We integrated data from: (a) epidemiological modeling using pre-pandemic all-cause mortality data, (b) discrepancies between media-reported death compensation claims and officially reported mortality, and (c) "wisdom of crowds" public surveys. Our results point to an estimated 14,770 excess deaths [95% CI 9,820-22,790] in Pune from March 2020 to December 2021, of which 9,093 were officially counted as COVID-19 deaths. We further calculated the undercount factor, defined as the ratio of excess deaths to officially reported COVID-19 deaths, and estimated it at 1.6 [95% CI 1.1-2.5]. Besides yielding consistent excess-death estimates across methods, our study demonstrates the utility of frugal approaches, such as the analysis of death compensation claims and the wisdom of crowds, in estimating excess mortality.
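As a rough illustration of the arithmetic behind these figures, the sketch below recomputes the excess-death count and the undercount factor. Only the totals of 14,770 excess deaths and 9,093 officially reported COVID-19 deaths come from the abstract; the observed and expected all-cause totals are placeholders chosen so their difference matches the reported figure.

```python
# Illustrative recomputation of the quantities reported above. The observed
# and expected all-cause totals are placeholders, not the study's data.
observed_deaths = 120_000   # placeholder: all-cause deaths, Mar 2020 - Dec 2021
expected_deaths = 105_230   # placeholder: deaths expected from pre-pandemic trends

excess_deaths = observed_deaths - expected_deaths      # 14,770 in the study
reported_covid_deaths = 9_093                          # officially counted COVID-19 deaths

undercount_factor = excess_deaths / reported_covid_deaths
print(f"Excess deaths: {excess_deaths:,}")             # 14,770
print(f"Undercount factor: {undercount_factor:.1f}")   # ~1.6
```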


COVID-19, COVID-19/mortality, COVID-19/epidemiology, Humans, India/epidemiology, SARS-CoV-2/isolation & purification, Pandemics, Epidemiological Models
2.
Nat Hum Behav; 8(3): 544-561, 2024 Mar.
Article En | MEDLINE | ID: mdl-38172630

Transformer models such as GPT generate human-like language and are predictive of human brain responses to language. Here, using functional-MRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of the brain response associated with each sentence. We then use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress the activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also non-invasively control neural activity in higher-level cortical areas, such as the language network.
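The general encoding-model recipe described here (sentence features from a pretrained transformer, a regularized linear map to response magnitude, then scoring of candidate sentences) can be sketched as follows. This is an illustrative outline under stated assumptions, not the authors' pipeline: the embedding function and the tiny training set are placeholders.

```python
# Minimal sketch of a sentence-level encoding model: embed sentences, fit a
# regularized linear map to measured response magnitudes, score new sentences.
import numpy as np
from sklearn.linear_model import Ridge

def embed(sentences):
    """Placeholder: return one feature vector per sentence (e.g., pooled
    hidden states from a pretrained transformer)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(sentences), 768))

# Placeholder training data: sentences paired with measured language-network
# response magnitudes (one value per sentence).
train_sentences = ["The dog chased the ball.",
                   "Colorless green ideas sleep furiously.",
                   "She handed him the keys before leaving."]
train_responses = np.array([0.8, 1.3, 0.9])

# Fit the encoding model: sentence features -> predicted response magnitude.
model = Ridge(alpha=1.0).fit(embed(train_sentences), train_responses)

# Score novel candidate sentences and pick predicted "drive" (high response)
# and "suppress" (low response) stimuli for a new experiment.
candidates = ["A completely new candidate sentence.",
              "Another sentence to score."]
predicted = model.predict(embed(candidates))
order = np.argsort(predicted)
print("predicted suppress ->", candidates[order[0]])
print("predicted drive    ->", candidates[order[-1]])
```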


Comprehension, Language, Humans, Comprehension/physiology, Brain/diagnostic imaging, Brain/physiology, Linguistics/methods, Brain Mapping/methods
3.
bioRxiv; 2023 Oct 30.
Article En | MEDLINE | ID: mdl-37090673

Transformer models such as GPT generate human-like language and are highly predictive of human brain responses to language. Here, using fMRI-measured brain responses to 1,000 diverse sentences, we first show that a GPT-based encoding model can predict the magnitude of brain response associated with each sentence. Then, we use the model to identify new sentences that are predicted to drive or suppress responses in the human language network. We show that these model-selected novel sentences indeed strongly drive and suppress activity of human language areas in new individuals. A systematic analysis of the model-selected sentences reveals that surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish the ability of neural network models to not only mimic human language but also noninvasively control neural activity in higher-level cortical areas, like the language network.
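Since surprisal is named as a key determinant of response strength, below is a minimal sketch of how per-sentence surprisal can be estimated with a pretrained causal language model. GPT-2 via the Hugging Face transformers library is used purely for illustration; the study's own surprisal estimates may come from a different model.

```python
# Estimate average per-token surprisal (negative log-probability, in nats)
# for a sentence using a pretrained causal language model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_surprisal(sentence: str) -> float:
    """Cross-entropy averaged over predicted tokens for one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

print(mean_surprisal("The children played in the park."))   # well-formed, lower surprisal
print(mean_surprisal("Park the in played children the."))   # scrambled, higher surprisal
```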

4.
Sci Data; 9(1): 529, 2022 Aug 29.
Article En | MEDLINE | ID: mdl-36038572

Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual variability in the locations of language areas, any given voxel/vertex in a common brain space is part of the language network in some individuals but may belong to a distinct network in others. An alternative approach relies on identifying language areas in each individual using a functional 'localizer'. Because of its greater sensitivity, functional resolution, and interpretability, functional localization is gaining popularity, but it is not always feasible and cannot be applied retroactively to past studies. To bridge these disjoint approaches, we created a probabilistic functional atlas using fMRI data for an extensively validated language localizer in 806 individuals. This atlas makes it possible to estimate the probability that any given location in a common space belongs to the language network, and thus can help interpret group-level activation peaks and lesion locations, or select voxels/electrodes for analysis. More meaningful comparisons of findings across studies should increase robustness and replicability in language research.
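The core computation behind such an atlas can be sketched in a few lines: average the individuals' binarized localizer maps, voxel by voxel, in the common space. The toy volume and random masks below are placeholders, not the published atlas data.

```python
# Sketch of a probabilistic functional atlas: the value at each voxel is the
# fraction of individuals whose binarized localizer map includes that voxel.
import numpy as np

n_subjects = 806                 # number of individuals in the abstract
shape = (4, 4, 4)                # toy volume; a real atlas would be full-brain

rng = np.random.default_rng(0)
# Placeholder individual masks: True where the localizer marks a voxel as
# language-responsive for that subject.
masks = rng.random((n_subjects,) + shape) < 0.3

# Probability that each voxel belongs to the language network.
atlas = masks.mean(axis=0)

# Example use: how likely is a given group-level peak (voxel index) to fall
# inside the language network?
peak = (2, 1, 3)
print(f"P(language network at {peak}) = {atlas[peak]:.2f}")
```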


Brain, Language, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain/physiology, Brain Mapping, Humans