ABSTRACT
Wastewater-based epidemiology (WBE) has emerged as an effective environmental surveillance tool for predicting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) disease outbreaks in high-income countries (HICs) with centralized sewage infrastructure. However, few studies have applied WBE alongside epidemic disease modelling to estimate the prevalence of SARS-CoV-2 in low-resource settings. This study aimed to explore the feasibility of collecting untreated wastewater samples from rural and urban catchment areas of Nagpur district, to detect and quantify SARS-CoV-2 using real-time qPCR, to compare geographic differences in viral loads, and to integrate the wastewater data into a modified Susceptible-Exposed-Infectious-Confirmed Positives-Recovered (SEIPR) model. Of the 983 wastewater samples analyzed for SARS-CoV-2 RNA, urban samples showed a significantly higher positivity rate than rural samples, 43.7% (95% confidence interval (CI) 40.1, 47.4) versus 30.4% (95% CI 24.66, 36.66), as well as higher viral loads. The basic reproduction number, R0, correlated positively with population density and negatively with humidity, a proxy for rainfall and dilution of waste in the sewers. The SEIPR model estimated the rate of unreported coronavirus disease 2019 (COVID-19) cases at the start of the wave as 13.97 (95% CI 10.17, 17.0) times that of confirmed cases, representing a material difference in case numbers and healthcare resource burden. Wastewater surveillance might prove a more reliable way for authorities to prepare for surges in COVID-19 cases during future waves.
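To make the modelling step concrete, the sketch below integrates a generic SEIPR-style compartmental model numerically; the compartment split, transition rates, and the reporting fraction rho are illustrative placeholders, not the parameters fitted in the study.

```python
# Hypothetical SEIPR (Susceptible-Exposed-Infectious-Confirmed Positives-Recovered)
# sketch; beta, sigma, delta, gamma and the reporting fraction rho are
# illustrative placeholders, not the study's fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

def seipr(t, y, beta, sigma, delta, gamma, rho):
    S, E, I, P, R = y
    N = S + E + I + P + R
    dS = -beta * S * I / N                        # transmission driven by unreported infectious
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - (rho * delta + gamma) * I    # a fraction rho becomes confirmed
    dP = rho * delta * I - gamma * P              # confirmed positives
    dR = gamma * (I + P)
    return [dS, dE, dI, dP, dR]

y0 = [1_000_000, 10, 5, 1, 0]                     # illustrative initial state
sol = solve_ivp(seipr, (0, 120), y0, args=(0.4, 1 / 5, 1 / 3, 1 / 10, 0.07),
                t_eval=np.linspace(0, 120, 121))
print("peak unreported infectious:", sol.y[2].max())
```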
Subjects
COVID-19, SARS-CoV-2, Wastewater, India/epidemiology, COVID-19/epidemiology, COVID-19/virology, COVID-19/diagnosis, Humans, Wastewater/virology, SARS-CoV-2/isolation & purification, Viral Load, Pandemics, Wastewater-Based Epidemiological Monitoring, Sewage/virology
ABSTRACT
One of the central objectives of contemporary neuroimaging research is to create predictive models that can disentangle the connection between patterns of whole-brain functional connectivity and various behavioral traits. Previous studies have shown that models trained to predict behavioral features from an individual's functional connectivity have modest to poor performance. In this study, we trained models that predict observable individual traits (phenotypes) and their corresponding singular value decomposition (SVD) representations - herein referred to as latent phenotypes - from resting-state functional connectivity. For this task, we predicted phenotypes in two large neuroimaging datasets: the Human Connectome Project (HCP) and the Philadelphia Neurodevelopmental Cohort (PNC). We illustrate the importance of regressing out confounds, which can significantly influence phenotype prediction. Our findings reveal that both phenotypes and their corresponding latent phenotypes yield similar predictive performance. Interestingly, only the first five latent phenotypes were reliably identified, and using just these reliable latent phenotypes to predict the phenotypes yielded performance similar to using all latent phenotypes. This suggests that the predictable information is concentrated in the first few latent phenotypes, allowing the remainder to be filtered out without any loss in performance. This study sheds light on the intricate relationship between functional connectivity and the predictability and reliability of phenotypic information, with potential implications for enhancing predictive modeling in neuroimaging research.
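As a rough illustration of the latent-phenotype pipeline described above, the sketch below regresses confounds out of a phenotype matrix, derives latent phenotypes via SVD, and predicts the leading components from connectivity features with ridge regression; the synthetic data, variable names, and the choice of ridge regression are assumptions for illustration only.

```python
# Illustrative sketch: derive "latent phenotypes" via SVD and predict them
# from functional-connectivity features. Data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subj, n_edges, n_pheno = 300, 500, 20
fc = rng.standard_normal((n_subj, n_edges))        # connectivity features
phenos = rng.standard_normal((n_subj, n_pheno))    # observed phenotypes
confounds = rng.standard_normal((n_subj, 3))       # e.g. age, motion, sex

# Regress confounds out of the phenotypes before decomposition
phenos_res = phenos - LinearRegression().fit(confounds, phenos).predict(confounds)

# SVD: columns of U scaled by S give subject scores on latent phenotypes
U, S, Vt = np.linalg.svd(phenos_res, full_matrices=False)
latent = U * S                                      # (n_subj, n_pheno)

# Predict only the first few (reliable) latent phenotypes
for k in range(5):
    r2 = cross_val_score(Ridge(alpha=10.0), fc, latent[:, k], cv=5).mean()
    print(f"latent phenotype {k + 1}: mean CV R^2 = {r2:.3f}")
```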
ABSTRACT
Eye tracking provides insights into social processing deficits in autism spectrum disorder (ASD), especially in conjunction with dynamic, naturalistic free-viewing stimuli. However, the question remains whether gaze characteristics, such as preference for specific facial features, can be considered a stable individual trait, particularly in those with ASD. If so, how much data are needed for consistent estimates? To address these questions, we assessed the stability and robustness of gaze preference for facial features as incremental amounts of movie data were introduced for analysis. We trained an artificial neural network to create an object-based segmentation of naturalistic movie clips (14 s each, 7410 frames total). Thirty-three high-functioning individuals with ASD and 36 age- and IQ-equated typically developing individuals (age range: 12-30 years) viewed 22 Hollywood movie clips, each depicting a social interaction. As we evaluated combinations of 1, 3, 5, 8, and 11 movie clips, gaze dwell times on core facial features became increasingly stable at within-subject, within-group, and between-group levels. Using a number of movie clips deemed sufficient by our analysis, we found that individuals with ASD displayed significantly less face-centered gaze (centralized on the nose; p < 0.001) but did not differ significantly from typically developing participants in eye or mouth looking times. Our findings validate gaze preference for specific facial features as a stable individual trait and highlight the possibility of misinterpretation with insufficient data. Additionally, we propose the use of a machine learning approach to stimulus segmentation to quickly and flexibly prepare dynamic stimuli for analysis. LAY SUMMARY: Using a data-driven approach to segmenting movie stimuli, we examined varying amounts of data to assess the stability of social gaze in individuals with autism spectrum disorder (ASD). We found a reduction in social fixations in participants with ASD, driven by decreased attention to the center of the face. Our findings further support the validity of gaze preference for facial features as a stable individual trait when sufficient data are used.
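The following sketch illustrates, on synthetic data, how dwell-time proportions on facial features can be estimated from growing subsets of clips and compared against a held-out half to probe stability; the feature set, mixing weights, and split-half correlation are illustrative choices, not the study's analysis.

```python
# Illustrative sketch: per-subject dwell-time proportions on facial features,
# with stability probed by correlating estimates from increasing numbers of
# clips against a held-out half of the clips. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_clips = 30, 22
features = ["eyes", "nose", "mouth", "rest_of_face", "background"]

pref = rng.dirichlet([4, 6, 3, 2, 5], size=n_subj)           # stable per-subject trait
noise = rng.dirichlet(np.ones(5), size=(n_subj, n_clips))    # per-clip variability
dwell = 0.7 * pref[:, None, :] + 0.3 * noise                 # rows still sum to 1

def estimate(dwell, clip_idx):
    """Mean dwell-time proportions across the selected clips."""
    return dwell[:, clip_idx, :].mean(axis=1)

nose = features.index("nose")
half_b = estimate(dwell, np.arange(11, 22))                  # held-out half
for n in (1, 3, 5, 8, 11):
    half_a = estimate(dwell, np.arange(n))                   # growing subset
    r = np.corrcoef(half_a[:, nose], half_b[:, nose])[0, 1]
    print(f"{n:2d} clips vs held-out half (nose dwell time): r = {r:.2f}")
```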
Subjects
Autism Spectrum Disorder, Adolescent, Adult, Child, Face, Ocular Fixation, Humans, Motion Pictures, Phenotype, Young Adult
ABSTRACT
In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on a completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods, both in the similarity between the segmentation predictions and the FreeSurfer labels and in the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error, and that the uncertainty across the whole brain can predict the manual quality control ratings of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem.
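The sketch below conveys the general idea of per-voxel uncertainty from a stochastic segmentation network using plain Monte Carlo dropout; it is a stand-in for, not a reproduction of, the paper's spike-and-slab dropout variational inference, and the toy network and entropy summary are assumptions.

```python
# Sketch of the general idea (not the paper's spike-and-slab variational
# scheme): keep dropout active at test time and use the spread of repeated
# stochastic forward passes as a per-voxel uncertainty estimate.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy 3-D segmentation head with dropout; a stand-in, not FreeSurfer-scale."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p=0.2),
            nn.Conv3d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

model = TinySegNet()
model.train()                        # keep dropout stochastic at inference
volume = torch.randn(1, 1, 32, 32, 32)

with torch.no_grad():
    probs = torch.stack([model(volume).softmax(dim=1) for _ in range(20)])

mean_probs = probs.mean(dim=0)                                    # segmentation estimate
entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
print("mean per-voxel predictive entropy:", entropy.mean().item())
```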
ABSTRACT
[This corrects the article DOI: 10.3389/fpsyg.2017.01551.].
ABSTRACT
Collecting the large datasets needed to train deep neural networks can be very difficult, particularly for the many applications in which sharing and pooling data is complicated by practical, ethical, or legal concerns. However, it may be the case that derivative datasets or predictive models developed within individual sites can be shared and combined with fewer restrictions. Training on distributed data and combining the resulting networks is often framed as continual learning, but existing continual learning methods require the networks to be trained sequentially. In this paper, we introduce distributed weight consolidation (DWC), a continual learning method that consolidates the weights of separate neural networks, each trained on an independent dataset. We evaluated DWC with a brain segmentation case study, in which we consolidated dilated convolutional neural networks trained on independent structural magnetic resonance imaging (sMRI) datasets from different sites. We found that DWC increased performance on test sets from the different sites while maintaining generalization performance on a very large and completely independent multi-site dataset, compared with an ensemble baseline.
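As a rough sketch of the consolidation step, the code below combines per-site weight posteriors by precision-weighted averaging; this conveys the flavor of merging independently trained networks but is not the paper's exact variational update, and all values are toy placeholders.

```python
# Hedged sketch of consolidating weights from networks trained independently
# at different sites via precision-weighted averaging (in the spirit of DWC,
# not the paper's exact update).
import numpy as np

def consolidate(means, variances, eps=1e-8):
    """Combine per-site weight posteriors (mean, variance) for one parameter tensor."""
    precisions = [1.0 / (v + eps) for v in variances]
    total = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, means)) / total
    return mean, 1.0 / total

# Toy example: the same 2x2 weight matrix learned at three sites
site_means = [np.array([[0.9, -0.2], [0.1, 0.5]]),
              np.array([[1.1, -0.1], [0.0, 0.4]]),
              np.array([[1.0, -0.3], [0.2, 0.6]])]
site_vars = [np.full((2, 2), v) for v in (0.05, 0.20, 0.10)]

w, var = consolidate(site_means, site_vars)
print("consolidated weights:\n", w)
print("consolidated variance:\n", var)
```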
ABSTRACT
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models: digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are thus better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they recognize objects more accurately, especially under challenging conditions. This work shows that computer vision can benefit from recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
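A minimal sketch of a convolutional layer with bottom-up (B) and lateral (L) connections, unrolled over a few time steps, is shown below; the layer sizes, nonlinearity, and number of time steps are illustrative assumptions rather than the architectures evaluated in the paper.

```python
# Hedged sketch of a convolutional layer with bottom-up (B) and lateral (L)
# connections, unrolled for a few time steps.
import torch
import torch.nn as nn

class BLConvLayer(nn.Module):
    def __init__(self, in_ch, out_ch, steps=4):
        super().__init__()
        self.bottom_up = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.lateral = nn.Conv2d(out_ch, out_ch, 3, padding=1)  # recurrence within the layer
        self.steps = steps

    def forward(self, x):
        h = torch.relu(self.bottom_up(x))         # t = 0: feedforward pass only
        for _ in range(self.steps - 1):           # later steps add lateral recurrence
            h = torch.relu(self.bottom_up(x) + self.lateral(h))
        return h

layer = BLConvLayer(1, 8, steps=4)
occluded_digit = torch.randn(1, 1, 28, 28)        # stand-in for a cluttered digit image
print(layer(occluded_digit).shape)                # torch.Size([1, 8, 28, 28])
```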
ABSTRACT
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
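The core of the approach can be sketched as follows: compute an RDM for a batch of stimuli in both student and teacher, then penalize the mismatch between the two RDMs; the correlation-distance RDM and mean-squared off-diagonal loss used here are illustrative choices.

```python
# Hedged sketch of representational distance learning: build representational
# distance matrices (RDMs) from activations and penalize the student-teacher
# RDM mismatch. Distance measure and loss form are illustrative.
import torch

def rdm(acts):
    """Pairwise correlation distances between the rows (stimuli) of an activation matrix."""
    a = acts - acts.mean(dim=1, keepdim=True)
    a = a / (a.norm(dim=1, keepdim=True) + 1e-8)
    return 1.0 - a @ a.T                          # (n_stimuli, n_stimuli)

def rdl_loss(student_acts, teacher_acts):
    """Mean squared difference between the off-diagonal entries of the two RDMs."""
    s, t = rdm(student_acts), rdm(teacher_acts)
    mask = ~torch.eye(s.shape[0], dtype=torch.bool)
    return ((s - t)[mask] ** 2).mean()

student = torch.randn(16, 64, requires_grad=True)   # 16 stimuli, 64 student units
teacher = torch.randn(16, 128)                       # teacher layer of different width
loss = rdl_loss(student, teacher)
loss.backward()                                      # gradient flows to the student only
print("RDL loss:", loss.item())
```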
ABSTRACT
This paper overviews one of the most important, interesting, and challenging problems in oncology: early diagnosis of prostate cancer. Developing effective diagnostic techniques for prostate cancer is of great clinical importance and can improve the effectiveness of treatment and increase the patient's chance of survival. The main focus of this study is to overview the different in-vitro and in-vivo technologies for diagnosing prostate cancer. This review discusses the current clinically used in-vitro cancer diagnostic tools, such as biomarker tests and needle biopsies, including their applications, advantages, and limitations. Moreover, current in-vitro research tools that focus on the role of nanotechnology in prostate cancer diagnosis are detailed. In addition to the in-vitro techniques, the study discusses in detail state-of-the-art non-invasive in-vivo Computer-Aided Diagnosis (CAD) systems for prostate cancer based on analyzing Transrectal Ultrasound (TRUS) and different types of magnetic resonance imaging (MRI), e.g., T2-MRI, Diffusion Weighted Imaging (DWI), Dynamic Contrast Enhanced (DCE)-MRI, and multi-parametric MRI, focusing on their implementation, experimental procedures, and reported outcomes. Furthermore, the paper addresses the limitations of current prostate cancer diagnostic techniques, outlines the challenges these techniques face, and introduces recent trends to address these challenges, including biomarkers used in in-vitro lab-on-a-chip nanotechnology-based methods.
Subjects
Diagnostic Techniques and Procedures, Prostatic Neoplasms/diagnosis, Animals, Computer-Assisted Diagnosis, Humans, Magnetic Resonance Imaging, Male, Transrectal High-Intensity Focused Ultrasound
ABSTRACT
Accurate automatic extraction of a 3-D cerebrovascular system from images obtained by time-of-flight (TOF) or phase-contrast (PC) magnetic resonance angiography (MRA) is a challenging segmentation problem due to the small size of the objects of interest (blood vessels) in each 2-D MRA slice and the complex surrounding anatomical structures (e.g., fat, bones, or gray and white brain matter). We show that, due to the multimodal nature of MRA data, blood vessels can be accurately separated from the background in each slice using a voxel-wise classification based on precisely identified probability models of voxel intensities. To identify the models, the empirical marginal probability distribution of intensities is closely approximated with a linear combination of discrete Gaussians (LCDG) with alternating signs, using our previously developed EM-based techniques for precise linear-combination-of-Gaussians approximation, adapted to handle LCDGs. The high accuracy of the proposed approach is experimentally validated on 85 real MRA datasets (50 TOF and 35 PC) as well as on synthetic MRA data for special 3-D geometrical phantoms of known shapes.
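As a simplified illustration of the voxel-wise classification idea, the sketch below fits a standard two-component Gaussian mixture to slice intensities and thresholds the posterior of the brighter mode; this is a stand-in for the paper's LCDG model with alternating signs, and the synthetic slice is an assumption.

```python
# Hedged sketch of voxel-wise vessel/background classification using a plain
# two-class Gaussian mixture as a stand-in for the paper's LCDG model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
slice_shape = (64, 64)
background = rng.normal(80, 15, size=slice_shape)        # tissue/background intensities
vessels = rng.normal(200, 10, size=slice_shape)          # bright vessel intensities
is_vessel = rng.random(slice_shape) < 0.03               # sparse vessels
mra_slice = np.where(is_vessel, vessels, background)

gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(mra_slice.reshape(-1, 1))
bright = int(np.argmax(gmm.means_.ravel()))              # vessel class = brighter mode
posterior = gmm.predict_proba(mra_slice.reshape(-1, 1))[:, bright]
vessel_mask = (posterior > 0.5).reshape(slice_shape)
print("segmented vessel fraction:", vessel_mask.mean())
```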
Subjects
Computer-Assisted Image Processing/methods, Magnetic Resonance Angiography/methods, Algorithms, Brain/anatomy & histology, Brain/blood supply, Cerebrovascular Circulation, Factual Databases, Humans, Normal Distribution, Imaging Phantoms, Reproducibility of Results
ABSTRACT
A new approach is proposed for aligning 3D CT data of a segmented lung object with a given prototype (reference lung object) using an affine transformation. The visual appearance of the lung in CT images, after equalizing their signals, is modeled with a new 3D Markov-Gibbs random field (MGRF) with pairwise interactions. Similarity to the prototype is measured by the Gibbs energy of signal co-occurrences in a characteristic subset of voxel pairs derived automatically from the prototype. An object is aligned by an affine transformation that maximizes this similarity, using an automatic initialization followed by a gradient search. Experiments confirm that our approach aligns complex objects better than popular conventional algorithms.
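The sketch below illustrates the general alignment scheme of maximizing an image-similarity measure over affine parameters; normalized cross-correlation and a Powell search stand in for the paper's Gibbs-energy similarity and gradient search, and the synthetic volumes are placeholders.

```python
# Hedged sketch of affine alignment by maximizing an image-similarity measure;
# normalized cross-correlation and Powell search replace the paper's
# Gibbs-energy similarity and gradient-based optimization.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

rng = np.random.default_rng(3)
prototype = rng.random((32, 32, 32))
# Target = prototype under a known, simple affine distortion (scale + shift)
target = affine_transform(prototype, np.diag([1.1, 1.0, 0.9]), offset=(2, 0, -1))

def neg_similarity(params, moving, fixed):
    sx, sy, sz, tx, ty, tz = params
    warped = affine_transform(moving, np.diag([sx, sy, sz]), offset=(tx, ty, tz))
    a, b = warped.ravel() - warped.mean(), fixed.ravel() - fixed.mean()
    return -np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

x0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])             # identity initialization
res = minimize(neg_similarity, x0, args=(prototype, target), method="Powell")
print("recovered scales/offsets:", np.round(res.x, 2))
```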
Subjects
Algorithms, Three-Dimensional Imaging/methods, Lung/diagnostic imaging, Automated Pattern Recognition/methods, Computer-Assisted Radiographic Image Interpretation/methods, Thoracic Radiography/methods, Subtraction Technique, Computer Simulation, Humans, Markov Chains, Biological Models, Statistical Models, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
ABSTRACT
Kidney segmentation is a key step in developing any noninvasive computer-aided diagnosis (CAD) system for early detection of acute renal rejection. This paper describes a new 3-D approach for segmenting the kidney from computed tomography (CT) images. The kidney borders are segmented from the surrounding abdominal tissues with a geometric deformable model guided by a special stochastic speed relationship. The latter accounts for a shape prior and for appearance features in terms of voxel-wise image intensities and their pairwise spatial interactions, integrated into a two-level joint Markov-Gibbs random field (MGRF) model of the kidney and its background. The segmentation approach was evaluated on 21 CT datasets with available manual expert segmentations. Performance evaluation based on the receiver operating characteristic (ROC) curve and the Dice similarity coefficient (DSC) between manually drawn and automatically segmented contours confirms the robustness and accuracy of the proposed approach.
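For reference, a minimal sketch of the Dice similarity coefficient used in the evaluation is given below; the masks are synthetic placeholders.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) comparing a manually
# drawn kidney mask with an automatically segmented one (synthetic masks).
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((64, 64, 64), dtype=bool)
manual[20:44, 20:44, 20:44] = True                # expert "kidney" region
automatic = np.zeros_like(manual)
automatic[22:46, 21:45, 20:44] = True             # slightly shifted automatic result

print(f"DSC = {dice(manual, automatic):.3f}")
```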