Results 1 - 20 of 24
1.
bioRxiv ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38798648

ABSTRACT

Neural organoids have revolutionized how human neurodevelopmental disorders (NDDs) are studied. Yet, their utility for screening complex NDD etiologies and in drug discovery is limited by a lack of scalable and quantifiable derivation formats. Here, we describe the RosetteArray® platform's ability to be used as an off-the-shelf, 96-well plate assay that standardizes incipient forebrain and spinal cord organoid morphogenesis as micropatterned, 3-D, singularly polarized neural rosette tissues (>9000 per plate). RosetteArrays are seeded from cryopreserved human pluripotent stem cells, cultured for 6-8 days, and immunostained, and the resulting images are quantified using artificial intelligence-based software. We demonstrate the platform's suitability for screening developmental neurotoxicity as well as genetic and environmental factors known to confer neural tube defect risk. Given the presence of rosette morphogenesis perturbation in neural organoid models of NDDs and neurodegenerative disorders, the RosetteArray platform could enable quantitative high-throughput screening (qHTS) of human neurodevelopmental risk across regulatory and precision medicine applications.

3.
Nat Commun ; 14(1): 3822, 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37380668

ABSTRACT

Climate-driven changes in precipitation amounts and their seasonal variability are expected in many continental-scale regions during the remainder of the 21st century. However, much less is known about future changes in the predictability of seasonal precipitation, an important Earth-system property relevant for climate adaptation. Here, on the basis of CMIP6 models that capture the present-day teleconnections between seasonal precipitation and previous-season sea surface temperature (SST), we show that climate change is expected to alter the SST-precipitation relationships and thus our ability to predict seasonal precipitation by 2100. Specifically, in the tropics, seasonal precipitation predictability from SSTs is projected to increase throughout the year, except in northern Amazonia during boreal winter. Concurrently, in the extratropics, predictability is likely to increase in central Asia during boreal spring and winter. The altered predictability, together with enhanced interannual variability of seasonal precipitation, poses new opportunities and challenges for regional water management.

4.
SIAM J Math Data Sci ; 2(2): 480-504, 2020.
Article in English | MEDLINE | ID: mdl-32968717

ABSTRACT

Sparse models for high-dimensional linear regression and machine learning have received substantial attention over the past two decades. Model selection, or determining which features or covariates are the best explanatory variables, is critical to the interpretability of a learned model. Much of the current literature assumes that covariates are only mildly correlated. However, in many modern applications covariates are highly correlated and do not exhibit key properties (such as the restricted eigenvalue condition, restricted isometry property, or other related assumptions). This work considers a high-dimensional regression setting in which a graph governs both correlations among the covariates and the similarity among regression coefficients, meaning there is alignment between the covariates and regression coefficients. Using side information about the strength of correlations among features, we form a graph with edge weights corresponding to pairwise covariances. This graph is used to define a graph total variation regularizer that promotes similar weights for correlated features. This work shows how the proposed graph-based regularization yields mean-squared error guarantees for a broad range of covariance graph structures. These guarantees are optimal for many specific covariance graphs, including block and lattice graphs. Our proposed approach outperforms other methods for highly correlated designs in a variety of experiments on synthetic data and real biochemistry data.
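As a rough illustration of the regularizer described above, the graph total variation penalty can be sketched in a few lines. The chain graph, unit weights, and coefficient vectors below are toy assumptions for illustration, not details from the paper:

```python
import numpy as np

def graph_total_variation(beta, edges, weights):
    """Graph TV penalty: sum over edges (i, j) of w_ij * |beta_i - beta_j|.
    Covariates connected by heavily weighted edges (i.e., highly correlated
    ones) are pushed toward similar regression weights."""
    return sum(w * abs(beta[i] - beta[j]) for (i, j), w in zip(edges, weights))

# Toy chain (lattice-like) graph over 4 covariates with unit edge weights.
edges = [(0, 1), (1, 2), (2, 3)]
weights = [1.0, 1.0, 1.0]

beta_aligned = np.array([2.0, 2.0, 0.0, 0.0])  # piecewise constant over the graph
beta_rough = np.array([2.0, 0.0, 2.0, 0.0])    # oscillates across edges

print(graph_total_variation(beta_aligned, edges, weights))  # 2.0
print(graph_total_variation(beta_rough, edges, weights))    # 6.0
```

Coefficient vectors that are piecewise constant over the covariance graph incur a small penalty, which is the sense in which correlated features are encouraged to share weights; the full estimator and its guarantees are in the article.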

5.
J Clim ; 34(2): 737-754, 2020 Dec 23.
Article in English | MEDLINE | ID: mdl-34045793

ABSTRACT

Understanding the physical drivers of seasonal hydroclimatic variability and improving predictive skill remains a challenge with important socioeconomic and environmental implications for many regions around the world. Physics-based deterministic models show limited ability to predict precipitation as the lead time increases, due to imperfect representation of physical processes and incomplete knowledge of initial conditions. Similarly, statistical methods drawing upon established climate teleconnections have low prediction skill due to the complex nature of the climate system. Recently, promising data-driven approaches have been proposed, but they often suffer from overparameterization and overfitting due to the short observational record, and they often do not account for spatiotemporal dependencies among covariates (i.e., predictors such as sea surface temperatures). This study addresses these challenges via a predictive model based on a graph-guided regularizer that simultaneously promotes similarity of predictive weights for highly correlated covariates and enforces sparsity in the covariate domain. This approach both decreases the effective dimensionality of the problem and identifies the most predictive features without specifying them a priori. We use large ensemble simulations from a climate model to construct this regularizer, reducing the structural uncertainty in the estimation. We apply the learned model to predict winter precipitation in the southwestern United States using sea surface temperatures over the entire Pacific basin, and demonstrate its superiority over other regularization approaches and statistical models informed by known teleconnections. Our results highlight the potential to optimally combine the space-time structure of predictor variables learned from climate models with new graph-based regularizers to improve seasonal prediction.

6.
IEEE Trans Inf Theory ; 65(4): 2401-2422, 2019 Apr.
Article in English | MEDLINE | ID: mdl-31839683

ABSTRACT

Vector autoregressive models characterize a variety of time series in which linear combinations of current and past observations can be used to accurately predict future observations. For instance, each element of an observation vector could correspond to a different node in a network, and the parameters of an autoregressive model would correspond to the impact of the network structure on the time series evolution. Often these models are used successfully in practice to learn the structure of social, epidemiological, financial, or biological neural networks. However, little is known about statistical guarantees on estimates of such models in non-Gaussian settings. This paper addresses the inference of the autoregressive parameters and associated network structure within a generalized linear model framework that includes Poisson and Bernoulli autoregressive processes. At the heart of this analysis is a sparsity-regularized maximum likelihood estimator. While sparsity-regularization is well-studied in the statistics and machine learning communities, those analysis methods cannot be applied to autoregressive generalized linear models because of the correlations and potential heteroscedasticity inherent in the observations. Sample complexity bounds are derived using a combination of martingale concentration inequalities and modern empirical process techniques for dependent random variables. These bounds, which are supported by several simulation studies, characterize the impact of various network parameters on estimator performance.
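A minimal sketch of the model class analyzed above: a Poisson autoregressive GLM simulated forward on a small made-up network and then fit by sparsity-regularized maximum likelihood via proximal gradient descent. The network, step size, penalty level, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small, made-up 3-node network: each node damps itself and is excited by a neighbor.
p_dim, T = 3, 400
A_true = np.array([[-0.5, 0.3, 0.0],
                   [0.0, -0.5, 0.3],
                   [0.0, 0.0, -0.5]])
nu = np.full(p_dim, 0.5)

# Simulate the Poisson autoregressive GLM: counts at time t set the rates at t+1.
X = np.zeros((T, p_dim))
for t in range(1, T):
    lam = np.exp(nu + A_true @ X[t - 1])
    X[t] = rng.poisson(lam)

def fit(X, tau=0.01, lr=1e-3, iters=2000):
    """Sparsity-regularized maximum likelihood by proximal gradient descent:
    gradient steps on the Poisson negative log-likelihood, then an l1
    soft-threshold on the network parameters."""
    T, p_dim = X.shape
    A = np.zeros((p_dim, p_dim))
    b = np.zeros(p_dim)
    for _ in range(iters):
        rates = np.exp(b + X[:-1] @ A.T)      # predicted rates for t = 1..T-1
        resid = rates - X[1:]                 # NLL gradient wrt the linear predictor
        A -= lr * (resid.T @ X[:-1]) / (T - 1)
        b -= lr * resid.mean(axis=0)
        A = np.sign(A) * np.maximum(np.abs(A) - lr * tau, 0.0)  # l1 prox
    return A, b

A_hat, b_hat = fit(X)
```

The paper's contribution is the theory (sample complexity bounds under dependence and heteroscedasticity) rather than this estimator loop, which is a standard recipe written out for concreteness.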

7.
Methods Mol Biol ; 1903: 255-267, 2019.
Article in English | MEDLINE | ID: mdl-30547447

ABSTRACT

We present the baseline regularization model for computational drug repurposing using electronic health records (EHRs). In EHRs, prescriptions of various drugs are recorded over time for many patients. At the same time, numeric physical measurements (e.g., fasting blood glucose level) are also recorded. Baseline regularization uses statistical relationships between the occurrences of prescriptions of particular drugs and increases or decreases in the values of particular numeric physical measurements to identify potential repurposing opportunities.


Subjects
Computational Biology/methods, Drug Repositioning/methods, Machine Learning, Algorithms, Electronic Health Records, Humans
8.
IEEE Trans Pattern Anal Mach Intell ; 41(5): 1173-1187, 2019 May.
Article in English | MEDLINE | ID: mdl-29993736

ABSTRACT

In an era of ubiquitous large-scale streaming data, the availability of data far exceeds the capacity of expert human analysts. In many settings, such data is either discarded or stored unprocessed in data centers. This paper proposes a method of online data thinning, in which large-scale streaming datasets are winnowed to preserve unique, anomalous, or salient elements for timely expert analysis. At the heart of this proposed approach is an online anomaly detection method based on dynamic, low-rank Gaussian mixture models. Specifically, the high-dimensional covariance matrices associated with the Gaussian components are constrained to be low-rank. According to this model, most observations lie near a union of subspaces. The low-rank modeling mitigates the curse of dimensionality associated with anomaly detection for high-dimensional data, and recent advances in subspace clustering and subspace tracking allow the proposed method to adapt to dynamic environments. Furthermore, the proposed method allows subsampling, is robust to missing data, and uses a mini-batch online optimization approach. The resulting algorithms are scalable, efficient, and are capable of operating in real time. Experiments on wide-area motion imagery and e-mail databases illustrate the efficacy of the proposed approach.
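The union-of-subspaces intuition behind the low-rank mixture model can be caricatured as follows. The subspaces, dimensions, and scoring rule here are illustrative stand-ins; the paper's method additionally tracks the subspaces online and handles subsampling and missing data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two rank-2 subspaces in 10-D stand in for the low-rank Gaussian mixture components.
d, r = 10, 2
U1, _ = np.linalg.qr(rng.normal(size=(d, r)))
U2, _ = np.linalg.qr(rng.normal(size=(d, r)))

def anomaly_score(x, bases):
    """Residual norm after projecting x onto the closest subspace; under the
    union-of-subspaces model, normal points have small residuals."""
    return min(np.linalg.norm(x - U @ (U.T @ x)) for U in bases)

normal_point = U1 @ rng.normal(size=r)  # lies (exactly) on the first subspace
anomaly = rng.normal(size=d)            # a generic point, far from both subspaces

print(anomaly_score(normal_point, [U1, U2]) < anomaly_score(anomaly, [U1, U2]))  # True
```

Thinning then amounts to keeping only the observations whose score exceeds a threshold, so that the retained stream contains the unusual elements worth expert attention.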

9.
Elife ; 7, 2018 Oct 29.
Article in English | MEDLINE | ID: mdl-30371350

ABSTRACT

Human pluripotent stem cell (hPSC)-derived neural organoids display unprecedented emergent properties. Yet in contrast to the singular neuroepithelial tube from which the entire central nervous system (CNS) develops in vivo, current organoid protocols yield tissues with multiple neuroepithelial units, a.k.a. neural rosettes, each acting as an independent morphogenesis center and thereby confounding coordinated, reproducible tissue development. Here, we discover that controlling initial tissue morphology can effectively (>80%) induce single neural rosette emergence within hPSC-derived forebrain and spinal tissues. Notably, the optimal tissue morphology for observing singular rosette emergence was distinct for forebrain versus spinal tissues due to previously unknown differences in ROCK-mediated cell contractility. Following release of geometric confinement, the tissues displayed radial outgrowth with maintenance of a singular neuroepithelium and peripheral neuronal differentiation. Thus, we have identified neural tissue morphology as a critical biophysical parameter for controlling in vitro neural tissue morphogenesis, furthering advancement towards the biomanufacture of CNS tissues with biomimetic anatomy and physiology.


Subjects
Cell Differentiation, Organ Culture Techniques/methods, Pluripotent Stem Cells/physiology, Prosencephalon/cytology, Spinal Cord/cytology, Biophysical Phenomena, Humans, Morphogenesis
10.
Appl Opt ; 55(29): 8316-8334, 2016 Oct 10.
Article in English | MEDLINE | ID: mdl-27828081

ABSTRACT

Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that is effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images is utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.

11.
Biomed Opt Express ; 7(9): 3412-3424, 2016 Sep 01.
Article in English | MEDLINE | ID: mdl-27699108

ABSTRACT

Fluorescence microscopy can be used to acquire real-time images of tissue morphology and, with appropriate algorithms, can rapidly quantify features associated with disease. The objective of this study was to assess the ability of various segmentation algorithms to isolate fluorescent positive features (FPFs) in heterogeneous images and identify an approach that can be used across multiple fluorescence microscopes with minimal tuning between systems. Specifically, we show a variety of image segmentation algorithms applied to images of stained tumor and muscle tissue acquired with three different fluorescence microscopes. Results indicate that a technique called maximally stable extremal regions followed by thresholding (MSER + Binary) yielded the greatest contrast in FPF density between tumor and muscle images across multiple microscopy systems.

12.
Nanotechnology ; 27(36): 364001, 2016 Sep 09.
Article in English | MEDLINE | ID: mdl-27479946

ABSTRACT

Image registration and non-local Poisson principal component analysis (PCA) denoising improve the quality of characteristic x-ray (EDS) spectrum imaging of Ca-stabilized Nd2/3TiO3 acquired at atomic resolution in a scanning transmission electron microscope. Image registration based on the simultaneously acquired high angle annular dark field image significantly outperforms acquisition with a long pixel dwell time or drift correction using a reference image. Non-local Poisson PCA denoising reduces noise more strongly than conventional weighted PCA while preserving atomic structure more faithfully. The reliability of, and optimal internal parameters for, non-local Poisson PCA denoising of EDS spectrum images are assessed using tests on phantom data.

13.
J Cancer Res Clin Oncol ; 142(7): 1475-86, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27106032

ABSTRACT

PURPOSE: Histopathology is the clinical standard for tissue diagnosis; however, it requires tissue processing, laboratory personnel and infrastructure, and a highly trained pathologist to diagnose the tissue. Optical microscopy can provide real-time diagnosis, which could be used to inform the management of breast cancer. The goal of this work is to obtain images of tissue morphology through fluorescence microscopy and vital fluorescent stains and to develop a strategy to segment and quantify breast tissue features in order to enable automated tissue diagnosis. METHODS: We combined acriflavine staining, fluorescence microscopy, and a technique called sparse component analysis to segment nuclei and nucleoli, which are collectively referred to as acriflavine positive features (APFs). A series of variables, which included the density, area fraction, diameter, and spacing of APFs, were quantified from images taken from clinical core needle breast biopsies and used to create a multivariate classification model. The model was developed using a training data set and validated using an independent testing data set. RESULTS: The top performing classification model included the density and area fraction of smaller APFs (those less than 7 µm in diameter, which likely correspond to stained nucleoli). When applied to the independent testing set composed of 25 biopsy panels, the model achieved a sensitivity of 82%, a specificity of 79%, and an overall accuracy of 80%. CONCLUSIONS: These results indicate that our quantitative microscopy toolbox is a potentially viable approach for detecting the presence of malignancy in clinical core needle breast biopsies.


Subjects
Breast Neoplasms/diagnosis, Breast/pathology, Point-of-Care Systems, Biopsy, Diagnosis, Differential, Female, Humans, Staining and Labeling
14.
PLoS One ; 11(1): e0147006, 2016.
Article in English | MEDLINE | ID: mdl-26799613

ABSTRACT

Intraoperative assessment of surgical margins is critical to ensuring residual tumor does not remain in a patient. Previously, we developed a fluorescence structured illumination microscope (SIM) system with a single-shot field of view (FOV) of 2.1 × 1.6 mm (3.4 mm2) and sub-cellular resolution (4.4 µm). The goal of this study was to test the utility of this technology for the detection of residual disease in a genetically engineered mouse model of sarcoma. Primary soft tissue sarcomas were generated in the hindlimb, and after the tumor was surgically removed, the relevant margin was stained with acridine orange (AO), a vital stain that brightly stains cell nuclei and fibrous tissues. The tissues were imaged with the SIM system with the primary goal of visualizing fluorescent features from tumor nuclei. Given the heterogeneity of the background tissue (presence of adipose tissue and muscle), an algorithm known as maximally stable extremal regions (MSER) was optimized and applied to the images to specifically segment nuclear features. A logistic regression model was used to classify a tissue site as positive or negative by calculating the area fraction and shape of the segmented features that were present, and the resulting receiver operating characteristic (ROC) curve was generated by varying the probability threshold. Based on the ROC curves, the model was able to classify tumor and normal tissue with 77% sensitivity and 81% specificity (Youden's index). For an unbiased measure of the model performance, it was applied to a separate validation dataset that resulted in 73% sensitivity and 80% specificity. When this approach was applied to representative whole margins, for a tumor probability threshold of 50%, only 1.2% of all regions from the negative margin exceeded this threshold, while over 14.8% of all regions from the positive margin exceeded this threshold.
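The classification step described above can be sketched as logistic regression on per-site feature summaries, with the operating threshold chosen by Youden's index from a threshold sweep. The feature distributions, sample size, and fitting details below are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-site features from segmentation: nuclear area fraction and a
# shape summary. Tumor sites (label 1) are simulated with higher area fractions.
n = 400
y = rng.integers(0, 2, n)
area_frac = np.where(y == 1, rng.normal(0.30, 0.08, n), rng.normal(0.15, 0.08, n))
shape = rng.normal(0.5, 0.1, n)  # uninformative control feature

def zscore(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([np.ones(n), zscore(area_frac), zscore(shape)])

# Plain logistic regression fit by gradient descent (no regularization).
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

probs = 1.0 / (1.0 + np.exp(-X @ w))

# Sweep the probability threshold to trace an ROC curve; pick Youden's index,
# the threshold maximizing sensitivity + specificity - 1.
thresholds = np.linspace(0.0, 1.0, 101)
tpr = [(probs[y == 1] >= t).mean() for t in thresholds]
fpr = [(probs[y == 0] >= t).mean() for t in thresholds]
j = int(np.argmax(np.array(tpr) - np.array(fpr)))
print(f"threshold {thresholds[j]:.2f}: sensitivity {tpr[j]:.2f}, specificity {1 - fpr[j]:.2f}")
```

In the study the features come from MSER-segmented nuclei rather than simulated draws, but the threshold-sweep logic for building the ROC curve is the same.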


Subjects
Disease Models, Animal, Genetic Engineering, Microscopy, Fluorescence/methods, Sarcoma/pathology, Animals, Mice, Sarcoma/genetics
15.
Int J Cancer ; 137(10): 2403-12, 2015 Nov 15.
Article in English | MEDLINE | ID: mdl-25994353

ABSTRACT

The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was tested by imaging the in vivo tumor cavities from 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence.


Subjects
Bone Neoplasms/surgery, Diagnostic Imaging/methods, Neoplasm, Residual/diagnosis, Sarcoma/surgery, Animals, Bone Neoplasms/pathology, Humans, Image Processing, Computer-Assisted/methods, Intraoperative Care, Mice, Prospective Studies, Sarcoma/pathology, Sensitivity and Specificity
16.
PLoS One ; 8(6): e66198, 2013.
Article in English | MEDLINE | ID: mdl-23824589

ABSTRACT

PURPOSE: To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. MATERIALS AND METHODS: Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular-resolution microendoscope after topical application of a fluorescent anatomical contrast agent, acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. RESULTS: Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. CONCLUSION: The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue.


Subjects
Microscopy, Fluorescence/methods, Neoplasm, Residual/diagnosis, Humans, Neoplasm, Residual/pathology
17.
IEEE Trans Image Process ; 21(3): 1084-96, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21926022

ABSTRACT

Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
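The optimization loop described above can be caricatured in a few lines: a gradient step on a separable quadratic surrogate of the penalized negative Poisson log-likelihood, followed by an l1 soft-threshold (one of the penalization choices mentioned) clipped to the nonnegative orthant. The sensing matrix, signal, step size, and penalty weight below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: a sparse, nonnegative signal f*, more unknowns than observations,
# and Poisson-distributed counts y.
n_obs, n_unk = 80, 120
f_true = np.zeros(n_unk)
f_true[rng.choice(n_unk, 5, replace=False)] = rng.uniform(20.0, 40.0, 5)
A = rng.uniform(0.0, 1.0, size=(n_obs, n_unk)) / 10.0
y = rng.poisson(A @ f_true)

def reconstruct(y, A, tau=0.05, alpha=20.0, iters=2000, eps=1e-6):
    """Minimize  -log p(y | A f) + tau * ||f||_1  subject to f >= 0:
    gradient steps of size 1/alpha (the curvature of the separable quadratic
    surrogate), then soft-thresholding clipped to the nonnegative orthant."""
    f = np.ones(A.shape[1])
    for _ in range(iters):
        Af = A @ f + eps
        grad = A.T @ (1.0 - y / Af)  # gradient of the Poisson negative log-likelihood
        f = np.maximum(f - grad / alpha - tau / alpha, 0.0)
    return f

f_hat = reconstruct(y, A)
```

The paper develops this family of iterations with principled step-size selection and alternative penalties (total variation seminorms, partition-based multiscale estimates); the fixed step and plain l1 prox here are only the simplest instance.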

18.
IEEE Trans Pattern Anal Mach Intell ; 31(3): 563-9, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19147882

ABSTRACT

This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs that allows edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and it requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.


Subjects
Algorithms, Artificial Intelligence, Models, Theoretical, Pattern Recognition, Automated/methods, Computer Simulation
19.
Proc IEEE Int Symp Biomed Imaging ; : 803-806, 2009 Jun 28.
Article in English | MEDLINE | ID: mdl-22158826

ABSTRACT

The techniques introduced in this paper allow for accurate multi-scale image reconstruction of multi-photon microscopy data. Multi-photon microscopy (MPM) is a tool for the non-invasive imaging of living organisms and tissue. The data acquired using this technique can contain information about the position, excited state lifetime, and spectra of the observed photons. The small number of photons collected, however, limits the quality of the reconstruction. The multiscale framework in this paper results in an accurate representation of both the intensity and excited state lifetime information. Using a multiscale reconstruction approach based on a penalized likelihood function, the underlying image is more accurately represented as compared to a naive aggregate binning approach.

20.
Opt Express ; 16(21): 16352-63, 2008 Oct 13.
Article in English | MEDLINE | ID: mdl-18852741

ABSTRACT

Many infrared optical systems in wide-ranging applications such as surveillance and security frequently require large fields of view (FOVs). Often this necessitates a focal plane array (FPA) with a large number of pixels, which, in general, is very expensive. In a previous paper, we proposed a method for increasing the FOV without increasing the pixel resolution of the FPA by superimposing multiple sub-images within a static scene and disambiguating the observed data to reconstruct the original scene. This technique, in effect, allows each sub-image of the scene to share a single FPA, thereby increasing the FOV without compromising resolution. In this paper, we demonstrate the increase of FOVs in a realistic setting by physically generating a superimposed video from a single scene using an optical system employing a beamsplitter and a movable mirror. Without prior knowledge of the contents of the scene, we are able to disambiguate the two sub-images, successfully capturing both large-scale features and fine details in each sub-image. We improve upon our previous reconstruction approach by allowing each sub-image to have slowly changing components, carefully exploiting correlations between sequential video frames to achieve small mean errors and to reduce run times. We show the effectiveness of this improved approach by reconstructing the constituent images of a surveillance camera video.


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Subtraction Technique, Video Recording/methods, Artificial Intelligence, Infrared Rays, Reproducibility of Results, Sensitivity and Specificity