Results 1 - 20 of 21
1.
BMC Bioinformatics ; 25(1): 3, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166586

ABSTRACT

BACKGROUND: Uniform random sampling of mass-balanced flux solutions offers an unbiased appraisal of the capabilities of metabolic networks. Unfortunately, it is impossible to avoid thermodynamically infeasible loops in flux samples when using convex samplers on large metabolic models. Current strategies for randomly sampling the non-convex loopless flux space display limited efficiency and lack theoretical guarantees. RESULTS: Here, we present LooplessFluxSampler, an efficient algorithm for exploring the loopless mass-balanced flux solution space of metabolic models, based on an Adaptive Directions Sampling on a Box (ADSB) algorithm. ADSB is rooted in the general Adaptive Direction Sampling (ADS) framework, specifically the Parallel ADS, for which theoretical convergence and irreducibility results are available for sampling from arbitrary distributions. By sampling directions that adapt to the target distribution, ADSB traverses the sample space more efficiently, achieving faster mixing than other methods. Importantly, the presented algorithm is guaranteed to target the uniform distribution over convex regions, and it provably converges on the latter distribution over more general (non-convex) regions provided the sample can have full support. CONCLUSIONS: LooplessFluxSampler enables scalable statistical inference of the loopless mass-balanced solution space of large metabolic models. Grounded in a theoretically sound framework, this toolbox provides not only efficient but also reliable results for exploring the properties of the almost surely non-convex loopless flux space. Finally, LooplessFluxSampler includes a Markov Chain diagnostics suite for assessing the quality of the final sample and the performance of the algorithm.
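The flux space being sampled here is the mass-balanced polytope {v : Sv = 0, lb ≤ v ≤ ub}. The sketch below is a plain hit-and-run sampler over a tiny, made-up stoichiometric matrix, restricted to the null space of S so every iterate stays mass balanced; it is not the ADSB algorithm and it ignores loop constraints, but it illustrates the kind of space the paper's sampler explores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy network: S, lb, ub are illustrative, not a curated model.
S = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, -1.0]])
lb, ub = np.zeros(4), np.full(4, 10.0)

# Basis of the null space of S, so that moves within it keep S v = 0.
_, sv, Vt = np.linalg.svd(S)
N = Vt[np.sum(sv > 1e-10):].T

v = np.array([1.0, 1.0, 0.5, 0.5])        # feasible, mass-balanced start point
samples = []
for _ in range(5000):
    d = N @ rng.standard_normal(N.shape[1])
    d /= np.linalg.norm(d)                # random direction inside the null space
    lo = (lb - v) / d                     # step limits imposed by the box bounds
    hi = (ub - v) / d                     # (assumes no exactly-zero entries in d)
    t_min = np.where(d > 0, lo, hi).max()
    t_max = np.where(d > 0, hi, lo).min()
    v = v + rng.uniform(t_min, t_max) * d # uniform move along the feasible chord
    samples.append(v.copy())

print(np.mean(samples, axis=0).round(2))  # average sampled flux per reaction
```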


Subjects
Algorithms, Biological Models, Metabolic Networks and Pathways, Research Design, Physiological Adaptation
2.
Stat Med ; 39(21): 2695-2713, 2020 09 20.
Article in English | MEDLINE | ID: mdl-32419227

ABSTRACT

The degeneration of the human brain is a complex process, which often affects certain brain regions due to healthy aging or disease. This degeneration can be evaluated on regions of interest (ROI) in the brain through probabilistic networks and morphological estimates. Current approaches for finding such networks are limited to analyses at discrete neuropsychological stages, which cannot appropriately account for connectivity dynamics over the onset of cognitive deterioration, and morphological changes are seldom unified with connectivity networks, despite known dependencies. To overcome these limitations, a probabilistic wombling model is proposed to simultaneously estimate ROI cortical thickness and covariance networks contingent on rates of change in cognitive decline. The proposed model was applied to analyze longitudinal data from healthy control (HC) and Alzheimer's disease (AD) groups and found connection differences pertaining to regions that play a crucial role in lasting cognitive impairment, such as the entorhinal area and temporal regions. Moreover, HC cortical thickness estimates were significantly higher than those in the AD group across all ROIs. The analyses presented in this work will help practitioners jointly analyze brain tissue atrophy at the ROI level conditional on neuropsychological networks, which could potentially allow for more targeted therapeutic interventions.


Assuntos
Doença de Alzheimer , Disfunção Cognitiva , Doença de Alzheimer/patologia , Atrofia , Teorema de Bayes , Encéfalo/diagnóstico por imagem , Encéfalo/patologia , Cognição , Humanos , Imageamento por Ressonância Magnética
3.
Stat Sci ; 32(3): 385-404, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28883686

ABSTRACT

Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

4.
PLoS Comput Biol ; 11(12): e1004635, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26642072

ABSTRACT

In vitro studies and mathematical models are now being widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in terms of developing and analysing mathematical models, far less progress has been made in terms of understanding how to estimate model parameters using experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2-12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D and q are in the ranges 226-268 µm²h⁻¹, 311-351 µm²h⁻¹ and 0.23-0.39, 0.32-0.61 for the experimental periods of 0-24 h and 24-48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
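The estimation routine described here is ABC with a rejection step under vague priors. The sketch below shows only the bare ABC-rejection mechanics; the `simulate_colony` function and observed summary are hypothetical stand-ins, whereas the paper uses image-based summaries from a discrete colony-spreading model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_colony(D, lam, q, t=24.0):
    """Hypothetical stand-in for the colony simulator: returns a single toy
    summary (colony radius) so the ABC mechanics below are runnable."""
    radius0 = 100.0
    growth = 2.0 * np.sqrt(D * t) + radius0 * (np.exp(lam * t / 24.0) - 1.0) * (1.0 - q)
    return radius0 + growth + rng.normal(0.0, 5.0)

observed = 260.0                                # hypothetical observed summary
n_draws, tol = 50_000, 10.0

# Vague priors, reflecting little prior biological knowledge.
D = rng.uniform(100.0, 500.0, n_draws)          # um^2 / h
lam = rng.uniform(0.0, 0.1, n_draws)            # 1 / h
q = rng.uniform(0.0, 1.0, n_draws)              # adhesion strength

sims = np.array([simulate_colony(*theta) for theta in zip(D, lam, q)])
keep = np.abs(sims - observed) < tol            # ABC rejection step

for name, draws in [("D", D[keep]), ("lambda", lam[keep]), ("q", q[keep])]:
    cv = draws.std() / draws.mean()
    print(f"{name}: posterior mean {draws.mean():.3g}, CV {cv:.2f}")
```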


Assuntos
Melanoma/patologia , Melanoma/fisiopatologia , Modelos Biológicos , Modelos Estatísticos , Neoplasias Cutâneas/patologia , Neoplasias Cutâneas/fisiopatologia , Teorema de Bayes , Adesão Celular , Linhagem Celular Tumoral , Movimento Celular , Proliferação de Células , Simulação por Computador , Humanos , Interpretação de Imagem Assistida por Computador/métodos , Invasividade Neoplásica
5.
Biometrics ; 72(2): 344-53, 2016 06.
Article in English | MEDLINE | ID: mdl-26584211

ABSTRACT

In this article we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method involves incorporating an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel exact-approximate algorithm, which we refer to as alive SMC². The advantages of this approach over competing methods are that it is naturally adaptive, it does not involve between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series, and the cumulative number of prion disease cases in mule deer.
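The exact-approximate construction builds on particle-filter estimates of the likelihood for low-count time series. The sketch below is only the plain bootstrap particle filter for a toy partially observed count model, not the alive particle filter or the full alive SMC² algorithm; the model, parameter values, and data are illustrative.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

def particle_filter_loglik(y, birth, removal, n_particles=500):
    """Bootstrap particle filter estimate of log p(y | birth, removal) for a
    toy latent count model in which roughly half of the cases are observed."""
    x = rng.poisson(5, size=n_particles)                  # initial latent counts
    loglik = 0.0
    for obs in y:
        x = x + rng.poisson(birth, n_particles) - rng.binomial(x, removal)
        x = np.maximum(x, 0)
        w = poisson.pmf(obs, mu=0.5 * x)                  # observation density
        if w.sum() == 0.0:
            return -np.inf                                # all particles died out
        loglik += np.log(w.mean())
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample particles
    return loglik

y = [3, 5, 4, 6, 2, 1, 0, 2]                              # hypothetical weekly counts
print(particle_filter_loglik(y, birth=4.0, removal=0.3))
```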


Assuntos
Algoritmos , Teorema de Bayes , Modelos Estatísticos , Animais , Simulação por Computador , Humanos , Doença Iatrogênica , Cadeias de Markov , Modelos Biológicos , Método de Monte Carlo , Doenças Priônicas , Fatores de Tempo
6.
Ecol Appl ; 26(8): 2635-2646, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27862584

ABSTRACT

Monitoring programs are essential for understanding patterns, trends, and threats in ecological and environmental systems. However, such programs are costly in terms of dollars, human resources, and technology, and complex in terms of balancing short- and long-term requirements. In this work, we develop new statistical methods for implementing cost-effective adaptive sampling and monitoring schemes for coral reefs that can better utilize existing information and resources, and which can incorporate available prior information. Our research was motivated by developing efficient monitoring practices for Australia's Great Barrier Reef. We develop and implement two types of adaptive sampling schemes, static and sequential, and show that they can be more informative and cost-effective than an existing (nonadaptive) monitoring program. Our methods are developed in a Bayesian framework with a range of utility functions relevant to environmental monitoring. Our results demonstrate the considerable potential for adaptive design to support improved management outcomes in comparison to set-and-forget styles of surveillance monitoring.
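To give a flavour of the static design problem, the sketch below enumerates allocations of a fixed survey budget across three hypothetical reef zones and scores each by a simple Bayesian utility (total posterior precision of zone-level mean coral cover under a conjugate normal model). The utilities and sequential schemes in the paper are richer; all numbers here are illustrative.

```python
import itertools
import numpy as np

prior_sd = np.array([0.10, 0.05, 0.15])   # assumed prior uncertainty per zone
obs_sd = 0.08                             # assumed survey noise per transect
budget = 12                               # total transects available

def utility(n_per_zone):
    # Posterior variance of each zone mean under a normal-normal model,
    # then summed posterior precision as the design utility.
    post_var = 1.0 / (1.0 / prior_sd**2 + np.asarray(n_per_zone) / obs_sd**2)
    return np.sum(1.0 / post_var)

designs = [d for d in itertools.product(range(budget + 1), repeat=3)
           if sum(d) == budget]
best = max(designs, key=utility)
print("best allocation:", best, "utility:", round(utility(best), 1))
```

As expected, the best allocation puts more effort in the zones with the largest prior uncertainty.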


Assuntos
Recifes de Corais , Monitoramento Ambiental , Animais , Antozoários , Austrália , Teorema de Bayes , Humanos
7.
Sex Transm Infect ; 91(7): 513-9, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25564675

ABSTRACT

OBJECTIVES: Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. METHODS: A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shapes of these beta parameters were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. RESULTS: Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ∼120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (∼90% increase). CONCLUSIONS: We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence.
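The smoothing prior on the time-dependent parameters can be sketched directly: a multivariate Gaussian on the logit scale with a Matérn covariance over calendar years, so estimates change gradually across consecutive years. The Matérn 3/2 form, variance, and length-scale below are assumptions for illustration, not the values used in the paper.

```python
import numpy as np

# Matern 3/2 covariance over calendar years for a logit-scale parameter.
years = np.arange(2001, 2014)
d = np.abs(years[:, None] - years[None, :])          # year-to-year distances
sigma2, ell = 0.5, 3.0                               # illustrative hyperparameters
K = sigma2 * (1.0 + np.sqrt(3.0) * d / ell) * np.exp(-np.sqrt(3.0) * d / ell)

rng = np.random.default_rng(4)
logit_p = rng.multivariate_normal(np.full(len(years), -1.0), K)   # one prior draw
p = 1.0 / (1.0 + np.exp(-logit_p))                   # back to the probability scale
print(np.round(p, 3))
```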


Assuntos
Bioestatística/métodos , Infecções por Chlamydia/epidemiologia , Chlamydia/isolamento & purificação , Métodos Epidemiológicos , Infecções do Sistema Genital/epidemiologia , Adolescente , Adulto , Fatores Etários , Austrália/epidemiologia , Humanos , Incidência , Fatores Sexuais , Adulto Jovem
8.
Biometrics ; 71(1): 198-207, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25303085

ABSTRACT

Analytically or computationally intractable likelihood functions can arise in complex statistical inferential problems making them inaccessible to standard Bayesian inferential methods. Approximate Bayesian computation (ABC) methods address such inferential problems by replacing direct likelihood evaluations with repeated sampling from the model. ABC methods have been predominantly applied to parameter estimation problems and less to model choice problems due to the added difficulty of handling multiple model spaces. The ABC algorithm proposed here addresses model choice problems by extending Fearnhead and Prangle (2012, Journal of the Royal Statistical Society, Series B 74, 1-28) where the posterior mean of the model parameters estimated through regression formed the summary statistics used in the discrepancy measure. An additional stepwise multinomial logistic regression is performed on the model indicator variable in the regression step and the estimated model probabilities are incorporated into the set of summary statistics for model choice purposes. A reversible jump Markov chain Monte Carlo step is also included in the algorithm to increase model diversity for thorough exploration of the model space. This algorithm was applied to a validating example to demonstrate the robustness of the algorithm across a wide range of true model probabilities. Its subsequent use in three pathogen transmission examples of varying complexity illustrates the utility of the algorithm in inferring preference of particular transmission models for the pathogens.
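The key construction is using estimated model probabilities, obtained by regressing the model indicator on simulated summaries, as additional summary statistics for ABC model choice. The sketch below shows only that regression step on pilot simulations from two toy "transmission models"; the simulators, summaries, and observed values are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Pilot simulations: model indicator m and raw summaries from two toy models.
n_pilot = 5000
m = rng.integers(0, 2, n_pilot)
raw = np.column_stack([
    rng.poisson(np.where(m == 0, 4.0, 7.0)),          # outbreak-size summary
    rng.gamma(2.0, np.where(m == 0, 1.0, 0.5)),       # outbreak-duration summary
])

# Regress the model indicator on the raw summaries; the fitted probabilities
# become low-dimensional summary statistics for ABC model choice.
clf = LogisticRegression(max_iter=1000).fit(raw, m)
observed_raw = np.array([[6.0, 1.2]])                 # hypothetical observed summaries
print("estimated model probabilities:", clf.predict_proba(observed_raw)[0])
```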


Assuntos
Teorema de Bayes , Doenças Transmissíveis/epidemiologia , Doenças Transmissíveis/transmissão , Interpretação Estatística de Dados , Surtos de Doenças/estatística & dados numéricos , Vigilância da População/métodos , Simulação por Computador , Métodos Epidemiológicos , Humanos , Modelos Estatísticos , Prevalência , Medição de Risco/métodos
9.
Biometrics ; 69(4): 937-48, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24131221

ABSTRACT

In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables.


Assuntos
Teorema de Bayes , Interpretação Estatística de Dados , Funções Verossimilhança , Modelos Estatísticos , Reconhecimento Automatizado de Padrão/métodos , Simulação por Computador , Cadeias de Markov , Projetos de Pesquisa , Processos Estocásticos
10.
R Soc Open Sci ; 7(3): 191315, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32269786

ABSTRACT

The behaviour of many processes in science and engineering can be accurately described by dynamical system models consisting of a set of ordinary differential equations (ODEs). Often these models have several unknown parameters that are difficult to estimate from experimental data, in which case Bayesian inference can be a useful tool. In principle, exact Bayesian inference using Markov chain Monte Carlo (MCMC) techniques is possible; however, in practice, such methods may suffer from slow convergence and poor mixing. To address this problem, several approaches based on approximate Bayesian computation (ABC) have been introduced, including Markov chain Monte Carlo ABC (MCMC ABC) and sequential Monte Carlo ABC (SMC ABC). While the system of ODEs describes the underlying process that generates the data, the observed measurements invariably include errors. In this paper, we argue that several popular ABC approaches fail to adequately model these errors because the acceptance probability depends on the choice of the discrepancy function and the tolerance without any consideration of the error term. We observe that the so-called posterior distributions derived from such methods do not accurately reflect the epistemic uncertainties in parameter values. Moreover, we demonstrate that these methods provide minimal computational advantages over exact Bayesian methods when applied to two ODE epidemiological models with simulated data and one with real data concerning malaria transmission in Afghanistan.

11.
Front Physiol ; 9: 1114, 2018.
Article in English | MEDLINE | ID: mdl-30210355

ABSTRACT

Purpose: Rotor stability and meandering are key mechanisms determining and sustaining cardiac fibrillation, with important implications for anti-arrhythmic drug development. However, little is yet known on how rotor dynamics are modulated by variability in cellular electrophysiology, particularly on kinetic properties of ion channel recovery. Methods: We propose a novel emulation approach, based on Gaussian process regression augmented with machine learning, for data enrichment, automatic detection, classification, and analysis of re-entrant biomarkers in cardiac tissue. More than 5,000 monodomain simulations of long-lasting arrhythmic episodes with Fenton-Karma ionic dynamics, further enriched by emulation to 80 million electrophysiological scenarios, were conducted to investigate the role of variability in ion channel densities and kinetics in modulating rotor-driven arrhythmic behavior. Results: Our methods predicted the class of excitation behavior with classification accuracy up to 96%, and emulation effectively predicted frequency, stability, and spatial biomarkers of functional re-entry. We demonstrate that the excitation wavelength interpretation of re-entrant behavior hides critical information about rotor persistence and devolution into fibrillation. In particular, whereas action potential duration directly modulates rotor frequency and meandering, critical windows of excitability are identified as the main determinants of breakup. Further novel electrophysiological insights of particular relevance for ventricular arrhythmias arise from our multivariate analysis, including the role of incomplete activation of slow inward currents in mediating tissue rate-dependence and dispersion of repolarization, and the emergence of slow recovery of excitability as a significant promoter of this mechanism of dispersion and increased arrhythmic risk. Conclusions: Our results mechanistically explain pro-arrhythmic effects of class Ic anti-arrhythmics in the ventricles despite their established role in the pharmacological management of atrial fibrillation. This is mediated by their slow recovery of excitability mode of action, promoting incomplete activation of slow inward currents and therefore increased dispersion of repolarization, given the larger influence of these currents in modulating the action potential in the ventricles compared to the atria. These results exemplify the potential of emulation techniques in elucidating novel mechanisms of arrhythmia and further application to cardiac electrophysiology.
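The emulation step amounts to fitting a Gaussian process to a limited set of expensive simulations and then predicting re-entry biomarkers across a much larger parameter space. The sketch below does this for a toy two-parameter "biomarker" function standing in for the monodomain simulations; the kernel choice and parameter ranges are illustrative, and the machine-learning classification layer of the paper is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

def expensive_simulation(tau_slow, tau_fast):
    # Toy stand-in for a monodomain run returning, e.g., rotor dominant frequency.
    return 8.0 - 2.0 * np.log(tau_slow) + 0.5 * np.sqrt(tau_fast)

# A modest design of "expensive" simulations over two ionic parameters.
X_train = rng.uniform([10.0, 0.1], [300.0, 5.0], size=(40, 2))
y_train = np.array([expensive_simulation(*x) for x in X_train])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([50.0, 1.0]),
                              normalize_y=True).fit(X_train, y_train)

# Cheap emulator predictions (with uncertainty) at new parameter settings.
X_new = rng.uniform([10.0, 0.1], [300.0, 5.0], size=(5, 2))
mean, sd = gp.predict(X_new, return_std=True)
print(np.round(mean, 2), np.round(sd, 2))
```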

12.
J R Soc Interface ; 15(144)2018 07.
Article in English | MEDLINE | ID: mdl-30021925

ABSTRACT

Functional response models are important in understanding predator-prey interactions. The development of functional response methodology has progressed from mechanistic models to more statistically motivated models that can account for variance and the over-dispersion commonly seen in the datasets collected from functional response experiments. However, little information seems to be available for those wishing to prepare optimal parameter estimation designs for functional response experiments. It is worth noting that optimally designed experiments may require smaller sample sizes to achieve the same statistical outcomes as non-optimally designed experiments. In this paper, we develop a model-based approach to optimal experimental design for functional response experiments in the presence of parameter uncertainty (also known as a robust optimal design approach). Further, we develop and compare new utility functions which better focus on the statistical efficiency of the designs; these utilities are generally applicable for robust optimal design in other applications (not just in functional response). The methods are illustrated using a beta-binomial functional response model for two published datasets: an experiment involving the freshwater predator Notonecta glauca (an aquatic insect) preying on Asellus aquaticus (a small crustacean), and another experiment involving a ladybird beetle (Propylea quatuordecimpunctata L.) preying on the black bean aphid (Aphis fabae Scopoli). As a by-product, we also derive necessary quantities to perform optimal design for beta-binomial regression models, which may be useful in other applications.
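The model underlying the design work is a beta-binomial functional response: a Holling type II curve for the mean proportion of prey eaten, plus an over-dispersion parameter. The sketch below writes out that log-likelihood; the attack rate, handling time, dispersion value, and data are illustrative, and the optimal-design layer of the paper is not shown.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_binom_coeff(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def loglik(a, h, phi, n_prey, n_eaten, T=1.0):
    """Beta-binomial log-likelihood with a Holling type II mean."""
    p = a * T / (1.0 + a * h * n_prey)          # per-prey risk of being eaten
    alpha, beta = p * phi, (1.0 - p) * phi      # beta-binomial shape parameters
    return np.sum(log_binom_coeff(n_prey, n_eaten)
                  + betaln(n_eaten + alpha, n_prey - n_eaten + beta)
                  - betaln(alpha, beta))

# Illustrative prey densities and numbers eaten (not the published data).
n_prey = np.array([5, 10, 20, 40, 80])
n_eaten = np.array([3, 6, 9, 12, 14])
print(loglik(a=0.8, h=0.05, phi=10.0, n_prey=n_prey, n_eaten=n_eaten))
```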


Assuntos
Insetos/fisiologia , Isópodes/fisiologia , Modelos Biológicos , Comportamento Predatório/fisiologia , Projetos de Pesquisa , Animais
13.
PLoS One ; 13(7): e0198583, 2018.
Article in English | MEDLINE | ID: mdl-30001336

ABSTRACT

Often derived from partial correlations or many pairwise analyses, covariance networks represent the inter-relationships among regions and can reveal important topological structures in brain measures from healthy and pathological subjects. However, neither approach is a consistent network estimator, and both are sensitive to the values of the tuning parameters. Here, we propose a consistent covariance network estimator by maximising the network likelihood (MNL), which is robust to the tuning parameter. We validate the consistency of our algorithm theoretically and via a simulation study, and contrast these results against two well-known approaches: the graphical LASSO (gLASSO) and Pearson pairwise correlations (PPC) over a range of tuning parameters. The MNL algorithm had a specificity equal to or greater than 0.94 for all sample sizes in the simulation study, and the sensitivity was shown to increase as the sample size increased. The gLASSO and PPC demonstrated a specificity-sensitivity trade-off over a range of values of the tuning parameters, highlighting the discrepancy in the results for misspecified values. Application of the MNL algorithm to the case study data showed a loss of connections between healthy and impaired groups, and an improved ability to identify between-lobe connectivity in contrast to gLASSO networks. In this work, we propose the MNL algorithm as an effective approach to find covariance brain networks, which can inform the organisational features in brain-wide analyses, particularly for large sample sizes.
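The tuning-parameter sensitivity of the graphical LASSO that motivates the MNL estimator is easy to see directly. The sketch below fits sklearn's GraphicalLasso to simulated region-level data at a few regularisation values and counts the retained edges; the simulated data and alpha values are illustrative, and the MNL estimator itself is not part of the sketch.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(7)

# Simulated cortical-thickness-like measurements for 8 regions, 120 subjects.
n_subjects, n_rois = 120, 8
true_cov = 0.3 * np.ones((n_rois, n_rois)) + 0.7 * np.eye(n_rois)
X = rng.multivariate_normal(np.zeros(n_rois), true_cov, size=n_subjects)

for alpha in (0.05, 0.2, 0.5):
    prec = GraphicalLasso(alpha=alpha, max_iter=500).fit(X).precision_
    edges = np.sum(np.abs(prec[np.triu_indices(n_rois, k=1)]) > 1e-6)
    print(f"alpha={alpha}: {edges} edges retained")   # edge set shrinks with alpha
```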


Assuntos
Algoritmos , Doença de Alzheimer/diagnóstico por imagem , Encéfalo/diagnóstico por imagem , Disfunção Cognitiva/diagnóstico por imagem , Rede Nervosa/diagnóstico por imagem , Idoso , Idoso de 80 Anos ou mais , Doença de Alzheimer/fisiopatologia , Encéfalo/fisiopatologia , Mapeamento Encefálico , Estudos de Casos e Controles , Disfunção Cognitiva/fisiopatologia , Simulação por Computador , Feminino , Humanos , Funções Verossimilhança , Estudos Longitudinais , Imageamento por Ressonância Magnética , Masculino , Rede Nervosa/fisiopatologia , Neuroimagem/métodos , Neuroimagem/estatística & dados numéricos , Tamanho da Amostra , Sensibilidade e Especificidade
14.
Sci Adv ; 4(1): e1701676, 2018 01.
Article in English | MEDLINE | ID: mdl-29349296

ABSTRACT

The understanding of complex physical or biological systems nearly always requires a characterization of the variability that underpins these processes. In addition, the data used to calibrate these models may also often exhibit considerable variability. A recent approach to deal with these issues has been to calibrate populations of models (POMs), multiple copies of a single mathematical model but with different parameter values, in response to experimental data. To date, this calibration has been largely limited to selecting models that produce outputs that fall within the ranges of the data set, ignoring any trends that might be present in the data. We present here a novel and general methodology for calibrating POMs to the distributions of a set of measured values in a data set. We demonstrate our technique using a data set from a cardiac electrophysiology study based on the differences in atrial action potential readings between patients exhibiting sinus rhythm (SR) or chronic atrial fibrillation (cAF) and the Courtemanche-Ramirez-Nattel model for human atrial action potentials. Not only does our approach accurately capture the variability inherent in the experimental population, but we also demonstrate how the POMs that it produces may be used to extract additional information from the data used for calibration, including improved identification of the differences underlying stratified data. We also show how our approach allows different hypotheses regarding the variability in complex systems to be quantitatively compared.
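One simple way to read the central idea, calibrating a population of models to a measured distribution rather than to a range, is as importance resampling of candidate parameter sets against the data density. The sketch below does that with a toy two-conductance "action potential duration" formula and a kernel density estimate of hypothetical measurements; it is a simplified stand-in for the methodology in the paper, not the published algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(8)

data_apd = rng.normal(300.0, 25.0, size=200)            # hypothetical measured APDs (ms)
candidates = rng.uniform([0.5, 0.5], [2.0, 2.0], size=(20_000, 2))

def toy_model(g_k, g_ca):
    # Toy APD as a function of two conductance scalings (illustrative only).
    return 280.0 / g_k + 60.0 * g_ca

outputs = np.array([toy_model(*c) for c in candidates])

# Weight each candidate by the data density at its output, then resample so the
# retained population reproduces the data distribution, not just its range.
weights = gaussian_kde(data_apd)(outputs)
weights /= weights.sum()
idx = rng.choice(len(candidates), size=500, replace=True, p=weights)

print("calibrated APD mean/sd:",
      outputs[idx].mean().round(1), outputs[idx].std().round(1))
```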


Assuntos
Conjuntos de Dados como Assunto , Fenômenos Eletrofisiológicos , Átrios do Coração/fisiopatologia , Modelos Cardiovasculares , Potenciais de Ação/fisiologia , Fibrilação Atrial/fisiopatologia , Biomarcadores/metabolismo , Calibragem , Seio Coronário/fisiopatologia , Humanos , Método de Monte Carlo , Canais de Sódio/metabolismo
15.
PeerJ ; 6: e4620, 2018.
Article in English | MEDLINE | ID: mdl-29666769

ABSTRACT

BACKGROUND: There is convincing evidence for the benefits of resistance training on vertical jump improvements, but little evidence to guide optimal training prescription. The inability to detect small between-modality effects may partially reflect the use of ANOVA statistics. This study represents the results of a sub-study from a larger project investigating the effects of two resistance training methods on load carriage running energetics. Bayesian statistics were used to compare the effectiveness of isoinertial resistance training against speed-power training to change countermovement jump (CMJ) and squat jump (SJ) height, and joint energetics. METHODS: Active adults were randomly allocated to either a six-week isoinertial (n = 16; calf raises, leg press, and lunge), or a speed-power training program (n = 14; countermovement jumps, hopping, with hip flexor training to target pre-swing running energetics). Primary outcome variables included jump height and joint power. Bayesian mixed modelling and Functional Data Analysis were used, where significance was determined by a non-zero crossing of the 95% Bayesian Credible Interval (CrI). RESULTS: The gain in CMJ height after isoinertial training was 1.95 cm (95% CrI [0.85-3.04] cm) greater than the gain after speed-power training, but the gain in SJ height was similar between groups. In the CMJ, isoinertial training produced a larger increase in power absorption at the hip by a mean 0.018% (equivalent to 35 W) (95% CrI [0.007-0.03]), knee by 0.014% (equivalent to 27 W) (95% CrI [0.006-0.02]) and foot by 0.011% (equivalent to 21 W) (95% CrI [0.005-0.02]) compared to speed-power training. DISCUSSION: Short-term isoinertial training improved CMJ height more than speed-power training. The principal adaptive difference between training modalities was at the level of hip, knee and foot power absorption.

16.
BMJ Open ; 7(2): e012174, 2017 02 07.
Article in English | MEDLINE | ID: mdl-28174220

ABSTRACT

OBJECTIVES: In recent years, large-scale longitudinal neuroimaging studies have improved our understanding of healthy ageing and pathologies including Alzheimer's disease (AD). A particular focus of these studies is group differences and identification of participants at risk of deteriorating to a worse diagnosis. For this, statistical analyses using linear mixed-effects (LME) models are used to account for correlated observations from individuals measured over time. A Bayesian framework for LME models in AD is introduced in this paper to provide additional insight often not found in current LME volumetric analyses. SETTING AND PARTICIPANTS: A longitudinal neuroimaging case study of ageing was analysed in this research, covering 260 participants diagnosed as healthy controls (HC), mild cognitive impairment (MCI) or AD. Bayesian LME models for the ventricle and hippocampus regions were used to: (1) estimate how the volumes of these regions change over time by diagnosis, (2) identify high-risk non-AD individuals with AD-like degeneration and (3) determine probabilistic trajectories of diagnosis groups over age. RESULTS: We observed (1) large differences in the average rate of change of volume for the ventricle and hippocampus regions between diagnosis groups, (2) high-risk individuals who had progressed from HC to MCI and displayed similar rates of deterioration as AD counterparts, and (3) critical time points which indicate where deterioration of regions begins to diverge between the diagnosis groups. CONCLUSIONS: To the best of our knowledge, this is the first application of Bayesian LME models to neuroimaging data which provides inference on a population and individual level in the AD field. The application of a Bayesian LME framework allows for additional information to be extracted from longitudinal studies. This provides health professionals with valuable information of neurodegeneration stages, and a potential to provide a better understanding of disease pathology.
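To make the model structure concrete, the sketch below fits a linear mixed-effects model with subject-specific random intercepts and slopes to simulated hippocampal volumes, using maximum likelihood in statsmodels as a stand-in for the paper's Bayesian LME framework. The diagnosis groups, rates of decline, and noise levels are made-up numbers.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)

# Simulate 60 subjects with 4 annual visits and diagnosis-specific decline.
n_subj, n_visits = 60, 4
subject = np.repeat(np.arange(n_subj), n_visits)
years = np.tile(np.arange(n_visits, dtype=float), n_subj)        # years since baseline
dx = np.repeat(rng.choice(["HC", "MCI", "AD"], n_subj), n_visits)
group_slope = np.array([{"HC": -0.02, "MCI": -0.05, "AD": -0.10}[d] for d in dx])
b0 = np.repeat(rng.normal(7.0, 0.4, n_subj), n_visits)           # random intercepts (cm^3)
b1 = np.repeat(rng.normal(0.0, 0.02, n_subj), n_visits)          # random slopes
volume = b0 + (group_slope + b1) * years + rng.normal(0.0, 0.05, n_subj * n_visits)

df = pd.DataFrame({"subject": subject, "years": years, "dx": dx, "volume": volume})
fit = smf.mixedlm("volume ~ years * dx", df, groups=df["subject"],
                  re_formula="~years").fit()
print(fit.summary())
```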


Assuntos
Doença de Alzheimer/diagnóstico por imagem , Ventrículos Cerebrais/diagnóstico por imagem , Disfunção Cognitiva/diagnóstico por imagem , Envelhecimento Saudável , Hipocampo/diagnóstico por imagem , Idoso , Idoso de 80 Anos ou mais , Teorema de Bayes , Encéfalo/diagnóstico por imagem , Encéfalo/patologia , Estudos de Casos e Controles , Ventrículos Cerebrais/patologia , Feminino , Hipocampo/patologia , Humanos , Estudos Longitudinais , Masculino , Pessoa de Meia-Idade , Doenças Neurodegenerativas/diagnóstico por imagem , Tamanho do Órgão , Fatores de Tempo
17.
PeerJ ; 5: e3438, 2017.
Article in English | MEDLINE | ID: mdl-28626613

ABSTRACT

Seawater temperature anomalies associated with warming climate have been linked to increases in coral disease outbreaks that have contributed to coral reef declines globally. However, little is yet known about how seasonal scale variations in environmental factors influence disease dynamics at the level of individual coral colonies. In this study, we applied a multi-state Markov model (MSM) to investigate the dynamics of black band disease (BBD) developing from apparently healthy corals and/or a precursor-stage, termed 'cyanobacterial patches' (CP), in relation to seasonal variation in light and seawater temperature at two reef sites around Pelorus Island in the central sector of the Great Barrier Reef. The model predicted that the proportion of colonies transitioning from BBD to Healthy states within three months was approximately 57%, but 5.6% of BBD cases resulted in whole colony mortality. According to our modelling, healthy coral colonies were more susceptible to BBD during summer months when light levels were at their maxima and seawater temperatures were either rising or at their maxima. In contrast, CP mostly occurred during spring, when both light and seawater temperatures were rising. This suggests that environmental drivers for healthy coral colonies transitioning into a CP state are different from those driving transitions into BBD. Our model predicts that (1) the transition from healthy to CP state is best explained by increasing light, (2) the transition from Healthy to BBD occurs more frequently from early to late summer, (3) 20% of CP infected corals developed BBD, although light and temperature appeared to have limited impact on this state transition, and (4) the number of transitions from Healthy to BBD differed significantly between the two study sites, potentially reflecting differences in localised wave action regimes.
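At its core, a multi-state Markov model is driven by observed state-to-state transitions between survey visits. The sketch below builds an empirical quarterly transition matrix from a few made-up colony histories over the states Healthy, CP, BBD and Dead; the published model additionally lets the transition intensities depend on light and seawater temperature covariates.

```python
import numpy as np

# Hypothetical colony state sequences, one entry per quarterly survey.
states = ["Healthy", "CP", "BBD", "Dead"]
sequences = [
    ["Healthy", "Healthy", "CP", "BBD", "Healthy"],
    ["Healthy", "CP", "CP", "BBD", "Dead"],
    ["Healthy", "Healthy", "Healthy", "BBD", "Healthy"],
]

# Count observed transitions between consecutive surveys.
counts = np.zeros((4, 4))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[states.index(a), states.index(b)] += 1

# Row-normalise to get empirical quarterly transition probabilities.
row_totals = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_totals, out=np.zeros_like(counts), where=row_totals > 0)
print(np.round(P, 2))          # rows: from-state, columns: to-state
```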

18.
J Phys Act Health ; 13(6 Suppl 1): S35-40, 2016 06.
Article in English | MEDLINE | ID: mdl-27392376

ABSTRACT

BACKGROUND: Published energy cost data for children and adolescents are lacking. The purpose of this study was to measure and describe developmental trends in the energy cost of 12 physical activities commonly performed by youth. METHODS: A mixed age cohort of 209 participants completed 12 standardized activity trials on 4 occasions over a 3-year period (baseline, 12-months, 24-months, and 36-months) while wearing a portable indirect calorimeter. Bayesian hierarchical regression was used to link growth curves from each age cohort into a single curve describing developmental trends in energy cost from age 6 to 18 years. RESULTS: For sedentary and light-intensity household chores, YOUTH METs (METy) remained stable or declined with age. In contrast, METy values associated with brisk walking, running, basketball, and dance increased with age. CONCLUSIONS: The reported energy costs for specific activities will contribute to efforts to update and expand the youth compendium.


Assuntos
Atividades Cotidianas/psicologia , Metabolismo Energético/fisiologia , Criança , Exercício Físico , Feminino , Humanos , Masculino
19.
PLoS One ; 11(4): e0147311, 2016.
Article in English | MEDLINE | ID: mdl-27073897

ABSTRACT

The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
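The probabilistic statements quoted above are posterior tail probabilities computed from MCMC draws. The sketch below reproduces that calculation with draws simulated from a normal approximation; the effect size, spread, and smallest worthwhile change are illustrative numbers, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(10)

# Posterior draws of a treatment effect (here simulated from a normal
# approximation rather than taken from the study's MCMC output).
posterior_effect = rng.normal(loc=3.1, scale=1.2, size=20_000)   # % change in Hb mass
smallest_worthwhile = 1.0                                        # threshold in %

# Probability that the effect exceeds the smallest worthwhile change.
p_substantial = np.mean(posterior_effect > smallest_worthwhile)
print(f"P(substantially beneficial) = {p_substantial:.2f}")
```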


Assuntos
Exercício Físico/fisiologia , Modelos Biológicos , Esportes/fisiologia , Teorema de Bayes , Humanos , Medicina Esportiva
20.
Math Biosci ; 263: 133-42, 2015 May.
Article in English | MEDLINE | ID: mdl-25747415

ABSTRACT

Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2-6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
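The final point, reusing information across experiments, can be sketched in ABC terms: posterior draws of D from the motility-only scenario serve as prior draws when inferring λ from the combined scenario. The leading-edge simulator below is a toy stand-in for the discrete spreading model and summary statistic used in the paper, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def toy_leading_edge(D, lam, t=48.0):
    # Toy stand-in for the leading-edge position summary at time t (hours).
    return 2.0 * np.sqrt(D * t) + 15.0 * lam * t + rng.normal(0.0, 5.0, size=np.shape(D))

# Stand-in for ABC posterior draws of D from scenario (i), motility only.
posterior_D = rng.normal(650.0, 25.0, size=100_000)
prior_lam = rng.uniform(0.0, 0.1, size=100_000)       # vague prior on lambda
observed, tol = 420.0, 10.0                           # hypothetical observed summary

# ABC rejection for scenario (ii), reusing the scenario (i) posterior for D.
sims = toy_leading_edge(posterior_D, prior_lam)
keep = np.abs(sims - observed) < tol
print("lambda estimate:", prior_lam[keep].mean().round(4),
      "CV:", (prior_lam[keep].std() / prior_lam[keep].mean()).round(2))
```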


Assuntos
Teorema de Bayes , Fenômenos Fisiológicos Celulares , Modelos Biológicos , Processos Estocásticos , Incerteza , Células 3T3 , Animais , Fibroblastos , Camundongos