Results 1 - 20 of 43

1.
Philos Trans A Math Phys Eng Sci ; 381(2257): 20230134, 2023 Oct 09.
Article in English | MEDLINE | ID: mdl-37611627

ABSTRACT

The effectiveness of international border control measures during the COVID-19 pandemic is not well understood. Using a narrative synthesis approach to published systematic reviews, we synthesized the evidence from both modelling and observational studies on the effects of border control measures on domestic transmission of the virus. We find that symptomatic screening measures were not particularly effective, but that diagnostic-based screening methods were more effective at identifying infected travellers. Targeted travel restrictions levied against travellers from Wuhan were likely temporarily effective but insufficient to stop the exportation of the virus to the rest of the world. Quarantine of inbound travellers was also likely effective at reducing transmission, but only with relatively long quarantine periods, and came with important economic and social effects. There is little evidence that most travel restrictions, including border closure and those implemented to stop the introduction of new variants of concern, were particularly effective. Border control measures played an important role in former elimination locations but only when coupled with strong domestic public health measures. In future outbreaks, if border control measures are to be adopted, they should be seen as part of a broader strategy that includes other non-pharmaceutical interventions. This article is part of the theme issue 'The effectiveness of non-pharmaceutical interventions on the COVID-19 pandemic: the evidence'.


Subject(s)
COVID-19, Humans, COVID-19/epidemiology, Pandemics/prevention & control, Public Health, Publications, Systematic Reviews as Topic
2.
J Math Biol ; 73(6-7): 1491-1524, 2016 12.
Article in English | MEDLINE | ID: mdl-27072124

ABSTRACT

A common view in evolutionary biology is that mutation rates are minimised. However, studies in combinatorial optimisation and search have shown a clear advantage of using variable mutation rates as a control parameter to optimise the performance of evolutionary algorithms. Much biological theory in this area is based on the work of Ronald Fisher, who used Euclidean geometry to study the relation between mutation size and expected fitness of the offspring in infinite phenotypic spaces. Here we reconsider this theory based on the alternative geometry of discrete and finite spaces of DNA sequences. First, we consider the geometric case of fitness being isomorphic to distance from an optimum, and show how problems of optimal mutation rate control can be solved exactly or approximately depending on additional constraints of the problem. Then we consider the general case of fitness communicating only partial information about the distance. We define weak monotonicity of fitness landscapes and prove that this property holds in all landscapes that are continuous and open at the optimum. This theoretical result motivates our hypothesis that optimal mutation rate functions in such landscapes will increase when fitness decreases in some neighbourhood of an optimum, resembling the control functions derived in the geometric case. We test this hypothesis experimentally by analysing approximately optimal mutation rate control functions in 115 complete landscapes of binding scores between DNA sequences and transcription factors. Our findings support the hypothesis and show that the increase of mutation rate is more rapid in landscapes that are less monotonic (more rugged). We discuss the relevance of these findings to living organisms.
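The core idea of fitness-dependent mutation rate control can be sketched with a toy (1+1) evolutionary algorithm on a bit-string landscape where fitness is distance to the all-ones optimum. The landscape, rate function, and parameters below are illustrative assumptions, not the paper's models:

```python
import random

def evolve(n_bits=50, generations=300, rate_fn=None, seed=0):
    """Elitist (1+1) hill-climb of a bit string toward the all-ones optimum.

    rate_fn maps current fitness (fraction of correct bits) to a per-bit
    mutation probability; None means the fixed rate 1/n_bits.
    """
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        fitness = sum(genome) / n_bits
        p = rate_fn(fitness) if rate_fn else 1.0 / n_bits
        child = [b ^ (rng.random() < p) for b in genome]  # flip each bit w.p. p
        if sum(child) >= sum(genome):                     # keep child if no worse
            genome = child
    return sum(genome) / n_bits

# Adaptive rule (hypothetical): mutate more aggressively far from the
# optimum, falling toward 1/n near it -- the qualitative shape the
# geometric analysis suggests.
adaptive = lambda f: max(1.0 / 50, 0.5 * (1.0 - f))
fixed = evolve()
varied = evolve(rate_fn=adaptive)
```

Because selection is elitist, fitness is non-decreasing in both runs; the interesting comparison is how quickly each rate schedule approaches the optimum.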


Subject(s)
Biological Evolution, Models, Genetic, Mutation Rate, Base Sequence, Humans, Models, Statistical, Selection, Genetic
3.
J Ind Microbiol Biotechnol ; 43(1): 13-23, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26542284

ABSTRACT

Alicyclobacillus acidocaldarius, a thermoacidophilic bacterium, has a repertoire of thermo- and acid-stable enzymes that deconstruct lignocellulosic compounds. The work presented here describes the ability of A. acidocaldarius to reduce the concentration of the phenolic compounds phenol, ferulic acid, p-coumaric acid and sinapinic acid under growth conditions. The extent and rate of the removal of these compounds were significantly increased by the presence of micromolar copper concentrations, suggesting activity by copper oxidases that have been identified in the genome of A. acidocaldarius. Substrate removal kinetics was first order for phenol, ferulic acid, p-coumaric acid and sinapinic acid in the presence of 50 µM copper sulfate. In addition, laccase enzyme assays of cellular protein fractions suggested significant activity on a lignin analog between the temperatures of 45 and 90 °C. This work shows the potential for A. acidocaldarius to degrade phenolic compounds, demonstrating potential relevance to biofuel production and other industrial processes.
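First-order removal means the rate is proportional to the remaining concentration, so C(t) = C0·exp(-kt) and k can be recovered from any two concentration measurements. A small worked sketch with hypothetical numbers (not the paper's measurements):

```python
import math

def first_order_conc(c0, k, t):
    """Concentration after time t under first-order removal: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def rate_constant(c0, ct, t):
    """Recover the first-order rate constant from two measurements: k = ln(C0/Ct) / t."""
    return math.log(c0 / ct) / t

# Hypothetical data (illustrative only): phenol drops from 500 uM to
# 125 uM over 8 h of growth with copper present.
k = rate_constant(500.0, 125.0, 8.0)  # per hour
half_life = math.log(2) / k           # hours; here exactly 4 h
```

With these numbers the substrate halves twice in 8 h, so the half-life works out to exactly 4 h.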


Subject(s)
Alicyclobacillus/metabolism, Lignin/metabolism, Phenols/metabolism, Alicyclobacillus/enzymology, Alicyclobacillus/growth & development, Biofuels, Copper Sulfate/pharmacology, Coumaric Acids/metabolism, Kinetics, Laccase/metabolism, Lignin/chemistry, Oxidoreductases/metabolism, Phenol/metabolism, Temperature
4.
Stat Med ; 34(29): 3901-15, 2015 Dec 20.
Article in English | MEDLINE | ID: mdl-26310288

ABSTRACT

Functional magnetic resonance imaging (fMRI) is a dynamic four-dimensional imaging modality. However, in almost all fMRI analyses, the time series elements of these data are assumed to be second-order stationary. In this paper, we examine, using time series spectral methods, whether such stationarity assumptions can be made and whether estimates of non-stationarity can be used to gain insight into fMRI experiments. A non-stationary version of replicated stationary time series analysis is proposed that takes into account the replicated time series that are available from nearby voxels in a region of interest (ROI). These are used to investigate non-stationarities in both the ROI itself and the variations within the ROI. The proposed techniques are applied to simulated data and to an anxiety-inducing fMRI experiment.
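One way replicated series from nearby voxels help is that their periodograms can be averaged, sharply reducing the variance of the spectral estimate. A minimal stationary sketch of that pooling step (the paper's non-stationary machinery is more involved; the simulated ROI below is an assumption):

```python
import numpy as np

def periodogram(x):
    """Raw periodogram of a demeaned series (power at the Fourier frequencies)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / n

def pooled_periodogram(replicates):
    """Average periodograms across replicated series (e.g. voxels in an ROI).

    Each raw periodogram ordinate has variance comparable to its mean;
    averaging R independent replicates shrinks that variance by ~1/R.
    """
    return np.mean([periodogram(r) for r in replicates], axis=0)

rng = np.random.default_rng(0)
t = np.arange(256)
# Hypothetical ROI: 20 replicated series sharing a 0.05 cycles/sample rhythm.
reps = [np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(256) for _ in range(20)]
spec = pooled_periodogram(reps)
peak = np.argmax(spec)  # index of the dominant Fourier frequency (~0.05 * 256)
```

Comparing pooled periodograms from different time windows of the same ROI would be one crude way to probe for non-stationarity.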


Subject(s)
Anxiety/physiopathology, Brain/physiology, Functional Neuroimaging/methods, Magnetic Resonance Imaging/methods, Spectrum Analysis/methods, Wavelet Analysis, Bias, Brain/blood supply, Brain Chemistry/physiology, Computer Simulation, Humans, Oxygen/blood, Signal Processing, Computer-Assisted, Time Factors
5.
Patterns (N Y) ; 5(6): 101006, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39005485

ABSTRACT

For healthcare datasets, it is often impossible to combine data samples from multiple sites due to ethical, privacy, or logistical concerns. Federated learning allows for the utilization of powerful machine learning algorithms without requiring the pooling of data. Healthcare data have many simultaneous challenges, such as highly siloed data, class imbalance, missing data, distribution shifts, and non-standardized variables, that require new methodologies to address. Federated learning adds significant methodological complexity to conventional centralized machine learning, requiring distributed optimization, communication between nodes, aggregation of models, and redistribution of models. In this systematic review, we consider all papers on Scopus published between January 2015 and February 2023 that describe new federated learning methodologies for addressing challenges with healthcare data. We reviewed 89 papers meeting these criteria. Significant systemic issues were identified throughout the literature, compromising many methodologies reviewed. We give detailed recommendations to help improve methodology development for federated learning in healthcare.
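The aggregation step that distinguishes federated learning from centralized training can be sketched with federated averaging (FedAvg): each site takes gradient steps on data it never shares, and a coordinator averages the resulting models weighted by site size. A toy linear-regression version, with sites, sizes, and learning rate as illustrative assumptions:

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One local gradient step of linear least squares at a single site."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(site_weights, site_sizes):
    """FedAvg aggregation: average site models weighted by local sample count."""
    return np.average(site_weights, axis=0, weights=np.asarray(site_sizes, float))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
# Hypothetical siloed sites: each holds data that never leaves the site.
sites = []
for n in (30, 50, 20):
    X = rng.standard_normal((n, 2))
    sites.append((X, X @ true_w + 0.01 * rng.standard_normal(n)))

w = np.zeros(2)
for _ in range(200):  # communication rounds: local step, then aggregate
    local_models = [local_step(w, X, y) for X, y in sites]
    w = federated_average(local_models, [len(y) for _, y in sites])
```

Only model weights cross site boundaries here; the healthcare-specific challenges the review surveys (class imbalance, missingness, distribution shift) all complicate this basic loop.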

6.
PLoS Comput Biol ; 8(3): e1002401, 2012.
Article in English | MEDLINE | ID: mdl-22396632

ABSTRACT

Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply to compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models.
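The self-organizing state-space idea is to append the unknown parameters to the hidden state and run an ordinary particle filter on the augmented system. A heavily simplified sketch on a toy AR(1) model rather than a Hodgkin-Huxley model; the noise scales, jitter, and parameter prior are all illustrative assumptions:

```python
import numpy as np

def self_organizing_pf(obs, n_part=2000, seed=0):
    """Bootstrap particle filter with the unknown parameter in the state
    (Kitagawa's self-organizing state-space idea), on the toy model
    x[t] = a * x[t-1] + process noise, observed with additive noise."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_part)        # state particles
    a = rng.uniform(0.0, 1.0, n_part)      # parameter particles drawn from the prior
    for y in obs:
        x = a * x + 0.3 * rng.standard_normal(n_part)   # propagate states
        a = a + 0.01 * rng.standard_normal(n_part)      # small jitter keeps a's posterior alive
        w = np.exp(-0.5 * ((y - x) / 0.3) ** 2) + 1e-300  # Gaussian likelihood (guarded)
        w /= w.sum()
        idx = rng.choice(n_part, size=n_part, p=w)      # multinomial resampling
        x, a = x[idx], a[idx]
    return float(a.mean())                  # posterior mean of the parameter

# Simulate observations from a hypothetical "true" parameter a = 0.8.
rng = np.random.default_rng(1)
true_a, x_t, ys = 0.8, 0.0, []
for _ in range(200):
    x_t = true_a * x_t + 0.3 * rng.standard_normal()
    ys.append(x_t + 0.3 * rng.standard_normal())
a_hat = self_organizing_pf(np.array(ys))
```

No cost function is ever written down: the parameter estimate emerges from the filtering distribution, mirroring the property the abstract highlights.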


Subject(s)
Action Potentials/physiology, Algorithms, Models, Neurological, Models, Statistical, Neurons/physiology, Animals, Computer Simulation, Humans
7.
Phys Med Biol ; 68(15)2023 07 19.
Article in English | MEDLINE | ID: mdl-37192631

ABSTRACT

Krylov subspace methods are a powerful family of iterative solvers for linear systems of equations, which are commonly used for inverse problems due to their intrinsic regularization properties. Moreover, these methods are naturally suited to solving large-scale problems, as they only require matrix-vector products with the system matrix (and its adjoint) to compute approximate solutions, and they display very fast convergence. Even though this class of methods has been widely researched and studied in the numerical linear algebra community, its use in applied medical physics and applied engineering is still very limited, e.g. in realistic large-scale computed tomography (CT) problems, and more specifically in cone beam CT (CBCT). This work attempts to bridge this gap by providing a general framework for the most relevant Krylov subspace methods applied to 3D CT problems, including the most well-known Krylov solvers for non-square systems (CGLS, LSQR, LSMR), possibly in combination with Tikhonov regularization, and methods that incorporate total variation regularization. This is provided within an open source framework: the tomographic iterative GPU-based reconstruction toolbox, with the idea of promoting accessibility and reproducibility of the results for the algorithms presented. Finally, numerical results in synthetic and real-world 3D CT applications (medical CBCT and µ-CT datasets) are provided to showcase and compare the different Krylov subspace methods presented in the paper, as well as their suitability for different kinds of problems.
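Of the solvers named, CGLS is the simplest to sketch: conjugate gradients applied to the normal equations, touching the system matrix only through products with A and its adjoint. A dense toy system stands in for the CT projector below (a real CBCT operator would be matrix-free), so this is a sketch of the algorithm, not of the toolbox's implementation:

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-12):
    """Conjugate Gradient for Least Squares: minimizes ||A x - b||_2.

    Only products with A and A.T are needed -- the property that makes
    Krylov solvers attractive for large-scale CT reconstruction.
    """
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    s = A.T @ r                  # residual of the normal equations A.T A x = A.T b
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:   # normal-equations residual small: converged
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Hypothetical over-determined system standing in for a CT forward operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = cgls(A, b, iters=40)
```

For noisy data one would stop early (the iteration count acts as a regularization parameter), which is the intrinsic regularization property the abstract refers to.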


Subject(s)
Spiral Cone-Beam Computed Tomography, Reproducibility of Results, Tomography, X-Ray Computed, Algorithms, Cone-Beam Computed Tomography/methods, Image Processing, Computer-Assisted/methods, Phantoms, Imaging
8.
Bioresour Technol ; 384: 129338, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37343796

ABSTRACT

Pelleting of lignocellulosic biomass to improve its transportation, storage and handling impacts subsequent processing and conversion. This work reports the role of high moisture pelleting in the enzymatic digestibility of corn stover prior to pretreatment, together with associated substrate characteristics. Pelleting increases the digestibility of unpretreated corn stover, from 8.2 to 15.5% glucan conversion, at 5% solid loading using 1 FPU Cellic® CTec2 per g solids. Compositional analysis indicates that loose and pelleted corn stover have similar non-dissolvable compositions, although their extractives are different. Enzymatic hydrolysis of corn stover after size reduction to normalize particle sizes and removal of extractives confirms that pelleting improves corn stover digestibility. Such differences may be explained by the decreased particle size, improved substrate accessibility, and hydrolysis of cross-linking structures induced by pelleting. These findings are useful for the development of processing schemes for sustainable and efficient use of lignocellulose.


Subject(s)
Cellulase, Zea mays, Zea mays/chemistry, Cellulase/chemistry, Hydrolysis, Biomass
9.
Front Immunol ; 14: 1228812, 2023.
Article in English | MEDLINE | ID: mdl-37818359

ABSTRACT

Background: Pneumonitis is one of the most common adverse events induced by the use of immune checkpoint inhibitors (ICI), accounting for 20% of all ICI-associated deaths. Despite numerous efforts to identify risk factors and develop predictive models, there is no clinically deployed risk prediction model for patient risk stratification or for guiding subsequent monitoring. We believe this is due to systemically suboptimal approaches in study designs and methodologies in the literature. The nature and prevalence of different methodological approaches have not been thoroughly examined in prior systematic reviews. Methods: The PubMed, medRxiv and bioRxiv databases were used to identify studies that aimed at risk factor discovery and/or risk prediction model development for ICI-induced pneumonitis (ICI pneumonitis). Studies were then analysed to identify common methodological pitfalls and their contribution to the risk of bias, assessed using the QUIPS and PROBAST tools. Results: There were 51 manuscripts eligible for the review, with Japan-based studies over-represented, making up nearly half (24/51) of all papers considered. Only 2/51 studies had a low risk of bias overall. Common bias-inducing practices included unclear diagnostic method or potential misdiagnosis, lack of multiple testing correction, the use of univariate analysis for selecting features for multivariable analysis, discretization of continuous variables, and inappropriate handling of missing values. Results from the risk model development studies were also likely to have been overoptimistic due to lack of holdout sets. Conclusions: Studies with low risk of bias in their methodology are lacking in the existing literature. High-quality risk factor identification and risk model development studies are urgently required by the community to give the best chance of them progressing into a clinically deployable risk prediction model. Recommendations and alternative approaches for reducing the risk of bias are also discussed to guide future studies.


Subject(s)
Pneumonia, Humans, Japan, Pneumonia/diagnosis, Pneumonia/chemically induced, Risk Factors, Systematic Reviews as Topic
10.
Sci Data ; 10(1): 493, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37500661

ABSTRACT

The National COVID-19 Chest Imaging Database (NCCID) is a centralized UK database of thoracic imaging and corresponding clinical data. It is made available by the National Health Service Artificial Intelligence (NHS AI) Lab to support the development of machine learning tools focused on Coronavirus Disease 2019 (COVID-19). A bespoke cleaning pipeline for NCCID, developed by the NHSx, was introduced in 2021. We present an extension to the original cleaning pipeline for the clinical data of the database. It has been adjusted to correct additional systematic inconsistencies in the raw data such as patient sex, oxygen levels and date values. The most important changes will be discussed in this paper, whilst the code and further explanations are made publicly available on GitLab. The suggested cleaning will allow global users to work with more consistent data for the development of machine learning tools without needing to be experts in the raw data. In addition, it highlights some of the challenges when working with clinical multi-center data and includes recommendations for similar future initiatives.
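Cleaning steps of this kind can be sketched as per-record normalizers. The field names, sex codes, and rules below are illustrative assumptions, not the NCCID pipeline's actual logic (which lives on GitLab):

```python
from datetime import datetime

# Hypothetical mapping of inconsistent sex codes to a single convention.
SEX_MAP = {"m": "M", "male": "M", "f": "F", "female": "F"}

def clean_record(rec):
    """Normalize one clinical record: harmonize sex codes, range-check
    oxygen saturation, and parse mixed date formats to ISO 8601."""
    out = dict(rec)
    out["sex"] = SEX_MAP.get(str(rec.get("sex", "")).strip().lower())
    try:
        spo2 = float(rec.get("spo2"))
        out["spo2"] = spo2 if 0 < spo2 <= 100 else None  # % saturation must be plausible
    except (TypeError, ValueError):
        out["spo2"] = None                               # unparseable -> missing
    out["swab_date"] = None
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):                 # accept two common formats
        try:
            out["swab_date"] = datetime.strptime(
                str(rec.get("swab_date")), fmt).date().isoformat()
            break
        except ValueError:
            continue
    return out

cleaned = clean_record({"sex": "Male", "spo2": "97", "swab_date": "21/03/2021"})
```

Coercing everything to a known-good value or an explicit missing marker, rather than passing bad values through, is the design choice that lets downstream users trust the cleaned fields.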


Subject(s)
COVID-19, Thorax, Humans, Artificial Intelligence, Machine Learning, State Medicine, Radiography, Thoracic, Thorax/diagnostic imaging
11.
Commun Med (Lond) ; 3(1): 139, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803172

ABSTRACT

BACKGROUND: Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but is non-trivial. Missing data is found in most real-world datasets and these missing values are typically imputed using established methods, followed by classification of the now complete samples. The focus of the machine learning researcher is to optimise the classifier's performance. METHODS: We utilise three simulated and three real-world clinical datasets with different feature types and missingness patterns. Initially, we evaluate how the downstream classifier performance depends on the choice of classifier and imputation methods. We employ ANOVA to quantitatively evaluate how the choice of missingness rate, imputation method, and classifier method influences the performance. Additionally, we compare commonly used methods for assessing imputation quality and introduce a class of discrepancy scores based on the sliced Wasserstein distance. We also assess the stability of the imputations and the interpretability of models built on the imputed data. RESULTS: The performance of the classifier is most affected by the percentage of missingness in the test data, with a considerable performance decline observed as the test missingness rate increases. We also show that the commonly used measures for assessing imputation quality tend to lead to imputed data which poorly matches the underlying data distribution, whereas our new class of discrepancy scores performs much better on this measure. Furthermore, we show that the interpretability of classifier models trained using poorly imputed data is compromised. CONCLUSIONS: It is imperative to consider the quality of the imputation when performing downstream classification as the effects on the classifier can be considerable.


Many artificial intelligence (AI) methods aim to classify samples of data into groups, e.g., patients with disease vs. those without. This often requires datasets to be complete, i.e., that all data has been collected for all samples. However, in clinical practice this is often not the case and some data can be missing. One solution is to 'complete' the dataset using a technique called imputation to replace those missing values. However, assessing how well the imputation method performs is challenging. In this work, we demonstrate why people should care about imputation, develop a new method for assessing imputation quality, and demonstrate that if we build AI models on poorly imputed data, the model can give different results to those we would hope for. Our findings may improve the utility and quality of AI models in the clinic.
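A generic sliced Wasserstein discrepancy between observed and imputed samples can be sketched as below (the paper's exact scores may differ). Each random direction reduces the d-dimensional comparison to a one-dimensional optimal transport problem, solved by sorting; the toy example shows it penalizing mean imputation, which collapses the data's spread:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Monte-Carlo sliced Wasserstein-2 distance between two equal-size
    point clouds: average 1-D W2 over random projection directions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)            # random unit direction
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)          # 1-D W2^2 via sorted matching
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
true_data = rng.standard_normal((500, 3))
good_imputation = rng.standard_normal((500, 3))   # matches the true distribution
bad_imputation = np.zeros((500, 3))               # mean imputation: spread collapsed
d_good = sliced_wasserstein(true_data, good_imputation)
d_bad = sliced_wasserstein(true_data, bad_imputation)
```

Pointwise error measures can rank the zero-filled imputation well (it minimizes squared error to the mean), while the distributional score above correctly flags it as a poor match.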

12.
Stat Med ; 31(3): 253-68, 2012 Feb 10.
Article in English | MEDLINE | ID: mdl-22170084

ABSTRACT

In multivariate clinical trials, a key research endpoint is ascertaining whether a candidate treatment is more efficacious than an established alternative. This global endpoint is clearly of high practical value for studies, such as those arising from neuroimaging, where the outcome dimensions are not only numerous but they are also highly correlated and the available sample sizes are typically small. In this paper, we develop a two-stage procedure testing the null hypothesis of global equivalence between treatment effects and demonstrate its application to analysing phase II neuroimaging trials. Prior information, such as suitable statistics of historical data or suitably elicited expert clinical opinions, is combined with data collected from the first stage of the trial to learn a set of optimal weights. We apply these weights to the outcome dimensions of the second-stage responses to form linear combination z and t test statistics while controlling the test's false positive rate. We show that the proposed tests hold desirable asymptotic properties and characterise their power functions under wide conditions. In particular, by comparing the power of the proposed tests with that of Hotelling's T(2), we demonstrate their advantages when sample sizes are close to the dimension of the multivariate outcome. We apply our methods to fMRI studies, where we find that, for sufficiently precise first stage estimates of the treatment effect, standard single-stage testing procedures are outperformed.
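The comparison at stake can be sketched numerically: Hotelling's T² must estimate and invert a full covariance matrix, which is what degrades when sample size approaches the outcome dimension, whereas projecting each multivariate outcome onto a weight vector reduces the problem to a scalar t test. The data, effect, and equal weights below are illustrative assumptions, not the paper's two-stage weight-learning procedure:

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling T^2 statistic for equality of mean vectors."""
    n1, n2 = len(X), len(Y)
    d = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
    return (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)

def linear_combination_t(X, Y, w):
    """Project each multivariate outcome onto weights w, then run an
    ordinary two-sample pooled t statistic on the resulting scalars."""
    x, y = X @ w, Y @ w
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

rng = np.random.default_rng(0)
p = 5
effect = np.full(p, 0.8)                     # hypothetical uniform treatment effect
X = rng.standard_normal((30, p)) + effect    # treatment arm
Y = rng.standard_normal((30, p))             # control arm
t2 = hotelling_t2(X, Y)
# Equal weights stand in for the elicited/learned weights of the two-stage design.
t = linear_combination_t(X, Y, np.ones(p) / p)
```

When prior information points the weights at the true effect direction, the one-dimensional test spends no degrees of freedom on covariance estimation, which is the advantage the paper formalizes.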


Subject(s)
Clinical Trials as Topic/statistics & numerical data, Magnetic Resonance Imaging/methods, Multivariate Analysis, Neuroimaging/statistics & numerical data, Brain, Humans, Research Design/statistics & numerical data, Sample Size
13.
J Acoust Soc Am ; 131(6): 4651-64, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22712938

ABSTRACT

A model for fundamental frequency (F0, or commonly pitch) employing a functional principal component (FPC) analysis framework is presented. The model is applied to Mandarin Chinese; this Sino-Tibetan language is rich in pitch-related information as the relative pitch curve is specified for most syllables in the lexicon. The approach yields a quantification of the influence carried by each identified component in relation to original tonal content, without formulating any assumptions on the shape of the tonal components. The original five-speaker corpus is preprocessed using a locally weighted least squares smoother to produce F0 curves. These smoothed curves are then utilized as input for the computation of FPC scores and their corresponding eigenfunctions. These scores are analyzed in a series of penalized mixed effect models, through which meaningful categorical prototypes are built. The prototypes appear to confirm known tonal characteristics of the language, as well as suggest the presence of a sinusoid tonal component that was previously undocumented.
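Once the curves are smoothed onto a common time grid, the FPC decomposition itself reduces to an SVD of the mean-centred curve matrix. A synthetic sketch (the corpus and the falling/sinusoid components below are invented stand-ins, not the Mandarin data):

```python
import numpy as np

def fpc_decompose(curves, n_comp=2):
    """Functional principal components of sampled curves via SVD.

    Rows are smoothed F0 curves on a common grid; returns the mean curve,
    the first n_comp eigenfunctions, and the per-curve FPC scores.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfunctions = Vt[:n_comp]            # orthonormal shape components
    scores = centered @ eigenfunctions.T    # how much of each component per curve
    return mean, eigenfunctions, scores

# Hypothetical corpus: 100 "syllable" curves = flat mean contour plus random
# amounts of a falling component and a sinusoid component (cf. the paper's
# suggested sinusoid tonal component).
grid = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
fall = -(grid - 0.5)
wave = np.sin(2 * np.pi * grid)
coefs = rng.standard_normal((100, 2)) * np.array([3.0, 1.0])
curves = 200 + coefs[:, :1] * fall + coefs[:, 1:] * wave   # Hz, say
mean, efuns, scores = fpc_decompose(curves)
```

The scores, one pair per curve, are exactly the low-dimensional summaries that the paper then feeds into penalized mixed effect models.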


Subject(s)
Phonetics, Speech/physiology, China, Female, Humans, Language, Male, Models, Theoretical, Pitch Perception, Speech Acoustics
14.
Bioresour Technol ; 363: 127999, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36152978

ABSTRACT

Liquefaction of high solid loadings of unpretreated corn stover pellets has been demonstrated, with the rheology of the resulting slurries enabling mixing and movement within biorefinery bioreactors. However, some forms of pelleted stover do not readily liquefy, so it is important to screen out unsuitable pellet lots before processing is initiated. This work reports a laboratory assay that rapidly assesses whether pellets have the potential for enzyme-based liquefaction at high solids loadings. Twenty-eight pelleted corn stover samples (harvested at the same time and location) were analyzed using 20 mL enzyme solutions (3 FPU cellulase/g biomass) at 30 % w/v solids loading. Imaging together with measurement of reducing sugars was performed over 24 hours. Some samples formed concentrated slurries of 300 mg/mL (dry basis) in the small-scale assay, which was later confirmed in an agitated bioreactor. Also, the laboratory assay showed potential for optimizing enzyme formulations that could be employed for slurry formation.


Subject(s)
Cellulase, Zea mays, Bioreactors, Hydrolysis, Sugars
15.
Bioresour Technol ; 341: 125773, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34419879

ABSTRACT

The movement of solid material into and between unit operations within a biorefinery is a bottleneck in reaching design capacity, with formation of biomass slurries needed to introduce feedstock. Corn stover slurries have been achieved from dilute-acid pretreated materials, resulting in slurry concentrations of up to about 150 g/L, above which flowability is compromised. We report a new strategy to liquefy corn stover at higher solids concentration (300 g/L) by initially cooking it with the enzyme mimetic maleic acid at 40 mM and 150 °C. This is followed by 6 h of enzymatic modification at 1 FPU (2.2 mg protein)/g solids, resulting in a yield stress of 171 Pa after 6 h and 58 Pa after 48 h, compared to 6806 Pa for untreated stover. Mimetic treatment of corn stover pellets minimizes the inhibitory effect of xylo-oligomers on hydrolytic enzymes. This strategy allows for the delivery of solid lignocellulosic slurry into a pretreatment reactor by pumping, improving operability of a biorefinery.


Subject(s)
Acids, Zea mays, Biomass, Hydrolysis
16.
Phonetica ; 67(1-2): 82-99, 2010.
Article in English | MEDLINE | ID: mdl-20798571

ABSTRACT

While both human and linguistic factors affect fundamental frequency (F0) in spoken language, capturing the influence of multiple effects and their interactions presents special challenges, especially when there are strict time constraints on the data-gathering process. A lack of speaker literacy can further impede the collection of identical utterances across multiple speakers. This study employs linear mixed effects analysis to elucidate how various effects and their interactions contribute to the production of F0 in Luobuzhai, a tonal dialect of the Qiang language. In addition to the effects of speaker sex and tone, F0 in this language is affected by previous and following tones, sentence type, vowel, position in the phrase, and by numerous combinations of these effects. Under less than ideal data collecting conditions, a single experiment was able to yield an extensive model of F0 output in an endangered language of the Himalayas.


Subject(s)
Language, Phonation, Phonetics, Psycholinguistics, Speech Acoustics, Verbal Behavior, Adult, China, Female, Humans, Male, Middle Aged, Semantics, Sex Factors, Sound Spectrography, Speech Production Measurement
17.
Neuroimage ; 47(1): 184-93, 2009 Aug 01.
Article in English | MEDLINE | ID: mdl-19344774

ABSTRACT

A functional smoothing approach to the analysis of PET time course data is presented. By borrowing information across space and accounting for this pooling through the use of a nonparametric covariate adjustment, it is possible to smooth the PET time course data, thus reducing the noise. A new model for functional data analysis, the Multiplicative Nonparametric Random Effects Model, is introduced to more accurately account for the variation in the data. A locally adaptive bandwidth choice helps to determine the correct amount of smoothing at each time point. This preprocessing step to smooth the data then allows subsequent analysis by methods such as Spectral Analysis to be substantially improved in terms of mean squared error.


Subject(s)
Positron-Emission Tomography/methods, Principal Component Analysis, Signal Processing, Computer-Assisted, Algorithms, Brain/physiology, Carbon Radioisotopes, Computer Simulation, Diprenorphine, Humans, Models, Biological, Phantoms, Imaging, Statistics, Nonparametric, Time Factors
18.
Environ Toxicol Chem ; 28(2): 279-86, 2009 Feb.
Article in English | MEDLINE | ID: mdl-18803441

ABSTRACT

Acidithiobacillus caldus is a thermophilic acidophile found in commercial biomining, acid mine drainage systems, and natural environments. Previous work has characterized A. caldus as a chemolithotrophic autotroph capable of utilizing reduced sulfur compounds under aerobic conditions. Organic acids are especially toxic to chemolithotrophs in low-pH environments, where they diffuse more readily into the cell and deprotonate within the cytoplasm. In the present study, the toxic effects of oxaloacetate, pyruvate, 2-ketoglutarate, acetate, malate, succinate, and fumarate on A. caldus strain BC13 were examined under batch conditions. All tested organic acids exhibited some inhibitory effect. Oxaloacetate was observed to inhibit growth completely at a concentration of 250 microM, whereas other organic acids were completely inhibitory at concentrations between 1,000 and 5,000 microM. In these experiments, the measured concentrations of organic acids decreased with time, indicating uptake or assimilation by the cells. Phospholipid fatty acid analyses indicated an effect of organic acids on the cellular envelope. Notable differences included an increase in cyclic fatty acids in the presence of organic acids, indicating possible instability of the cellular envelope. This was supported by field emission scanning-electron micrographs showing blebbing and sloughing in cells grown in the presence of organic acids.


Subject(s)
Acidithiobacillus/drug effects, Acids/toxicity, Organic Chemicals/toxicity, Acidithiobacillus/growth & development
19.
Nat Commun ; 10(1): 1220, 2019 12 25.
Article in English | MEDLINE | ID: mdl-30899012

ABSTRACT

Given the recent controversies in some neuroimaging statistical methods, we compare the most frequently used functional Magnetic Resonance Imaging (fMRI) analysis packages: AFNI, FSL and SPM, with regard to temporal autocorrelation modeling. This process, sometimes known as pre-whitening, is conducted in virtually all task fMRI studies. Here, we employ eleven datasets containing 980 scans corresponding to different fMRI protocols and subject populations. We found that autocorrelation modeling in AFNI, although imperfect, performed much better than the autocorrelation modeling of FSL and SPM. The presence of residual autocorrelated noise in FSL and SPM leads to heavily confounded first-level results, particularly for low-frequency experimental designs. SPM's alternative pre-whitening method, FAST, performed better than SPM's default. The reliability of task fMRI studies could be improved with more accurate autocorrelation modeling. We recommend that fMRI analysis packages provide diagnostic plots to make users aware of any pre-whitening problems.
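Pre-whitening can be sketched in its simplest form as a single AR(1) fit per time series: estimate the lag-1 residual autocorrelation from an initial OLS fit, then difference it out of both the data and the design. This is a minimal Cochrane-Orcutt-style sketch, not any package's actual implementation (real packages pool and regularize the AR estimates spatially, which is much of what the paper evaluates):

```python
import numpy as np

def ar1_prewhiten(y, X):
    """Fit OLS, estimate lag-1 residual autocorrelation rho, then transform
    y and X so the remaining noise is approximately white."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ beta
    rho = (r[1:] @ r[:-1]) / (r @ r)      # lag-1 autocorrelation of residuals
    yw = y[1:] - rho * y[:-1]             # quasi-difference the data...
    Xw = X[1:] - rho * X[:-1]             # ...and the design identically
    return yw, Xw, rho

# Simulate one "voxel": a slow sinusoidal task regressor plus AR(1) noise.
rng = np.random.default_rng(0)
n = 500
design = np.column_stack([np.ones(n), np.sin(2 * np.pi * np.arange(n) / 40)])
noise = np.zeros(n)
for i in range(1, n):                      # AR(1) noise with true rho = 0.6
    noise[i] = 0.6 * noise[i - 1] + rng.standard_normal()
y = design @ np.array([1.0, 2.0]) + noise
yw, Xw, rho_hat = ar1_prewhiten(y, design)
```

If rho is underestimated, residual autocorrelation survives the transform and inflates first-level test statistics, which is the failure mode the paper documents for low-frequency designs.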


Subject(s)
Brain/diagnostic imaging, Functional Neuroimaging/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Artifacts, Computer Simulation, Datasets as Topic, Humans, Linear Models, Reproducibility of Results
20.
Nat Commun ; 10(1): 1511, 2019 03 29.
Article in English | MEDLINE | ID: mdl-30926806

ABSTRACT

The original HTML version of this Article had an incorrect Published online date of 25 December 2019; it should have been 21 March 2019. This has been corrected in the HTML version of the Article. The PDF version was correct from the time of publication.
