Results 1 - 20 of 34
1.
Cancers (Basel) ; 16(8)2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38672572

ABSTRACT

Breast cancer is the leading cause of cancer-related mortality among women in Germany and worldwide. This retrospective claims data analysis utilizing data from AOK Baden-Wuerttemberg, a major statutory German health insurance provider, aimed to construct and assess a real-world data breast cancer disease model. The study included 27,869 female breast cancer patients and 55,738 age-matched controls, analyzing data from 2010 to 2020. Three distinct breast cancer stages were analyzed: Stage A (early breast cancer without lymph node involvement), Stage B (early breast cancer with lymph node involvement), and Stage C (primary distant metastatic breast cancer). Tumor subtypes were estimated based on the prescription of antihormonal or HER2-targeted therapy. The study established that 77.9% of patients had HR+ breast cancer and 9.8% HER2+; HR+/HER2- was the most common subtype (70.9%). Overall survival (OS) analysis demonstrated significantly lower survival rates for stages B and C than for controls, with 5-year OS rates ranging from 79.3% for stage B to 35.4% for stage C. OS rates were further stratified by tumor subtype and stage, revealing varying prognoses. Distant recurrence-free survival (DRFS) analysis showed higher recurrence rates in stage B than in stage A, with HR-/HER2- displaying the worst DRFS. This study, the first to model breast cancer subtypes, stages, and outcomes using German claims data, provides valuable insights into real-world breast cancer epidemiology and demonstrates that this breast cancer disease model has the potential to be representative of treatment outcomes.

3.
Trends Hear ; 27: 23312165231211437, 2023.
Article in English | MEDLINE | ID: mdl-37990543

ABSTRACT

Preference for noise reduction (NR) strength differs between individuals. The purpose of this study was (1) to investigate whether hearing loss influences this preference, (2) to find the number of distinct settings required to classify participants into similar groups based on their preference for NR strength, and (3) to estimate the number of paired comparisons needed to predict to which preference group a participant belongs. A paired comparison paradigm was used in which participants listened to pairs of speech-in-noise stimuli processed by NR with 10 different strength settings. Participants indicated their preferred sound sample. The 30 participants were divided into three groups according to hearing status (normal hearing, mild hearing loss, and moderate hearing loss). The results showed that (1) participants with moderate hearing loss preferred stronger NR than participants with normal hearing; (2) cluster analysis based solely on the preference for NR strength showed that the data could be described well by dividing the participants into three preference clusters; (3) the appropriate cluster membership could be found with 15 paired comparisons. We conclude that, on average, a higher degree of hearing loss is related to a preference for stronger NR, at least for our NR algorithm and our participants. The results show that it might be possible to use a limited set of pre-set NR strengths that can be chosen clinically. For our NR, one might use three settings: no NR, intermediate NR, and strong NR. Paired comparisons might be used to find the optimal one of the three settings.


Subject(s)
Deafness; Hearing Aids; Hearing Loss, Sensorineural; Hearing Loss; Speech Perception; Humans; Hearing Loss, Sensorineural/diagnosis; Hearing Loss/diagnosis; Hearing
4.
Cell Rep Methods ; 3(8): 100560, 2023 08 28.
Article in English | MEDLINE | ID: mdl-37671023

ABSTRACT

In protein design, the energy associated with a huge number of sequence-conformer perturbations has to be routinely estimated. Hence, enhancing the throughput and accuracy of these energy calculations can profoundly improve design success rates and enable tackling more complex design problems. In this work, we explore the possibility of tensorizing the energy calculations and apply them in a protein design framework. We use this framework to design enhanced proteins with anti-cancer and radio-tracing functions. Particularly, we designed multispecific binders against ligands of the epidermal growth factor receptor (EGFR), where the tested design could inhibit EGFR activity in vitro and in vivo. We also used this method to design high-affinity Cu2+ binders that were stable in serum and could be readily loaded with copper-64 radionuclide. The resulting molecules show superior functional properties for their respective applications and demonstrate the generalizable potential of the described protein design approach.


Subject(s)
Copper Radioisotopes; ErbB Receptors; Eye, Artificial; Orthotic Devices; Phosphorylation
5.
Eur J Cancer ; 188: 111-121, 2023 07.
Article in English | MEDLINE | ID: mdl-37229835

ABSTRACT

BACKGROUND: Assessments of health-related quality of life (HRQoL) play an important role in the transition to palliative care for women with metastatic breast cancer. We developed machine learning (ML) algorithms to analyse longitudinal HRQoL data and identify patients who may benefit from palliative care due to disease progression. METHODS: We recruited patients from two institutions and administered the EuroQoL Visual Analog Scale (EQ-VAS) via an online platform over a 6-month period. We trained a regularised regression algorithm using 10-fold cross-validation to determine whether a patient was at high or low risk of disease progression based on changes in the EQ-VAS scores, using data from one institution for training and validating performance on data from the other institution. Progression-free survival (PFS) was the end-point. We conducted Kaplan-Meier and Cox regression analyses adjusted for clinical risk factors. RESULTS: Of 179 patients, 98 (54.7%) had progressive disease after a median follow-up of 14 weeks. Using EQ-VAS scores collected at weeks 1-6 to predict disease progression at week 12, PFS in the validation set (n = 63) was significantly lower in the intelligent EQ-VAS high-risk group than in the low-risk group (median PFS 7 versus 10 weeks; log-rank P < 0.038). Intelligent EQ-VAS had the strongest association with PFS (adjusted hazard ratio 2.69, 95% confidence interval 1.17-6.18, P = 0.02). CONCLUSION: ML algorithms can analyse changes in longitudinal HRQoL data to identify patients with disease progression earlier than standard follow-up methods. Intelligent EQ-VAS scores were identified as an independent prognostic factor. Future studies may validate these results to remotely monitor patients.
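The abstract does not specify the regulariser or feature construction, but the core idea (a regularised classifier over short windows of HRQoL score changes) can be sketched as a toy L2-regularised logistic regression fitted by gradient descent. All data, features, and hyperparameters below are invented for illustration.

```python
import math

def train_l2_logistic(X, y, lam=0.1, lr=0.05, epochs=2000):
    """Tiny L2-regularised logistic regression fitted by batch gradient descent.
    X: feature vectors (here, hypothetical weekly EQ-VAS changes), y: 0/1 labels."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi                      # gradient of the log-loss
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        # L2 penalty shrinks the weights (the "regularised" part)
        w = [wj - lr * (gwj / n + lam * wj) for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_risk(w, b, x):
    """Predicted probability of disease progression (high risk if > 0.5)."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, x)) + b)))

# Hypothetical weekly EQ-VAS changes (weeks 1-6); declining scores -> progression (1).
X = [[-5, -4, -6, -3, -5, -4], [1, 2, 0, 1, 2, 1],
     [-2, -3, -4, -2, -3, -2], [0, 1, 1, 0, 2, 1]]
y = [1, 0, 1, 0]
w, b = train_l2_logistic(X, y)
print(predict_risk(w, b, [-4, -5, -3, -4, -4, -5]) > 0.5)  # declining trajectory -> high risk
```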


Subject(s)
Breast Neoplasms; Quality of Life; Humans; Female; Retrospective Studies; Breast Neoplasms/therapy; Breast Neoplasms/pathology; Disease Progression; Patient Reported Outcome Measures; Surveys and Questionnaires
6.
Sci Total Environ ; 869: 161740, 2023 Apr 15.
Article in English | MEDLINE | ID: mdl-36708843

ABSTRACT

Conventional Environmental Risk Assessment (ERA) of pesticide pollution is based on soil concentrations and apical endpoints, such as the reproduction of test organisms, but has traditionally disregarded information along the organismal response cascade leading to an adverse outcome. The Adverse Outcome Pathway (AOP) framework includes response information at any level of biological organization, providing opportunities to use intermediate responses as a predictive read-out for adverse outcomes instead. Transcriptomic and proteomic data can provide thousands of data points on the response to toxic exposure. Combining multiple omics data types is necessary for a comprehensive overview of the response cascade and, therefore, AOP development. However, it is unclear if transcript and protein responses are synchronized in time or time lagged. To understand if analysis of multi-omics data obtained at the same timepoint reveal one synchronized response cascade, we studied time-resolved shifts in gene transcript and protein abundance in the springtail Folsomia candida, a soil ecotoxicological model, after exposure to the neonicotinoid insecticide imidacloprid. We analyzed transcriptome and proteome data every 12 h up to 72 h after onset of exposure. The most pronounced shift in both transcript and protein abundances was observed after 48 h exposure. Moreover, cross-correlation analyses indicate that most genes displayed the highest correlation between transcript and protein abundances without a time-lag. This demonstrates that a combined analysis of transcriptomic and proteomic data from the same time-point can be used for AOP improvement. This data will promote the development of biomarkers for the presence of neonicotinoid insecticides or chemicals with a similar mechanism of action in soils.
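The cross-correlation analysis described above (checking whether protein abundance trails transcript abundance by a time lag) can be sketched with a simple lagged Pearson correlation. The abundance values below are invented for illustration; the study's actual data are omics measurements taken every 12 h.

```python
def best_lag(transcript, protein, max_lag=3):
    """Return the lag (in sampling steps) at which the Pearson correlation
    between a transcript and a protein time series is highest, plus all scores.
    A positive lag means the protein series trails the transcript series."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb)
    scores = {}
    for lag in range(max_lag + 1):
        t = transcript[: len(transcript) - lag] if lag else transcript
        scores[lag] = pearson(t, protein[lag:])
    return max(scores, key=scores.get), scores

# Hypothetical abundances sampled every 12 h (12, 24, ..., 72 h after exposure).
transcript     = [1.0, 1.4, 2.0, 3.1, 2.2, 1.5]
protein_synced = [0.9, 1.5, 2.1, 3.0, 2.3, 1.4]   # peaks together at 48 h
lag, _ = best_lag(transcript, protein_synced)
print(lag)  # 0: transcript and protein shift without a time lag
```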


Subject(s)
Adverse Outcome Pathways; Insecticides; Ecotoxicology; Transcriptome; Proteomics; Neonicotinoids; Insecticides/toxicity; Soil
7.
Sleep Med ; 98: 9-12, 2022 10.
Article in English | MEDLINE | ID: mdl-35764010

ABSTRACT

OBJECTIVE: We have used an obstructive apnea index of ≥3 as a treatment indication for infants with Robin sequence (RS), while the obstructive apnea-hypopnea index (OAHI) with a threshold of ≥5 is often used internationally. We wanted to know whether these two approaches result in similar treatment indications, and what the interobserver variability is with either assessment. METHODS: Twenty lab-based overnight sleep recordings from infants with isolated RS (median age: 7 days, range 2-38) were scored based on the 2020 American Academy of Sleep Medicine guidelines, including or excluding obstructive hypopneas. RESULTS: The median obstructive apnea index (OAI) was 18 (interquartile range: 7.6-38) when including only apneas, and 35 (18-54) when obstructive hypopneas were also counted as respiratory events (OAHI). Obstructive sleep apnea (OSA) severity was re-classified from moderate to severe for two infants when obstructive hypopneas were also considered, but this did not lead to a change in clinical treatment decisions for either infant. Median interobserver agreement was 0.86 (95% CI 0.70-0.94) for the OAI, and 0.60 (0.05-0.84) for the OAHI. CONCLUSION: Inclusion of obstructive hypopneas when assessing OSA severity in RS infants doubled the obstructive event rate, but impaired interobserver agreement and would not have changed clinical management.
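The two indices compared in this abstract are both simple event rates per hour of sleep; the only difference is whether hypopneas are counted. A minimal sketch (the event counts below are invented, chosen so the output matches the roughly doubled median rates reported):

```python
def apnea_indices(n_obstructive_apneas, n_obstructive_hypopneas, total_sleep_hours):
    """Obstructive apnea index (OAI): obstructive apneas per hour of sleep.
    Obstructive apnea-hypopnea index (OAHI): apneas plus hypopneas per hour."""
    oai = n_obstructive_apneas / total_sleep_hours
    oahi = (n_obstructive_apneas + n_obstructive_hypopneas) / total_sleep_hours
    return oai, oahi

# Hypothetical infant recording: 144 apneas and 136 hypopneas over 8 h of sleep.
oai, oahi = apnea_indices(144, 136, 8.0)
print(oai, oahi)  # 18.0 35.0
```

With these toy counts, OAI ≥ 3 and OAHI ≥ 5 both indicate treatment, illustrating why the re-classification mattered for severity grading more than for treatment decisions.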


Subject(s)
Physicians; Pierre Robin Syndrome; Sleep Apnea, Obstructive; Child; Humans; Infant; Pierre Robin Syndrome/complications; Polysomnography; Sleep
8.
BMC Bioinformatics ; 23(1): 14, 2022 Jan 06.
Article in English | MEDLINE | ID: mdl-34991440

ABSTRACT

BACKGROUND: Understanding the synergetic and antagonistic effects of combinations of drugs and toxins is vital for many applications, including treatment of multifactorial diseases and ecotoxicological monitoring. Synergy is usually assessed by comparing the response of drug combinations to a predicted non-interactive response from reference (null) models. Possible choices of null models are Loewe additivity, Bliss independence and the recently rediscovered Hand model. A different approach is taken by the MuSyC model, which directly fits a generalization of the Hill model to the data. All of these models, however, fit the dose-response relationship with a parametric model. RESULTS: We propose the Hand-GP model, a non-parametric model based on the combination of the Hand model with Gaussian processes. We introduce a new logarithmic squared exponential kernel for the Gaussian process which captures the logarithmic dependence of response on dose. From the monotherapeutic response and the Hand principle, we construct a null reference response, and synergy is assessed from the difference between this null reference and the Gaussian process fitted response. Statistical significance of the difference is assessed from the confidence intervals of the Gaussian process fits. We evaluate the performance of our model on a simulated data set from Greco, two simulated data sets of our own design and two benchmark data sets from Chou and Talalay. We compare the Hand-GP model to standard synergy models and show that our model performs better on these data sets. We also compare our model to the MuSyC model, as an example of a recent method, on these five data sets and on two drug-combination screens: the Mott et al. anti-malarial screen and the O'Neil et al. anti-cancer screen. We identify cases in which the Hand-GP model is preferred and cases in which the MuSyC model is preferred. CONCLUSION: The Hand-GP model is a flexible model to capture synergy. Its non-parametric and probabilistic nature allows it to model a wide variety of response patterns.
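The "logarithmic squared exponential kernel" named above is, in essence, a standard squared-exponential kernel evaluated on log-dose rather than dose, so that similarity between doses depends on their ratio rather than their difference. A minimal sketch (the hyperparameter values are placeholders, not those of the paper):

```python
import math

def log_se_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential kernel on log-dose, for doses x1, x2 > 0:
    k(x, x') = s^2 * exp(-(ln x - ln x')^2 / (2 * l^2)).
    Captures the logarithmic dependence of response on dose."""
    d = math.log(x1) - math.log(x2)
    return variance * math.exp(-d * d / (2.0 * lengthscale ** 2))

# Dose pairs an order of magnitude apart are equally "close", regardless of scale:
print(math.isclose(log_se_kernel(1.0, 10.0), log_se_kernel(10.0, 100.0), rel_tol=1e-9))
```

This scale-invariance is exactly what a plain squared-exponential kernel on raw dose lacks, and is why the log transform suits dose-response surfaces spanning several orders of magnitude.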

9.
Sci Rep ; 11(1): 12596, 2021 06 15.
Article in English | MEDLINE | ID: mdl-34131246

ABSTRACT

Women with complications of pregnancy such as preeclampsia and preterm birth are at risk for adverse long-term outcomes, including an increased future risk of chronic kidney disease (CKD) and end-stage kidney disease (ESKD). This observational cohort study aimed to examine the risk of CKD after preterm delivery and preeclampsia in a large obstetric cohort in Germany, taking into account preexisting comorbidities, potential confounders, and the severity of CKD. Statutory claims data of the AOK Baden-Wuerttemberg were used to identify women with singleton live births between 2010 and 2017. Women with preexisting conditions including CKD, ESKD, and kidney replacement therapy (KRT) were excluded. Preterm delivery (< 37 gestational weeks) was the main exposure of interest; preeclampsia was investigated as secondary exposure. The main outcome was a newly recorded diagnosis of CKD in the claims database. Data were analyzed using Cox proportional hazard regression models. The time-dependent occurrence of CKD was analyzed for four strata, i.e., births with (i) neither an exposure of preterm delivery nor an exposure of preeclampsia, (ii) no exposure of preterm delivery but exposure of at least one preeclampsia, (iii) an exposure of at least one preterm delivery but no exposure of preeclampsia, or (iv) joint exposure of preterm delivery and preeclampsia. Risk stratification also included different CKD stages. Adjustments were made for confounding factors, such as maternal age, diabetes, obesity, and dyslipidemia. The cohort consisted of 193,152 women with 257,481 singleton live births. Mean observation time was 5.44 years. In total, there were 16,948 preterm deliveries (6.58%) and 14,448 births with at least one prior diagnosis of preeclampsia (5.61%). With a mean age of 30.51 years, 1,821 women developed any form of CKD. 
Compared to women with no risk exposure, women with a history of at least one preterm delivery (HR = 1.789) and women with a history of at least one preeclampsia (HR = 1.784) had an increased risk for any subsequent CKD. The highest risk for CKD was found for women with a joint exposure of preterm delivery and preeclampsia (HR = 5.227). These effects held in this magnitude only for the outcome of mild to moderate CKD, but were strongly increased for the outcome of severe CKD (HR = 11.90). Preterm delivery and preeclampsia were identified as independent risk factors for all CKD stages. A joint exposure of preterm delivery and preeclampsia was associated with an excessive maternal risk burden for CKD in the first decade after pregnancy. Since consistent follow-up policies have not yet been defined, these results will help guide long-term surveillance for early detection and prevention of kidney disease, especially for women affected by both conditions.


Subject(s)
Pre-Eclampsia/diagnosis; Pregnancy Complications/diagnosis; Premature Birth/diagnosis; Renal Insufficiency, Chronic/diagnosis; Adult; Female; Humans; Infant, Newborn; Pre-Eclampsia/epidemiology; Pre-Eclampsia/physiopathology; Pregnancy; Pregnancy Complications/epidemiology; Pregnancy Complications/physiopathology; Premature Birth/epidemiology; Premature Birth/physiopathology; Renal Insufficiency, Chronic/epidemiology; Renal Insufficiency, Chronic/etiology; Renal Insufficiency, Chronic/physiopathology; Renal Replacement Therapy
10.
Mol Cell Proteomics ; 19(12): 2157-2168, 2020 12.
Article in English | MEDLINE | ID: mdl-33067342

ABSTRACT

Cross-linking MS (XL-MS) has been recognized as an effective source of information about protein structures and interactions. In contrast to regular peptide identification, XL-MS has to deal with a quadratic search space, where peptides from every protein could potentially be cross-linked to any other protein. To cope with this search space, most tools apply different heuristics for search space reduction. We introduce a new open-source XL-MS database search algorithm, OpenPepXL, which offers increased sensitivity compared with other tools. OpenPepXL searches the full search space of an XL-MS experiment without using heuristics to reduce it. Because of efficient data structures and built-in parallelization OpenPepXL achieves excellent runtimes and can also be deployed on large compute clusters and cloud services while maintaining a slim memory footprint. We compared OpenPepXL to several other commonly used tools for identification of noncleavable labeled and label-free cross-linkers on a diverse set of XL-MS experiments. In our first comparison, we used a data set from a fraction of a cell lysate with a protein database of 128 targets and 128 decoys. At 5% FDR, OpenPepXL finds from 7% to over 50% more unique residue pairs (URPs) than other tools. On data sets with available high-resolution structures for cross-link validation OpenPepXL reports from 7% to over 40% more structurally validated URPs than other tools. Additionally, we used a synthetic peptide data set that allows objective validation of cross-links without relying on structural information and found that OpenPepXL reports at least 12% more validated URPs than other tools. It has been built as part of the OpenMS suite of tools and supports Windows, macOS, and Linux operating systems. OpenPepXL also supports the MzIdentML 1.2 format for XL-MS identification results. It is freely available under a three-clause BSD license at https://openms.org/openpepxl.


Subject(s)
Cross-Linking Reagents/chemistry; Peptides/analysis; Software; Algorithms; Amino Acid Sequence; Databases, Protein; HEK293 Cells; Humans; Mass Spectrometry; Models, Molecular; Peptides/chemistry; Ribosomes/metabolism
11.
Diagn Pathol ; 15(1): 130, 2020 Oct 23.
Article in English | MEDLINE | ID: mdl-33097073

ABSTRACT

BACKGROUND: The conventional method for the diagnosis of malaria parasites is the microscopic examination of stained blood films, which is time consuming and requires expertise. We introduce computer-based image segmentation and life stage classification with a random forest classifier. Segmentation and stage classification are performed on a large dataset of malaria parasites with ground truth labels provided by experts. METHODS: We made use of Giemsa stained images obtained from the blood of 16 patients infected with Plasmodium falciparum. Experts labeled the parasite types from each of the images. We applied a two-step approach: image segmentation followed by life stage classification. In segmentation, we classified each pixel as a parasite or non-parasite pixel using a random forest classifier. Performance was evaluated with classification accuracy, Dice coefficient and free-response receiver operating characteristic (FROC) analysis. In life stage classification, we classified each of the segmented objects into one of eight classes: six parasite life stages (early ring; late ring or early trophozoite; mid trophozoite; early schizont; late schizont or segmented) and two further classes (white blood cell and debris). RESULTS: Our segmentation method gives an average cross-validated Dice coefficient of 0.82, which is a 13% improvement compared to the Otsu method. The Otsu method achieved a True Positive Fraction (TPF) of 0.925 at the expense of a False Positive Rate (FPR) of 2.45. At the same TPF of 0.925, our method achieved an FPR of 0.92, an improvement of more than a factor of two. We find that inclusion of the average intensity of the whole image as a feature for the random forest considerably improves segmentation performance. We obtain an overall accuracy of 58.8% when classifying all life stages. Stages are mostly confused with their neighboring stages. When we reduce the life stages to ring, trophozoite and schizont only, we obtain an accuracy of 82.7%. CONCLUSION: Pixel classification gives better segmentation performance than the conventional Otsu method. Effects of staining and background variations can be reduced with the inclusion of average intensity features. The proposed method and data set can be used in the development of automatic tools for the detection and stage classification of malaria parasites. The data set is publicly available as a benchmark for future studies.
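The Dice coefficient used to evaluate segmentation above is a straightforward overlap measure between a predicted and a ground-truth binary mask. A minimal sketch (the toy masks are invented for illustration):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat lists of 0/1):
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

pred  = [1, 1, 1, 0, 0, 0, 1, 0]   # predicted parasite pixels
truth = [1, 1, 0, 0, 0, 0, 1, 1]   # expert-labeled parasite pixels
print(dice(pred, truth))  # 0.75
```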


Subject(s)
Image Processing, Computer-Assisted/methods; Malaria, Falciparum/diagnosis; Plasmodium falciparum; Algorithms; Humans; Life Cycle Stages; Malaria, Falciparum/blood; Plasmodium falciparum/growth & development
12.
J Proteome Res ; 19(3): 1060-1072, 2020 03 06.
Article in English | MEDLINE | ID: mdl-31975601

ABSTRACT

Accurate protein inference in the presence of shared peptides is still one of the key problems in bottom-up proteomics. Most protein inference tools employing simple heuristic inference strategies are efficient but exhibit reduced accuracy. More advanced probabilistic methods often exhibit better inference quality but tend to be too slow for large data sets. Here, we present a novel protein inference method, EPIFANY, combining a loopy belief propagation algorithm with convolution trees for efficient processing of Bayesian networks. We demonstrate that EPIFANY combines the reliable protein inference of Bayesian methods with significantly shorter runtimes. On the 2016 iPRG protein inference benchmark data, EPIFANY is the only tested method that finds all true-positive proteins at a 5% protein false discovery rate (FDR) without strict prefiltering on the peptide-spectrum match (PSM) level, yielding an increase in identification performance (+10% in the number of true positives and +14% in partial AUC) compared to previous approaches. Even very large data sets with hundreds of thousands of spectra (which are intractable with other Bayesian and some non-Bayesian tools) can be processed with EPIFANY within minutes. The increased inference quality including shared peptides results in better protein inference results and thus increased robustness of the biological hypotheses generated. EPIFANY is available as open-source software for all major platforms at https://OpenMS.de/epifany.
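The 5% protein FDR threshold mentioned above is conventionally estimated with a target-decoy strategy: proteins are ranked by score and the list is cut where the decoy-to-target ratio exceeds the threshold. This sketch is a generic illustration of that idea, not EPIFANY's actual Bayesian scoring; the scores are invented.

```python
def proteins_at_fdr(scored, threshold=0.05):
    """Target-decoy FDR filtering sketch: sort proteins by score (best first)
    and keep the largest prefix whose estimated FDR (#decoys / #targets)
    stays within the threshold. `scored` is a list of (score, is_decoy)."""
    ordered = sorted(scored, key=lambda s: -s[0])
    best_cut = targets = decoys = 0
    for i, (_score, is_decoy) in enumerate(ordered, start=1):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= threshold:
            best_cut = i          # remember the deepest cut still under the FDR
    return [score for score, is_decoy in ordered[:best_cut] if not is_decoy]

# Hypothetical (score, is_decoy) pairs; decoys estimate the false-match rate.
hits = [(9.1, False), (8.7, False), (8.2, False), (7.9, True), (7.5, False)]
print(len(proteins_at_fdr(hits)))  # 3 targets survive at 5% FDR
```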


Subject(s)
Algorithms; Proteomics; Bayes Theorem; Databases, Protein; Proteins; Software
13.
Front Pharmacol ; 10: 1384, 2019.
Article in English | MEDLINE | ID: mdl-31849651

ABSTRACT

In synergy studies, one focuses on compound combinations that promise a synergistic or antagonistic effect. With the help of high-throughput techniques, a huge number of compound combinations can be screened and filtered for suitable candidates for a more detailed analysis. Those promising candidates are chosen based on the deviation between a measured response and an expected non-interactive response. A non-interactive response is based on a principle of no interaction, such as Loewe Additivity or Bliss Independence. In a previous study, we introduced an explicit formulation of the hitherto implicitly defined Loewe Additivity, the so-called Explicit Mean Equation. In the current study, we show that this Explicit Mean Equation outperforms the original implicit formulation of Loewe Additivity and Bliss Independence when measuring synergy in terms of the deviation between measured and expected response, called the lack-of-fit. Further, we show that computing synergy as lack-of-fit outperforms a parametric approach. We show this on two datasets of compound combinations that are categorized into synergistic, non-interactive, and antagonistic.

14.
PLoS One ; 14(5): e0216559, 2019.
Article in English | MEDLINE | ID: mdl-31071186

ABSTRACT

RATIONALE & OBJECTIVE: Early prediction of chronic kidney disease (CKD) progression to end-stage kidney disease (ESKD) currently uses Cox models that include baseline estimated glomerular filtration rate (eGFR) only. Alternative approaches include a Cox model that includes the eGFR slope determined over a baseline period of time, a Cox model with time-varying eGFR, or a joint modeling approach. We studied whether these more complex approaches further improve ESKD prediction. STUDY DESIGN: Prospective cohort. SETTING & PARTICIPANTS: We re-used data from two CKD cohorts including patients with baseline eGFR >30 mL/min per 1.73 m2. MASTERPLAN (N = 505; 55 ESKD events) was used as the development dataset, and NephroTest (N = 1385; 72 events) for validation. PREDICTORS: All models included age, sex, eGFR, and albuminuria, known prognostic markers for ESKD. ANALYTICAL APPROACH: We trained the models on the MASTERPLAN data and determined discrimination and calibration for each model at 2 years of follow-up for a prediction horizon of 2 years in the NephroTest cohort. We benchmarked the predictive performance against the Kidney Failure Risk Equation (KFRE). RESULTS: The C-statistic for the KFRE was 0.94 (95% CI 0.86 to 1.01). Performance was similar for the Cox model with time-varying eGFR (0.92 [0.84 to 0.97]), eGFR (0.95 [0.90 to 1.00]), and the joint model (0.91 [0.87 to 0.96]). The Cox model with eGFR slope showed the best calibration. CONCLUSION: In the present study, where the outcome was rare and follow-up data were highly complete, the joint models did not offer an improvement in predictive performance over more traditional approaches such as a survival model with time-varying eGFR or a model with eGFR slope.
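The C-statistics compared above are concordance indices: among all usable pairs of patients, the fraction where the model assigned the higher risk to the patient who reached ESKD first. A minimal sketch of Harrell's C (the follow-up data below are invented):

```python
def c_statistic(times, events, risks):
    """Harrell's concordance index. A pair (i, j) is comparable when i had an
    event and failed before j; it is concordant when i also got the higher
    predicted risk. Risk ties count as half-concordant."""
    conc, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:   # i observed to fail before j
                comparable += 1
                if risks[i] > risks[j]:
                    conc += 1.0
                elif risks[i] == risks[j]:
                    conc += 0.5
    return conc / comparable

# Hypothetical follow-up times (years), ESKD events (1/0), and model risk scores:
times  = [1.0, 2.0, 3.0, 4.0]
events = [1,   1,   0,   0]
risks  = [0.9, 0.7, 0.3, 0.2]
print(c_statistic(times, events, risks))  # 1.0: risks perfectly rank the failures
```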


Subject(s)
Kidney Failure, Chronic/diagnosis; Models, Statistical; Renal Insufficiency, Chronic/complications; Risk Assessment/methods; Disease Progression; Female; Glomerular Filtration Rate; Humans; Kidney Failure, Chronic/etiology; Kidney Function Tests; Male; Middle Aged; Prognosis; Prospective Studies
15.
Front Pharmacol ; 9: 31, 2018.
Article in English | MEDLINE | ID: mdl-29467650

ABSTRACT

High-throughput techniques allow for massive screening of drug combinations. To find combinations that exhibit an interaction effect, one filters for promising compound combinations by comparing to a response without interaction. A common principle for no interaction is Loewe Additivity which is based on the assumption that no compound interacts with itself and that two doses from different compounds having the same effect are equivalent. It then should not matter whether a component is replaced by the other or vice versa. We call this assumption the Loewe Additivity Consistency Condition (LACC). We derive explicit and implicit null reference models from the Loewe Additivity principle that are equivalent when the LACC holds. Of these two formulations, the implicit formulation is the known General Isobole Equation (Loewe, 1928), whereas the explicit one is the novel contribution. The LACC is violated in a significant number of cases. In this scenario the models make different predictions. We analyze two data sets of drug screening that are non-interactive (Cokol et al., 2011; Yadav et al., 2015) and show that the LACC is mostly violated and Loewe Additivity not defined. Further, we compare the measurements of the non-interactive cases of both data sets to the theoretical null reference models in terms of bias and mean squared error. We demonstrate that the explicit formulation of the null reference model leads to smaller mean squared errors than the implicit one and is much faster to compute.
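The implicit formulation referred to above, the General Isobole Equation, defines the non-interactive response y of a dose pair (d1, d2) only implicitly, via d1/D1(y) + d2/D2(y) = 1, where Di(y) is the monotherapy dose of compound i producing response y. This is why it must be solved numerically, e.g. by bisection. A sketch under the assumption of decreasing Hill monotherapy curves (all parameter values invented):

```python
def hill(d, ec50, slope):
    """Monotherapy Hill curve, response falling from 1 (no dose) toward 0."""
    return 1.0 / (1.0 + (d / ec50) ** slope)

def inv_hill(y, ec50, slope):
    """Dose producing response y (0 < y < 1) on the Hill curve above."""
    return ec50 * ((1.0 - y) / y) ** (1.0 / slope)

def loewe_response(d1, d2, p1, p2, tol=1e-9):
    """Solve the General Isobole Equation d1/D1(y) + d2/D2(y) = 1 for the
    non-interactive response y by bisection (the implicit Loewe model).
    p1, p2 are (ec50, slope) pairs for the two monotherapy curves."""
    lo, hi = 1e-9, 1.0 - 1e-9          # response is bracketed in (0, 1)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        g = d1 / inv_hill(mid, *p1) + d2 / inv_hill(mid, *p2)
        if g > 1.0:                     # combined dose exceeds the isobole
            hi = mid                    # -> the non-interactive response is lower
        else:
            lo = mid
    return (lo + hi) / 2.0

# Sanity check under the LACC: two identical compounds at half the EC50 each
# should jointly behave like one full EC50 dose, giving a response of 0.5.
p = (1.0, 1.0)   # ec50 = 1, slope = 1
print(round(loewe_response(0.5, 0.5, p, p), 6))  # 0.5
```

The repeated root-finding inside every fitted dose pair is also why the explicit formulation discussed in this abstract is much faster to compute.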

16.
Sci Rep ; 8(1): 1507, 2018 01 24.
Article in English | MEDLINE | ID: mdl-29367629

ABSTRACT

The visual system is able to recognize body motion from impoverished stimuli. This requires combining stimulus information with visual priors. We present a new visual illusion showing that one of these priors is the assumption that bodies are typically illuminated from above. A change of illumination direction from above to below flips the perceived locomotion direction of a biological motion stimulus. Control experiments show that the underlying mechanism is different from shape-from-shading and directly combines information about body motion with a lighting-from-above prior. We further show that the illusion is critically dependent on the intrinsic luminance gradients of the most mobile parts of the moving body. We present a neural model with physiologically plausible mechanisms that accounts for the illusion and shows how the illumination prior might be encoded within the visual pathway. Our experiments demonstrate, for the first time, a direct influence of illumination priors in high-level motion vision.


Subject(s)
Illusions; Lighting/methods; Motion Perception; Visual Pathways/physiology; Humans; Models, Neurological
17.
Bioinformatics ; 34(5): 803-811, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29069283

ABSTRACT

Motivation: Computational models in biology are frequently underdetermined, due to limits in our capacity to measure biological systems. In particular, mechanistic models often contain parameters whose values are not constrained by a single type of measurement. It may be possible to achieve better model determination by combining the information contained in different types of measurements. Bayesian statistics provides a convenient framework for this, allowing a quantification of the reduction in uncertainty with each additional measurement type. We wished to explore whether such integration is feasible and whether it can allow computational models to be more accurately determined. Results: We created an ordinary differential equation model of cell cycle regulation in budding yeast and integrated data from 13 different studies covering different experimental techniques. We found that for some parameters, a single type of measurement, relative time course mRNA expression, is sufficient to constrain them. Other parameters, however, were only constrained when two types of measurements were combined, namely relative time course and absolute transcript concentration. Comparing the estimates to measurements from three additional, independent studies, we found that the degradation and transcription rates indeed matched the model predictions in order of magnitude. The predicted translation rate was incorrect however, thus revealing a deficiency in the model. Since this parameter was not constrained by any of the measurement types separately, it was only possible to falsify the model when integrating multiple types of measurements. In conclusion, this study shows that integrating multiple measurement types can allow models to be more accurately determined. Availability and implementation: The models and files required for running the inference are included in the Supplementary information. Contact: l.wessels@nki.nl. 
Supplementary information: Supplementary data are available at Bioinformatics online.


Subject(s)
Computational Biology/methods; Models, Biological; Bayes Theorem; Saccharomycetales/genetics; Saccharomycetales/metabolism
18.
BMC Syst Biol ; 10(1): 100, 2016 10 21.
Article in English | MEDLINE | ID: mdl-27769238

ABSTRACT

BACKGROUND: Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. RESULTS: We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. CONCLUSIONS: BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
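The posterior samplers that a toolkit like BCM bundles can be illustrated by the simplest member of the family, random-walk Metropolis. This is a toy sketch, not BCM's implementation (BCM provides eleven multithreaded algorithms and marginal-likelihood estimation); the target density and tuning values are invented.

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D posterior.
    Proposes Gaussian steps and accepts with probability
    min(1, post(prop)/post(current)), computed in log space."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)                           # rejected steps repeat x
    return samples

# Toy posterior: a standard normal; the sample mean should drift toward 0.
log_post = lambda x: -0.5 * x * x
draws = metropolis(log_post, 0.0)
print(len(draws))  # 5000
```

In practice the log-posterior would be the (usually expensive) likelihood of an ODE model plus the prior, which is exactly why efficient, parallel implementations matter.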


Subject(s)
Computational Biology/methods , Computer Simulation , Software , Algorithms , Bayes Theorem , Kinetics , Biological Models
19.
BMC Bioinformatics ; 15: 342, 2014 Oct 21.
Article in English | MEDLINE | ID: mdl-25336059

ABSTRACT

BACKGROUND: High-throughput screening (HTS) produces thousands of images containing millions of cells. Biologists could classify each of these cells into a phenotype by visual inspection, but with millions of cells this visual classification task becomes infeasible. Instead, biologists train classification models on a few thousand visually classified example cells and iteratively improve the training data by visual inspection of the important misclassified phenotypes. Classification methods differ in performance and in performance evaluation time. We present a comparative study of the computational performance of gentle boosting, joint boosting (CellProfiler Analyst, CPA), support vector machines (linear and radial basis function kernels) and linear discriminant analysis (LDA) on two data sets of HT29 and HeLa cancer cells. RESULTS: For the HT29 data set we find that gentle boosting, SVM (linear) and SVM (RBF) are close in performance, but SVM (linear) is faster than gentle boosting and SVM (RBF); the average performance difference between SVM (RBF) and SVM (linear) is 0.42 %. For the HeLa data set we find that SVM (RBF) outperforms the other classification methods and is on average 1.41 % better in performance than SVM (linear). CONCLUSIONS: Our study proposes SVM (linear) for iterative improvement of the training data and SVM (RBF) for the final classifier used to classify all unlabeled cells in the whole data set.
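The linear-versus-RBF comparison workflow the study describes can be sketched with scikit-learn on synthetic data (the actual study used cell features extracted from HT29 and HeLa images; the dataset parameters below are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-cell feature vectors with two phenotype classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# SVM (linear): cheap to train, suited to iterative training-set refinement.
# SVM (RBF): the candidate for the final classifier over all unlabeled cells.
acc = {}
for name, clf in [("linear", SVC(kernel="linear")),
                  ("rbf", SVC(kernel="rbf", gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    acc[name] = clf.score(X_te, y_te)

print(acc)
```

In the iterative loop the abstract describes, the linear model's speed matters because it is retrained after each round of visual inspection of misclassified cells.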


Subject(s)
Computational Biology/methods , High-Throughput Screening Assays/methods , Molecular Imaging , Discriminant Analysis , HT29 Cells , HeLa Cells , Humans , Linear Models , Support Vector Machine
20.
J Proteome Res ; 13(9): 3871-80, 2014 Sep 05.
Article in English | MEDLINE | ID: mdl-25102230

ABSTRACT

A challenge in proteomics is that many observations are missing, with the probability of missingness increasing as abundance decreases. Adjusting for this informative missingness is required to accurately assess which proteins are differentially abundant. We propose an empirical Bayesian random censoring threshold (EBRCT) model that takes the pattern of missingness into account in the identification of differential abundance. We compare our model with four alternatives: one that considers the missing values as missing completely at random (MCAR model), one with a fixed censoring threshold for each protein species (fixed censoring model), and two imputation models, k-nearest neighbors (IKNN) and singular value thresholding (SVTI). We demonstrate that the EBRCT model outperforms all alternative models when applied to the CPTAC study 6 benchmark data set. The model is applicable to any label-free peptide or protein quantification pipeline and is provided as an R script.
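Why informative missingness matters can be shown with a small simulation (a toy illustration of the bias, not the EBRCT model itself; the abundance distribution and dropout curve are invented): when low-abundance values drop out preferentially, the mean of the observed values overestimates the true mean, which is exactly the bias that treating the data as MCAR leaves uncorrected.

```python
import math
import random

rng = random.Random(0)

# Hypothetical log-abundances for one protein across many spectra.
true_values = [rng.gauss(10.0, 2.0) for _ in range(50000)]

# Informative missingness: the lower the abundance, the more likely a dropout.
observed = []
for v in true_values:
    p_missing = 1.0 / (1.0 + math.exp(v - 9.0))  # logistic dropout curve
    if rng.random() > p_missing:
        observed.append(v)

true_mean = sum(true_values) / len(true_values)
naive_mean = sum(observed) / len(observed)  # MCAR assumption: biased upward
print(true_mean, naive_mean)
```

A censoring-aware likelihood models the dropout mechanism jointly with the abundances, recovering an unbiased estimate from the same observed data.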


Subject(s)
Bayes Theorem , Statistical Models , Proteomics/methods , Mass Spectrometry , Proteins/analysis , Proteins/chemistry , ROC Curve