ABSTRACT
BACKGROUND: Few methods are available for transparently combining different evidence streams for chemical risk assessment to reach an integrated conclusion on the probability of causation. Hence, the UK Committees on Toxicity (COT) and on Carcinogenicity (COC) have reviewed current practice and developed guidance on how to achieve this in a transparent manner, using graphical visualisation. METHODS/APPROACH: All lines of evidence, including toxicological and epidemiological studies, new approach methodologies, and mode of action, should be considered, taking account of their strengths and weaknesses in their relative weighting towards a conclusion on the probability of causation. A qualitative estimate of the probability of causation is plotted for each line of evidence and a combined estimate provided. DISCUSSION/CONCLUSIONS: Guidance is provided on integration of multiple lines of evidence for causation, based on current best practice. Qualitative estimates of probability for each line of evidence are plotted graphically. This ensures that a deliberative, consensus conclusion on the likelihood of causation is reached. It also ensures clear communication of the influence of the different lines of evidence on the overall conclusion on causality. Issues on which advice from the respective Committees is sought vary considerably; hence the guidance is designed to be sufficiently flexible to meet this need.
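As an illustration only, a minimal sketch of the kind of graphical visualisation described, assuming a simple ordinal probability scale; the category labels and example values are hypothetical and are not taken from the COT/COC guidance:

```python
# Hypothetical sketch of the kind of visualisation described: a qualitative
# probability-of-causation estimate per line of evidence, plotted alongside
# the combined estimate. Scale labels and values are illustrative only.
import matplotlib.pyplot as plt

scale = ["Improbable", "Unlikely", "As likely as not", "Likely", "Probable"]
estimates = {                      # index into the ordinal scale above
    "Epidemiology": 3,
    "Animal toxicology": 2,
    "New approach methodologies": 2,
    "Mode of action": 3,
    "Combined": 3,
}

fig, ax = plt.subplots(figsize=(7, 3))
lines_of_evidence = list(estimates)
ax.scatter([estimates[k] for k in lines_of_evidence],
           range(len(lines_of_evidence)))
ax.set_yticks(range(len(lines_of_evidence)), labels=lines_of_evidence)
ax.set_xticks(range(len(scale)), labels=scale)
ax.set_xlabel("Qualitative probability of causation")
ax.invert_yaxis()                  # combined estimate plotted last
fig.tight_layout()
plt.show()
```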
Subjects
Probability, Risk Assessment, Humans, United Kingdom, Animals
ABSTRACT
This article discusses issues associated with the design and interpretation of biomarker studies, highlights various guidelines, and lists points to look out for when assessing such studies.
Subjects
Research Design, Biomarkers, Statistical Data Interpretation, Humans
ABSTRACT
This article emphasizes the importance of study design, describes statistical methods for analysing the data, and discusses some of the implications of these methods. It also reviews how these issues relate to the use of SpIN and SnOUT rules.
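To illustrate the SpIN/SnOUT logic, a minimal sketch with hypothetical numbers: under SpIN, a highly SPecific test's Positive result rules the condition IN; under SnOUT, a highly SeNsitive test's Negative result rules it OUT. Note that both depend on prevalence, which is one reason study design matters:

```python
# Illustrative SpIN/SnOUT arithmetic (hypothetical numbers).
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied at a given prevalence."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# SpIN: high specificity makes a positive result strong evidence of disease.
ppv, _ = predictive_values(sensitivity=0.70, specificity=0.99, prevalence=0.10)
# SnOUT: high sensitivity makes a negative result strong evidence of absence.
_, npv = predictive_values(sensitivity=0.99, specificity=0.70, prevalence=0.10)
print(f"SpIN example PPV: {ppv:.2f}; SnOUT example NPV: {npv:.2f}")
```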
Subjects
Research Design, Biomarkers, Humans
ABSTRACT
Diagnostic statistics such as sensitivity and specificity are widely used in the assessment of biomarkers. Interpretation of these and other statistics derived from a 2 × 2 table can be complex. The properties of the commonly used statistics are discussed. The aim is to help authors interpret these statistics when designing studies and reporting results, and to assist referees and others who assess such papers.
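For concreteness, a minimal sketch (with hypothetical counts) of the statistics commonly derived from a 2 × 2 table:

```python
# Minimal sketch of statistics derived from a 2 x 2 table; the counts are
# hypothetical. Note that PPV and NPV read directly from the table are only
# meaningful when the study sample reflects the prevalence of the condition
# in the target population.
def two_by_two_stats(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),   # positive likelihood ratio
        "LR-": (1 - sens) / spec,   # negative likelihood ratio
    }

print(two_by_two_stats(tp=90, fp=25, fn=10, tn=175))
```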
Subjects
Research Design, Biomarkers, Humans, Sensitivity and Specificity
ABSTRACT
Objectives. To specify symptoms and measure prevalence of psychological distress among incarcerated people in long-term solitary confinement. Methods. We gathered data via semistructured, in-depth interviews; Brief Psychiatric Rating Scale (BPRS) assessments; and systematic reviews of medical and disciplinary files for 106 randomly selected people in solitary confinement in the Washington State Department of Corrections in 2017. We performed 1-year follow-up interviews and BPRS assessments with 80 of these incarcerated people, and we present the results of our qualitative content analysis and descriptive statistics. Results. BPRS results showed clinically significant symptoms of depression, anxiety, or guilt among half of our research sample. Administrative data showed disproportionately high rates of serious mental illness and self-harming behavior compared with general prison populations. Interview content analysis revealed additional symptoms, including social isolation, loss of identity, and sensory hypersensitivity. Conclusions. Our coordinated study of rating scale, interview, and administrative data illustrates the public health crisis of solitary confinement. Because 95% or more of all incarcerated people, including those who experienced solitary confinement, are eventually released, understanding disproportionate psychopathology matters for developing prevention policies and addressing the unique needs of people who have experienced solitary confinement, an extreme element of mass incarceration.
Subjects
Prisoners, Psychological Distress, Social Isolation/psychology, Psychological Stress, Adult, Aged, Cross-Sectional Studies, Humans, Male, Middle Aged, Prevalence, Prisoners/psychology, Prisoners/statistics & numerical data, Prisons, Psychological Stress/epidemiology, Psychological Stress/physiopathology, Psychological Stress/psychology, United States/epidemiology, Young Adult
ABSTRACT
OBJECTIVES: In the present study, we have tested whether MRI T1 relaxation time is a sensitive marker to detect early stages of amyloidosis and gliosis in the young 5xFAD transgenic mouse, a well-established animal model for Alzheimer's disease. MATERIALS AND METHODS: 5xFAD and wild-type mice were imaged in a 4.7 T Varian horizontal bore MRI system to generate T1 quantitative maps using the spin-echo multi-slice sequence. Following immunostaining for glial fibrillary acidic protein, Iba-1, and amyloid-β, T1 and the area fraction of staining were quantified in the posterior parietal and primary somatosensory cortex and the corpus callosum. RESULTS: In comparison with age-matched wild-type mice, we observed first signs of amyloidosis in 2.5-month-old 5xFAD mice, and development of gliosis in 5-month-old 5xFAD mice. In contrast, MRI T1 relaxation times of young, i.e., 2.5- and 5-month-old, 5xFAD mice were not significantly different from those of age-matched wild-type controls. Furthermore, although disease progression was detectable as increased amyloid-β load in the brain of 5-month-old 5xFAD mice compared with 2.5-month-old 5xFAD mice, MRI T1 relaxation time did not change. CONCLUSIONS: In summary, our data suggest that MRI T1 relaxation time is not a sensitive measure of either disease onset or progression at early stages in the 5xFAD transgenic mouse model.
Subjects
Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Magnetic Resonance Imaging, Amyloid beta-Peptides/metabolism, Animals, Calcium-Binding Proteins/metabolism, Corpus Callosum/diagnostic imaging, Animal Disease Models, Disease Progression, Female, Glial Fibrillary Acidic Protein/metabolism, Male, Mice, Transgenic Mice, Microfilament Proteins/metabolism, Sensorimotor Cortex/diagnostic imaging
ABSTRACT
In the life sciences, many measurement methods yield only the relative abundances of different components in a sample. With such relative, or compositional, data, differential expression needs careful interpretation, and correlation, a statistical workhorse for analyzing pairwise relationships, is an inappropriate measure of association. Using yeast gene expression data we show how correlation can be misleading and present proportionality as a valid alternative for relative data. We show how the strength of proportionality between two variables can be meaningfully and interpretably described by a new statistic ρ, which can be used instead of correlation as the basis of familiar analyses and visualisation methods, including co-expression networks and clustered heatmaps. While the main aim of this study is to present proportionality as a means to analyse relative data, it also raises intriguing questions about the molecular mechanisms underlying the proportional regulation of a range of yeast genes.
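A minimal sketch of a proportionality statistic of this general form, assuming the common formulation ρ = 2·cov(log x, log y) / (var(log x) + var(log y)), equivalently 1 − var(log(x/y)) / (var(log x) + var(log y)); the paper's exact definition (e.g. choice of log-ratio transformation) may differ, so treat this as an illustration:

```python
# Sketch of a proportionality statistic for two vectors of relative
# abundances; the published definition may differ in detail.
import numpy as np

def proportionality_rho(x, y):
    lx, ly = np.log(x), np.log(y)
    return 2 * np.cov(lx, ly)[0, 1] / (np.var(lx, ddof=1) + np.var(ly, ddof=1))

rng = np.random.default_rng(0)
base = rng.lognormal(size=100)
x = base * rng.lognormal(sigma=0.05, size=100)  # nearly proportional to base
y = base * rng.lognormal(sigma=1.0, size=100)   # only weakly related to base
print(proportionality_rho(x, base), proportionality_rho(y, base))
```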
Subjects
Computational Biology/methods, Genetic Models, Research Design, Fungal Gene Expression Regulation/genetics, Statistical Models, Fungal RNA/genetics, Messenger RNA/genetics, Yeasts/genetics
ABSTRACT
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data, with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
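A sketch of the kind of classifier comparison described, using scikit-learn with a synthetic stand-in for the Classifynder image feature set; the real features, labels and any tuning are not assumed here:

```python
# Compare the classifier families named in the abstract on placeholder data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
models = {
    "Linear discriminant": LinearDiscriminantAnalysis(),
    "Support vector machine": SVC(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```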
Subjects
Algorithms, Computational Biology/methods, Computer-Assisted Image Processing/methods, Pollen/cytology, Honey/analysis, Honey/classification, Magnoliopsida, Biological Models, New Zealand, Plants/classification, Pollen/classification, Reproducibility of Results, Species Specificity
ABSTRACT
The Cramer classification scheme has emerged as one of the most extensively adopted predictive toxicology tools, owing in part to its employment for chemical categorisation within threshold of toxicological concern evaluation. The characteristics of several of its rules have contributed to inconsistencies with respect to the degree of hazard attributed to common (particularly food-relevant) substances. This investigation examines these discrepancies and their origins, raising awareness of such issues amongst users seeking to apply and/or adapt the rule-set. A dataset of over 3,000 compounds was assembled, each with Cramer class assignments issued by up to four groups of industry and academic experts. These were complemented by corresponding outputs from in silico implementations of the scheme present within Toxtree and OECD QSAR Toolbox software, including a working implementation of a "Revised Cramer Decision Tree". Consistency between judgments was assessed, revealing that although the extent of inter-expert agreement was very high (≥97%), general concordance between expert and in silico calls was more modest (approximately 70%). In particular, 22 chemical groupings were identified to serve as prominent sources of disagreement, the origins of which could be attributed either to differences in subjective interpretation, to software coding anomalies, or to reforms introduced by authors of the revised rules.
ABSTRACT
Exposure levels without appreciable human health risk may be determined by dividing a point of departure on a dose-response curve (e.g., benchmark dose) by a composite adjustment factor (AF). An "effect severity" AF (ESAF) is employed in some regulatory contexts. An ESAF of 10 may be incorporated in the derivation of a health-based guidance value (HBGV) when a "severe" toxicological endpoint, such as teratogenicity, irreversible reproductive effects, neurotoxicity, or cancer was observed in the reference study. Although mutation data have been used historically for hazard identification, this endpoint is suitable for quantitative dose-response modeling and risk assessment. As part of the 8th International Workshops on Genotoxicity Testing, a sub-group of the Quantitative Analysis Work Group (WG) explored how the concept of effect severity could be applied to mutation. To approach this question, the WG reviewed the prevailing regulatory guidance on how an ESAF is incorporated into risk assessments, evaluated current knowledge of associations between germline or somatic mutation and severe disease risk, and mined available data on the fraction of human germline mutations expected to cause severe disease. Based on this review and given that mutations are irreversible and some cause severe human disease, in regulatory settings where an ESAF is used, a majority of the WG recommends applying an ESAF value between 2 and 10 when deriving a HBGV from mutation data. This recommendation may need to be revisited in the future if direct measurement of disease-causing mutations by error-corrected next generation sequencing clarifies selection of ESAF values.
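A worked example of the arithmetic described, with hypothetical numbers; the point of departure, factor values and units are illustrative only:

```python
# A health-based guidance value (HBGV) is a point of departure divided by a
# composite adjustment factor that here includes an effect severity AF.
benchmark_dose = 5.0       # mg/kg bw/day; hypothetical point of departure
af_interspecies = 10       # conventional default factor
af_intraspecies = 10       # conventional default factor
esaf = 2                   # within the 2-10 range recommended for mutation data

composite_af = af_interspecies * af_intraspecies * esaf
hbgv = benchmark_dose / composite_af
print(f"HBGV = {hbgv} mg/kg bw/day (composite AF = {composite_af})")
```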
ABSTRACT
Historical negative control data (HCD) have played an increasingly important role in interpreting the results of genotoxicity tests. In particular, Organisation for Economic Co-operation and Development (OECD) genetic toxicology test guidelines recommend comparing responses produced by exposure to test substances with the distribution of HCD as one of three criteria for evaluating and interpreting study results (referred to herein as "Criterion C"). Because of the potential for inconsistency in how HCD are acquired, maintained, described, and used to interpret genotoxicity testing results, a workgroup of the International Workshops for Genotoxicity Testing was convened to provide recommendations on this crucial topic. The workgroup used example data sets from four in vivo tests, the Pig-a gene mutation assay, the erythrocyte-based micronucleus test, the transgenic rodent gene mutation assay, and the in vivo alkaline comet assay, to illustrate how the quality of HCD can be evaluated. In addition, recommendations are offered on appropriate methods for evaluating HCD distributions. Recommendations of the workgroup are: When concurrent negative control data fulfill study acceptability criteria, they represent the most important comparator for judging whether a particular test substance induced a genotoxic effect. HCD can provide useful context for interpreting study results, but this requires supporting evidence that (i) HCD were generated appropriately, and (ii) their quality has been assessed and deemed sufficiently high for this purpose. HCD should be visualized before any study comparisons take place; graph(s) that show the degree to which HCD are stable over time are particularly useful. Qualitative and semi-quantitative assessments of HCD should also be supplemented with quantitative evaluations. Key factors in the assessment of HCD include: (i) the stability of HCD over time, and (ii) the degree to which inter-study variation explains the total variability observed. When animal-to-animal variation is the predominant source of variability, the relationship between responses in the study and an HCD-derived interval or upper-bound value (i.e., OECD Criterion C) can be used with a strong degree of confidence in contextualizing a particular study's results. When inter-study variation is the major source of variability, comparisons between study data and the HCD bounds are less useful, and consequently, less emphasis should be placed on using HCD to contextualize a particular study's results. The workgroup findings add additional support for the use of HCD for data interpretation, but relative to most current OECD test guidelines, we recommend a more flexible application that takes into consideration HCD quality. The workgroup considered only commonly used in vivo tests, but it anticipates that the same principles will apply to other genotoxicity tests, including many in vitro tests.
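A minimal sketch, on hypothetical data, of two of the checks discussed: an HCD-derived upper bound for contextualising a study response, and a decomposition of total variability into inter-study and animal-to-animal components; real guidance may prescribe different intervals and models:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hcd = pd.DataFrame({
    "study": np.repeat([f"S{i}" for i in range(10)], 5),  # 10 studies x 5 animals
    "response": rng.gamma(shape=5, scale=0.2, size=50),
})

upper_bound = hcd["response"].quantile(0.95)  # simple Criterion C-style bound

# Inter-study (between) vs animal-to-animal (within) variability.
between_var = hcd.groupby("study")["response"].mean().var(ddof=1)
within_var = hcd.groupby("study")["response"].var(ddof=1).mean()
print(f"95th percentile bound: {upper_bound:.2f}")
print(f"between-study var: {between_var:.3f}; within-study var: {within_var:.3f}")
```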
ABSTRACT
Cells within the tumour microenvironment (TME) can impact tumour development and influence treatment response. Computational approaches have been developed to deconvolve the TME from bulk RNA-seq. Using scRNA-seq profiles from breast tumours, we simulate thousands of bulk mixtures, representing a range of tumour purities and cell lineages, to compare the performance of nine TME deconvolution methods (BayesPrism, Scaden, CIBERSORTx, MuSiC, DWLS, hspe, CPM, Bisque, and EPIC). Some methods are more robust in deconvolving mixtures with high tumour purity levels. As tumour purity increases, most methods tend to mis-predict normal epithelial cells as cancer epithelial cells, a finding that is validated in two independent datasets. The breast cancer molecular subtype influences this mis-prediction. BayesPrism and DWLS have the lowest combined numbers of false positives and false negatives, and the best performance when deconvolving granular immune lineages. Our findings highlight the need for more single-cell characterisation of rarer cell types, and suggest that tumour cell compositions should be considered when deconvolving the TME.
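A minimal sketch of pseudo-bulk simulation from single-cell profiles at chosen tumour purities; the matrix shapes, labels and sampling scheme are hypothetical placeholders rather than the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 500, 1000
counts = rng.poisson(2, size=(n_cells, n_genes))  # cells x genes
is_cancer = rng.random(n_cells) < 0.5             # cell-type label per cell

def simulate_bulk(purity, n_sampled=200):
    """Sum expression over cells sampled at the given tumour purity."""
    n_tumour = int(purity * n_sampled)
    tumour = rng.choice(np.where(is_cancer)[0], n_tumour)
    normal = rng.choice(np.where(~is_cancer)[0], n_sampled - n_tumour)
    return counts[np.concatenate([tumour, normal])].sum(axis=0)

# One simulated bulk mixture per purity level; repeat to build thousands.
mixtures = np.stack([simulate_bulk(p) for p in (0.2, 0.5, 0.8)])
```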
Subjects
Animal Mammary Neoplasms, Music, Animals, Tumor Microenvironment, Cell Lineage, RNA-Seq
ABSTRACT
Quantitative risk assessments of chemicals are routinely performed using in vivo data from rodents; however, there is growing recognition that non-animal approaches can be human-relevant alternatives. There is an urgent need to build confidence in non-animal alternatives given the international support to reduce the use of animals in toxicity testing where possible. In order for scientists and risk assessors to prepare for this paradigm shift in toxicity assessment, standardization and consensus on in vitro testing strategies and data interpretation will need to be established. To address this issue, an Expert Working Group (EWG) of the 8th International Workshop on Genotoxicity Testing (IWGT) evaluated the utility of quantitative in vitro genotoxicity concentration-response data for risk assessment. The EWG first evaluated available in vitro methodologies and then examined the variability and maximal response of in vitro tests to estimate biologically relevant values for the critical effect sizes considered adverse or unacceptable. Next, the EWG reviewed the approaches and computational models employed to provide human-relevant dose context to in vitro data. Lastly, the EWG evaluated risk assessment applications for which in vitro data are ready for use and applications where further work is required. The EWG concluded that in vitro genotoxicity concentration-response data can be interpreted in a risk assessment context. However, prior to routine use in regulatory settings, further research will be required to address the remaining uncertainties and limitations.
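As one illustration of interpreting concentration-response data quantitatively, a sketch that fits a simple Hill model and solves for the concentration producing an assumed critical effect size (a benchmark-concentration-style calculation); the model form, data and effect size are illustrative assumptions, not the EWG's prescribed method:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0])     # uM, hypothetical
response = np.array([1.0, 1.05, 1.2, 1.8, 3.5, 6.0])  # fold-change vs control

def hill(c, background, vmax, ec50, n):
    return background + vmax * c**n / (ec50**n + c**n)

params, _ = curve_fit(hill, conc, response, p0=[1.0, 6.0, 1.0, 1.0])
ces = 1.5  # critical effect size: 50% increase over background (assumed)
target = params[0] * ces
bmc = brentq(lambda c: hill(c, *params) - target, 1e-6, 10.0)
print(f"Concentration at the critical effect size: {bmc:.2f} uM")
```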
ABSTRACT
This short commentary discusses Biomarkers' requirements for the reporting of statistical analyses in submitted papers. It is expected that submitters will follow the general instructions of the journal, the more detailed guidance given by the International Committee of Medical Journal Editors, the specific guidelines developed by the EQUATOR network, and those of various specialist groups. Biomarkers expects that the study design and subsequent statistical analyses are clearly reported and that the data reported can be made available for independent assessment. The journal recognizes that there is continuing debate about different approaches to statistical science. Biomarkers appreciates that the field continues to develop rapidly and encourages the use of new methodologies.
Subjects
Biomarkers/analysis, Statistical Models, Guidelines as Topic, Humans
ABSTRACT
When crystallization screening is conducted many outcomes are observed but typically the only trial recorded in the literature is the condition that yielded the crystal(s) used for subsequent diffraction studies. The initial hit that was optimized and the results of all the other trials are lost. These missing results contain information that would be useful for an improved general understanding of crystallization. This paper provides a report of a crystallization data exchange (XDX) workshop organized by several international large-scale crystallization screening laboratories to discuss how this information may be captured and utilized. A group that administers a significant fraction of the world's crystallization screening results was convened, together with chemical and structural data informaticians and computational scientists who specialize in creating and analysing large disparate data sets. The development of a crystallization ontology for the crystallization community was proposed. This paper (by the attendees of the workshop) provides the thoughts and rationale leading to this conclusion. This is brought to the attention of the wider audience of crystallographers so that they are aware of these early efforts and can contribute to the process going forward.
Subjects
X-Ray Crystallography, Crystallization, Factual Databases
ABSTRACT
Disaster victim identification (DVI) entails a protracted process of evidence collection and data matching to reconcile physical remains with victim identity. Technology is critical to DVI by enabling the linkage of physical evidence to information. However, labelling physical remains and collecting data at the scene are dominated by low-technology, paper-based practices. We ask: how can technology help us tag and track the victims of disaster? Our response to this question has two parts. First, we conducted a human-computer interaction-led investigation into the systematic factors impacting DVI tagging and tracking processes. Through interviews with Australian DVI practitioners, we explored how technologies to improve linkage might fit with prevailing work practices and preferences; practical and social considerations; and existing systems and processes. We focused on tagging and tracking activities throughout the DVI process. Using insights from these interviews and relevant literature, we identified four critical themes: protocols and training; stress and stressors; the plurality of information capture and management systems; and practicalities and constraints. Second, these findings were iteratively discussed by the authors, who have combined expertise across electronics, data science, cybersecurity, human-computer interaction and forensic pathology. We applied the themes identified in the first part of the investigation to critically review technologies that could support DVI practitioners by enhancing DVI processes that link physical evidence to information. This resulted in an overview of candidate technologies matched with consideration of their key attributes. This study recognises the importance of considering human factors that can affect technology adoption into existing practices. Consequently, we provide a searchable table (as Supplementary information) that relates technologies to the key considerations and attributes relevant to DVI practice, for readers to apply to their own context. While this research directly contributes to DVI, it also has applications to other domains in which a physical/digital linkage is required, particularly within high-stress environments with little room for error. Key points: Disaster victim identification (DVI) processes require us to link physical evidence and digital information. While technology could improve this linkage, experience shows that technological "solutions" are not always adopted in practice. Our study of the practices, preferences and contexts of Australian DVI practitioners suggests 10 critical considerations for these technologies. We review and evaluate 44 candidate technologies against these considerations and highlight the role of human factors in adoption.
ABSTRACT
We present a novel approach to the Metagenomic Geolocation Challenge based on random projection of the sample reads from each location. This approach explores the direct use of k-mer composition to characterise samples, so that we can avoid the computationally demanding step of aligning reads to available microbial reference sequences. Each variable-length read is converted into a fixed-length, k-mer-based read signature. Read signatures are then clustered into location signatures, which provide a more compact characterisation of the reads at each location. Classification is then treated as a problem in ranked retrieval of locations, where signature similarity is used as a measure of similarity in microbial composition. We evaluate our approach using the CAMDA 2020 Challenge dataset and obtain promising results based on nearest neighbour classification. The main findings of this study are that k-mer representations carry sufficient information to reveal the origin of many of the CAMDA 2020 Challenge metagenomic samples, and that this reference-free approach requires much less computation than methods that need reads to be assigned to operational taxonomic units, advantages which become clear through comparison to previously published work on the CAMDA 2019 Challenge data.
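A minimal sketch of the reference-free pipeline described, under stated assumptions: k = 5, 64-dimensional signatures, and simple averaging of read signatures per location rather than the clustering step the authors describe:

```python
import numpy as np

K, SIG_LEN = 5, 64
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}
rng = np.random.default_rng(0)
projection = rng.normal(size=(4**K, SIG_LEN))  # one fixed random projection

def kmer_counts(read):
    """Count the k-mers of a read as a 4**K-dimensional vector."""
    counts = np.zeros(4**K)
    for i in range(len(read) - K + 1):
        idx = 0
        for ch in read[i:i + K]:
            idx = idx * 4 + BASE[ch]
        counts[idx] += 1
    return counts

def read_signature(read):
    """Project a variable-length read to a fixed-length unit vector."""
    sig = projection.T @ kmer_counts(read)
    return sig / np.linalg.norm(sig)

def location_signature(reads):
    mean = np.stack([read_signature(r) for r in reads]).mean(axis=0)
    return mean / np.linalg.norm(mean)

def rank_locations(sample_sig, location_sigs):
    """Ranked retrieval: sort locations by cosine similarity."""
    sims = {loc: float(sample_sig @ sig) for loc, sig in location_sigs.items()}
    return sorted(sims, key=sims.get, reverse=True)
```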
ABSTRACT
It is often assumed that genotoxic substances will be detected more easily by using in vitro rather than in vivo genotoxicity tests, since higher concentrations, more cytotoxicity and static exposures can be achieved. However, there is a paucity of data demonstrating whether genotoxic substances are detected at lower concentrations in cell culture in vitro than can be reached in the blood of animals treated in vivo. To investigate this issue, we compared the lowest concentration required for induction of chromosomal damage in vitro (the lowest observed effective concentration, or LOEC) with the concentration of the test substance in blood at the lowest dose required for biologically relevant induction of micronuclei in vivo (the lowest observed effective dose, or LOED). In total, 83 substances were found for which the LOED could be identified or estimated, where blood concentrations and micronucleus data were available via the same route of administration in the same species, and where in vitro chromosomal damage data were available. 39.8% of substances were positive in vivo at blood concentrations that were lower than the LOEC in vitro, 22.9% were positive at similar concentrations, and 37.3% of substances were positive in vivo at higher concentrations. Distribution analysis showed a very wide scatter of more than 6 orders of magnitude across these 3 categories. When mode of action was evaluated, the distribution of clastogens and aneugens across the 3 categories was very similar. Thus, the ability to detect induction of micronuclei in bone marrow in vivo, regardless of the mechanism of micronucleus induction, is clearly not determined solely by the concentration of test substance that induced chromosomal damage in vitro.
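A minimal sketch of the categorisation performed, with hypothetical values; the "similar concentration" window used here (within 3-fold) is an assumed criterion, not necessarily the one applied in the study:

```python
import pandas as pd

df = pd.DataFrame({
    "substance": ["A", "B", "C"],
    "blood_conc_at_loed": [0.5, 12.0, 3.0],  # e.g. ug/mL at the in vivo LOED
    "loec_in_vitro": [5.0, 10.0, 0.2],       # same units
})
ratio = df["blood_conc_at_loed"] / df["loec_in_vitro"]
df["category"] = pd.cut(
    ratio, bins=[0, 1 / 3, 3, float("inf")],
    labels=["in vivo lower", "similar", "in vivo higher"],
)
print(df)
```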