Results 1 - 20 of 29
1.
Interdiscip Sci ; 16(1): 73-90, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37776475

ABSTRACT

In cancer treatment, adaptive therapy holds promise for delaying the onset of recurrence by regulating the competition between drug-sensitive and drug-resistant cells. Adaptive therapy has been studied in well-mixed models, which assume free mixing of all cells, and in spatial models that consider the interactions of single cells with their immediate neighbours. Neither model reflects the spatial structure of glandular tumours, where intra-gland cellular interaction is high while inter-gland interaction is limited. Here, we use mathematical modelling to study the effects of adaptive therapy on glandular tumours that expand by either glandular fission or invasive growth. A two-dimensional, lattice-based model of sites containing sensitive and resistant cells within individual glands is developed to study the evolution of glandular tumour cells under continuous and adaptive therapies. We found that although both growth models benefit from adaptive therapy's ability to prevent recurrence, invasive growth benefits more than fission growth. This difference arises from the migration of daughter cells into neighboring glands, which is absent in fission but present in invasive growth. Migration results in greater mixing of cells, enhancing the competition induced by adaptive therapy. By varying the initial spatial spread and location of the resistant cells within the tumour, we found that modifying the conditions within the glands containing resistant cells affects both fission and invasive growth, whereas modifying the conditions surrounding these glands affects invasive growth only. Our work reveals the interplay between growth mechanism and tumour topology in modulating the effectiveness of cancer therapy.
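The competition mechanism described above can be sketched with a deliberately simplified, non-spatial toy model. The growth and kill rates, the initial fractions, and the burden-based pause/resume rule below are illustrative assumptions, not the paper's lattice model:

```python
# Toy comparison of continuous vs adaptive therapy with competing
# sensitive (S) and resistant (R) populations; therapy kills S only,
# and both populations compete for a shared carrying capacity K.
def simulate(adaptive, steps=600, dt=0.1):
    S, R, K = 0.74, 0.01, 1.0          # initial fractions and carrying capacity (assumed)
    burden0, on = S + R, True
    for _ in range(steps):
        burden = S + R
        if adaptive:                    # pause therapy at half the initial burden,
            if burden < 0.5 * burden0:  # resume once the burden recovers
                on = False
            elif burden >= burden0:
                on = True
        growth = 1.0 - burden / K       # logistic competition term
        S += dt * S * (0.03 * growth - (0.06 if on else 0.0))  # drug kills S only
        R += dt * R * 0.03 * growth
    return R                            # final resistant fraction

# Retained sensitive cells keep competing with resistant cells under
# adaptive therapy, so resistant outgrowth is slower:
resistant_adaptive = simulate(adaptive=True)
resistant_continuous = simulate(adaptive=False)
```

Under these assumptions the adaptive schedule keeps the total burden, and hence the competition, high, which is the qualitative effect the abstract describes.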


Subjects
Neoplasms, Humans, Theoretical Models
2.
Adv Clin Chem ; 115: 175-203, 2023.
Article in English | MEDLINE | ID: mdl-37673520

ABSTRACT

Delta check is an electronic error detection tool. It compares the difference between sequential results within a patient against a predefined limit; when the limit is exceeded, the delta check rule is considered triggered, and the patient's results should be withheld for review and troubleshooting before release to the clinical team for patient management. Delta check was initially developed to detect wrong-blood-in-tube (sample misidentification) errors and is now applied to detect errors more broadly across the total testing process. Recent advancements in the theoretical understanding of delta check have allowed more precise application of this tool to achieve the desired clinical performance and operational setup. In this chapter, we review pre-implementation considerations, the foundational concepts of delta check, the process of setting up key delta check parameters, performance verification, and troubleshooting of a delta check flag.
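As a minimal sketch, the comparison described above is a one-line test against a limit. The potassium values and the 1.5 mmol/L limit below are invented for illustration, not recommended limits:

```python
# Minimal delta check sketch; limits and values here are illustrative only.
def delta_check(previous, current, limit, relative=False):
    """Return True when the difference between sequential results exceeds the limit."""
    delta = current - previous
    if relative:
        delta = 100.0 * delta / previous   # percent change from the previous result
    return abs(delta) > limit

# A potassium result jumping from 4.0 to 6.5 mmol/L against a 1.5 mmol/L
# absolute limit would be flagged and withheld for review:
flagged = delta_check(4.0, 6.5, limit=1.5)
```

A relative-difference rule is obtained by passing `relative=True`, with the limit expressed as a percentage.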

3.
Crit Rev Clin Lab Sci ; 60(7): 502-517, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37194676

ABSTRACT

Quality control practices in the modern laboratory are the result of significant advances over the many years of the profession. Conventional internal quality control has undergone a philosophical shift: from a focus solely on the statistical probability of error identification, to the capability of the measurement procedure (e.g. sigma metrics), and most recently to the risk of harm to the patient (the probability of patient results being affected by an error, or the number of patient results with unacceptable analytical quality). Nonetheless, conventional internal quality control strategies still face significant limitations that cannot be overcome by statistical advances, such as the lack of proven commutability of control material with patient samples, the frequency of episodic testing, and operational and financial costs. In contrast, patient-based quality control has seen significant developments, including algorithms that improve the detection of specific errors, parameter optimization approaches, systematic validation protocols, and advanced algorithms that require very low numbers of patient results while retaining sensitive error detection. Patient-based quality control will continue to improve with the development of new algorithms that reduce biological noise and improve analytical error detection. It provides continuous and commutable information about the measurement procedure that cannot easily be replicated by conventional internal quality control. Most importantly, its use helps laboratories appreciate the clinical impact of the results they produce, bringing them closer to the patients. Laboratories are encouraged to implement patient-based quality control processes to overcome the limitations of conventional internal quality control practices.
Regulatory changes recognizing the capability of patient-based quality approaches, as well as advances in laboratory informatics, are required for this tool to be adopted more widely.

4.
Ann Lab Med ; 43(5): 408-417, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37080741

ABSTRACT

Functional reference limits describe key changes in the physiological relationship between a pair of physiologically related components. Statistically, this can be represented by a significant change in the curvature of a mathematical function or curve (e.g., an observed plateau); the point at which the statistical relationship changes significantly is the point of curvature inflection and can be modelled mathematically from the relationship between the interrelated biomarkers. Conceptually, functional reference limits sit between reference intervals, which describe the statistical boundaries of a single biomarker within the reference population, and clinical decision limits, which are often linked to the risk of morbidity or mortality and set as thresholds. Functional reference limits provide important physiological and pathophysiological insights that can aid laboratory result interpretation. Laboratory professionals are in a unique position to harness data from laboratory information systems to derive clinically relevant values. Increasing research on, and reporting of, functional reference limits in the literature will enhance their contribution to laboratory medicine and widen the evidence base used for clinical decision limits, which is currently informed almost exclusively by clinical trials. Their inclusion in laboratory reports will enhance the contribution of laboratory professionals to clinical care beyond the statistical boundaries of a healthy reference population and pave the way for their consideration in shaping clinical decision limits. This review provides an overview of the concepts related to functional reference limits, clinical examples of their use, and the impetus to include them in laboratory reports.
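The plateau-finding idea can be sketched numerically: sample the relationship between the two markers and take the point where the local slope falls below a fraction of its initial value. The saturating curve below is synthetic, and the ferritin/haemoglobin pairing is only a commonly cited example, not data from the review:

```python
import math

# Synthetic saturating relationship between two markers
# (e.g. ferritin on x, a haemoglobin-like response on y; values invented).
x = list(range(5, 105, 5))
y = [15.0 * (1 - math.exp(-xi / 20.0)) for xi in x]

# Numerical slope between successive points; call the functional limit the
# x-value where the slope drops below 10% of its initial value (a plateau).
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(zip(x, y), zip(x[1:], y[1:]))]
limit_x = next(x2 for (x2, s) in zip(x[1:], slopes) if s < 0.1 * slopes[0])
```

Real derivations would fit a smooth curve and locate the inflection statistically; this sketch only illustrates the "change in curvature" concept.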


Subjects
Clinical Laboratory Techniques, Laboratories, Humans, Reference Values, Biomarkers
5.
Ann Lab Med ; 43(1): 5-18, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36045052

ABSTRACT

Background: Calibration is a critical component for the reliability, accuracy, and precision of mass spectrometry measurements. Optimal practice in the construction, evaluation, and implementation of a new calibration curve is often underappreciated. This systematic review examined how calibration practices are applied to liquid chromatography-tandem mass spectrometry measurement procedures. Methods: The electronic database PubMed was searched from the date of database inception to April 1, 2022. The search terms used were "calibration," "mass spectrometry," and "regression." Twenty-one articles were identified and included in this review, following evaluation of the titles, abstracts, full text, and reference lists of the search results. Results: The use of matrix-matched calibrators and stable isotope-labeled internal standards helps to mitigate the impact of matrix effects. A higher number of calibration standards or replicate measurements improves the mapping of the detector response and hence the accuracy and precision of the regression model. Constructing a calibration curve with each analytical batch recharacterizes the instrument detector but does not reduce the actual variability. The analytical response and measurand concentrations should be considered when constructing a calibration curve, along with subsequent use of quality controls to confirm assay performance. It is important to assess the linearity of the calibration curve by using actual experimental data and appropriate statistics. The heteroscedasticity of the calibration data should be investigated, and appropriate weighting should be applied during regression modeling. Conclusions: This review provides an outline and guidance for optimal calibration practices in clinical mass spectrometry laboratories.
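The weighting recommendation can be illustrated with a small weighted least squares fit. The calibrator concentrations, responses, and the common 1/x² weighting below are assumptions for illustration, not values from the review:

```python
# Sketch of 1/x^2-weighted linear calibration, a common weighting choice
# for heteroscedastic LC-MS/MS data (all numbers invented).
def weighted_linear_fit(x, y, w):
    """Weighted least squares for y = a + b*x; returns (intercept, slope)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    a = ybar - b * xbar
    return a, b

conc = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]      # calibrator concentrations
resp = [0.11, 0.20, 0.52, 0.98, 5.10, 9.90]    # peak-area ratios (made up)
weights = [1.0 / c ** 2 for c in conc]         # down-weight high-concentration points
intercept, slope = weighted_linear_fit(conc, resp, weights)
```

The 1/x² weights give the low-concentration calibrators, whose absolute scatter is smallest, the greatest influence on the fitted line, which is the point of weighting under proportional variance.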


Subjects
Calibration, Chromatography, Liquid/methods, Humans, Mass Spectrometry, Reference Standards, Reproducibility of Results
6.
Clin Chem Lab Med ; 60(8): 1164-1174, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35647783

ABSTRACT

OBJECTIVES: One approach to assessing reference material (RM) commutability and agreement with clinical samples (CS) is to use ordinary least squares or Deming regression with prediction intervals. This approach assumes constant variance, which may not be fulfilled by the measurement procedures. Flexible regression frameworks that relax this assumption, such as quantile regression or generalized additive models for location, scale, and shape (GAMLSS), have recently been implemented and can model changing variance with measurand concentration. METHODS: We simulated four imprecision profiles, ranging from simple constant variance to complex mixtures of constant and proportional variance, and examined the effects on commutability assessment outcomes with the above four regression frameworks, varying the number of CS, the data transformations, and the RM location relative to the CS concentrations. Regression framework performance was determined by the proportion of false rejections of commutability from prediction intervals or centiles across relative RM concentrations and was compared with the expected nominal probability coverage. RESULTS: For simple variance profiles (constant or proportional variance), Deming regression, without or with logarithmic transformation respectively, is the most efficient approach. For mixed variance profiles, GAMLSS with smoothing techniques is more appropriate, with consideration given to increasing the number of CS and the relative location of the RM. Where analytical coefficient of variation profiles are U-shaped, even the more flexible regression frameworks may not be entirely suitable. CONCLUSIONS: In commutability assessments, the variance profiles of the measurement procedures and the location of the RM relative to the clinical sample concentrations significantly influence the false rejection rate of commutability.


Subjects
Reference Standards, Humans
7.
Small Methods ; 6(8): e2200185, 2022 08.
Article in English | MEDLINE | ID: mdl-35652511

ABSTRACT

During the past decade, breakthroughs in sequencing technology have provided the basis for studies of the myriad ways in which microbial communities in and on the human body influence human health and disease. In almost every medical specialty, there is now growing interest in accurate and quantitative profiling of the microbiota for diagnostic and therapeutic applications. However, the current next-generation sequencing approach to microbiome profiling is costly, requires laborious library preparation, and is challenging to scale up for routine diagnostics. Split, Amplify, and Melt analysis of BActeria-community (SAMBA), a novel multicolor digital melting polymerase chain reaction platform with unprecedented multiplexing capability, is presented, and its capability to distinguish and quantify 16 bacterial species in mixtures is demonstrated. SAMBA is then applied to measure the composition of bacteria in the gut microbiome to identify microbial dysbiosis related to colorectal cancer. This rapid, low-cost, and high-throughput approach will enable the implementation of microbiome diagnostics in clinical laboratories and routine medical practice.


Subjects
Microbiota, Bacteria/genetics, Dysbiosis, High-Throughput Nucleotide Sequencing, Humans, Microbiota/genetics, Polymerase Chain Reaction
8.
Ann Lab Med ; 42(5): 597-601, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35470278

ABSTRACT

This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution; the historical data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating the percentile values of the overall distribution of a measurand; the 5th and 95th percentile values of the laboratory data were calculated to achieve 90% clinical specificity. Next, a predefined tolerable total error, adopted from the Royal College of Pathologists of Australasia Quality Assurance Program, was applied to the extracted data before Box-Cox transformation. Using the standard normal distribution, the clinical sensitivity can be derived from the probability that the Z-value lies to the right of the autoverification limit (one-tailed), multiplied by two for a two-tailed probability. The clinical sensitivity showed an inverse relationship with between-subject biological variation. With this approach, a laboratory can set and assess the clinical performance of autoverification rules that conform to its desired risk profile.
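The specificity derivation can be sketched directly: with the central 90% of the historical distribution used as autoverification limits, about 10% of error-free results are flagged. The log-normal data below are synthetic stand-ins for real, Box-Cox-transformable results:

```python
import math
import random
import statistics

# Synthetic right-skewed "historical" results (illustrative, not real data).
random.seed(7)
results = [math.exp(random.gauss(1.4, 0.2)) for _ in range(10000)]

# 5th and 95th percentiles as autoverification limits.
cuts = statistics.quantiles(results, n=20)   # 5th, 10th, ..., 95th percentiles
lower, upper = cuts[0], cuts[-1]

# Fraction of error-free results passing the limits, i.e. clinical specificity.
specificity = sum(lower <= r <= upper for r in results) / len(results)
```

By construction the specificity lands at about 0.90; tighter limits trade specificity for sensitivity to error.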


Subjects
Laboratories, Humans
9.
Clin Biochem ; 105-106: 57-63, 2022.
Article in English | MEDLINE | ID: mdl-35489473

ABSTRACT

BACKGROUND: Between-subject biological variation (CVg) is an important parameter in several aspects of laboratory practice, including the setting of analytical performance specifications, delta checks, and the calculation of the index of individuality. Using simulations, we compare the performance of two indirect (data mining) approaches for deriving CVg. METHODS: The expected mean squares (EMS) method was compared against that proposed by Harris and Fraser. In numerical simulations, four parameters were varied: d, the percentage difference in the mean between the non-pathological and pathological populations; CVi, the within-subject coefficient of variation of the non-pathological distribution; f, the fraction of pathological values; and e, the relative increase in CVi of the pathological distribution. A total of 320 conditions were examined for their impact on the fractional error of the recovered CVg relative to the true value. RESULTS: The EMS and Harris and Fraser's approaches yielded similar performance: 158 and 157 conditions were within ±0.20 fractional error of the true underlying CVg for the normal and lognormal distributions, respectively. Both methods performed better using the calculated CVi rather than the actual ('presumptive') CVi. The number of conditions within ±0.20 fractional error of the true underlying CVg did not differ significantly between the normal and lognormal distributions. The estimation of CVg improved with decreasing values of f, d, and the ratio CVi/CVg. DISCUSSION: The two statistical approaches included in this study showed reliable performance under the simulation conditions examined.
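The EMS idea can be sketched for the simple case of two results per subject: the variance of per-subject means overestimates the between-subject variance by half the within-subject variance, so that term is subtracted. All parameters below are simulation assumptions, not values from the study:

```python
import random
import statistics

random.seed(42)
CVI, CVG, MEAN = 0.05, 0.15, 100.0           # assumed true within/between-subject CVs

# Simulate two results per subject around each subject's homeostatic set point.
pairs = []
for _ in range(5000):
    setpoint = random.gauss(MEAN, CVG * MEAN)
    pairs.append((random.gauss(setpoint, CVI * MEAN),
                  random.gauss(setpoint, CVI * MEAN)))

var_within = statistics.mean((a - b) ** 2 / 2 for a, b in pairs)   # within-subject variance
var_means = statistics.variance([(a + b) / 2 for a, b in pairs])   # variance of subject means
var_between = var_means - var_within / 2                           # EMS correction (n = 2)
cvg_est = var_between ** 0.5 / MEAN                                # recovered CVg
```

With a clean, purely non-pathological simulation the recovered CVg sits close to the true 0.15; the study's point is how this degrades as pathological results contaminate the data.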


Subjects
Biological Variation, Population, Laboratories, Computer Simulation, Data Mining, Humans, Reference Values
11.
Clin Chem Lab Med ; 60(4): 636-644, 2022 03 28.
Article in English | MEDLINE | ID: mdl-35107229

ABSTRACT

OBJECTIVES: Within-subject biological variation (CVi) is fundamental to laboratory medicine, from the interpretation of serial results and the partitioning of reference intervals to the setting of analytical performance specifications. Four indirect (data mining) approaches to the determination of CVi were directly compared. METHODS: Paired serial laboratory results for 5,000 patients were simulated using four parameters: d, the percentage difference in the means between the pathological and non-pathological populations; CVi, the within-subject coefficient of variation for non-pathological values; f, the fraction of pathological values; and e, the relative increase in CVi of the pathological distribution. These parameters resulted in a total of 128 permutations. The performance of the expected mean squares (EMS) method, the median method, a result ratio method with Tukey's outlier exclusion, and a modified result ratio method with Tukey's outlier exclusion were compared. RESULTS: Of the 128 permutations examined, the EMS method recovered 101/128 within ±0.20 fractional error of the 'true' simulated CVi, followed by the result ratio method with Tukey's exclusion with 78/128. The median method grossly under-estimated CVi. The modified result ratio method with Tukey's rule performed best overall, with 114/128 permutations within allowable error. CONCLUSIONS: This simulation study demonstrates that, with careful selection of the statistical approach, the influence of outliers from pathological populations can be minimised, and CVi values close to the 'true' underlying non-pathological value can be recovered. This finding provides further evidence for the use of routine laboratory databases in the derivation of biological variation components.


Subjects
Data Mining, Research Design, Computer Simulation, Humans, Laboratories, Reference Values
12.
Clin Biochem ; 103: 16-24, 2022 May.
Article in English | MEDLINE | ID: mdl-35181292

ABSTRACT

BACKGROUND: Indirect reference interval and biological variation studies rely heavily on statistical methods to separate pathological and non-pathological subpopulations within the same dataset. In recognition of this, we compared the performance of eight univariate statistical methods for identifying and excluding values originating from pathological subpopulations. METHODS: The eight approaches examined were: Tukey's rule with and without Box-Cox transformation; median absolute deviation; double median absolute deviation; Gaussian mixture models; van der Loo (Vdl) methods 1 and 2; and the Kosmic approach. Four scenarios, including lognormal distributions, were examined, varying the number, central location, spread, and proportion of the pathological populations, for a total of 256 simulated mixed populations. A performance criterion of ±0.05 fractional error from the true underlying lower and upper reference limits was chosen. RESULTS: Overall, the Kosmic method was a standout, meeting the acceptable error in the highest number of scenarios, followed by Vdl method 1 and Tukey's rule. Kosmic and Vdl method 1 appear to better discriminate the non-pathological reference population in the case of log-normally distributed data. When the proportion and spread of the pathological subpopulations were high, the performance of statistical exclusion deteriorated considerably. DISCUSSION: It is important that laboratories use a priori defined clinical criteria to minimise the proportion of pathological results in a dataset prior to analysis. The curated dataset should then be carefully examined so that the appropriate statistical method can be applied.
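Tukey's rule, one of the eight methods compared, can be sketched with the standard 1.5 × IQR fences; the data below are invented:

```python
import statistics

def tukey_filter(values, k=1.5):
    """Exclude values outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)   # quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# One grossly elevated (pathological-looking) value among plausible results:
data = [4.1, 4.3, 3.9, 4.6, 4.0, 4.2, 9.8, 4.4]
clean = tukey_filter(data)   # the 9.8 is excluded
```

As the abstract notes, such univariate fences break down when the pathological subpopulation is large or broad enough to pull the quartiles themselves.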


Subjects
Laboratories, Research Design, Humans, Reference Values
13.
Clin Biochem ; 98: 63-69, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34534518

ABSTRACT

INTRODUCTION: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules, or 'Westgard rules', including the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for the detection of systematic errors. In this proof-of-concept study, we directly compare the performance of three moving average algorithms with Westgard rules for the detection of systematic error. METHODS: In this simulation study, 'error-free' IQC data (the control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA), and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Systematic errors were then introduced into the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed). RESULTS: From the power function graphs, all three moving average algorithms showed a better probability of error detection than Westgard rules, and also had lower ANEed. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (WMA and EWMA). CONCLUSION: Application of an SMA algorithm to IQC data improves systematic error detection compared with Westgard rules and can simplify laboratories' IQC strategies.
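One of the compared algorithms, the EWMA, can be sketched as follows. The mean, SD, smoothing constant, and the injected 2 SD shift are illustrative assumptions; note this sketch uses the conventional EWMA control limits, narrowed by sqrt(lambda / (2 - lambda)), whereas the study applied ±3 SD limits directly:

```python
import random

random.seed(1)
MEAN, SD, LAM = 100.0, 2.0, 0.2   # assumed in-control mean/SD and smoothing constant

# 30 in-control IQC results, then 30 results with a +2 SD systematic shift.
iqc = ([random.gauss(MEAN, SD) for _ in range(30)]
       + [random.gauss(MEAN + 2 * SD, SD) for _ in range(30)])

# Conventional asymptotic EWMA limits: +/- 3 * SD * sqrt(lam / (2 - lam)).
limit = 3 * SD * (LAM / (2 - LAM)) ** 0.5

ewma, flags = MEAN, []
for i, x in enumerate(iqc):
    ewma = LAM * x + (1 - LAM) * ewma   # exponentially weighted history
    if abs(ewma - MEAN) > limit:
        flags.append(i)

shift_detected = any(i >= 30 for i in flags)
```

Because the EWMA averages out random noise, a sustained 2 SD shift drives the statistic past its limit within a few points of the shift, which is the sensitivity advantage over single-point rules.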


Subjects
Algorithms, Laboratories, Theoretical Models, Programming Languages, Quality Control, Humans
14.
Crit Rev Clin Lab Sci ; 58(1): 49-59, 2021 01.
Article in English | MEDLINE | ID: mdl-32795201

ABSTRACT

Delta checks are a post-analytical verification tool that compares the difference between sequential laboratory results belonging to the same patient against a predefined limit. This unique quality tool highlights a potential error at the individual patient level. A difference in sequential laboratory results that exceeds the predefined limit is considered likely to contain an error requiring further investigation, which can be time and resource intensive and may delay the provision of the result to the healthcare provider or entail recollection of the patient sample. Delta checks have been used primarily to detect sample misidentification (sample mix-up, wrong blood in tube), and recent advancements in laboratory medicine, including the adoption of protocolized procedures, information technology, and automation in the total testing process, have significantly reduced the prevalence of such errors. As such, delta check rules need to be selected carefully to balance the clinical risk of these errors against the need to maintain operational efficiency. Historically, delta check rules have been set by professional opinion based on reference change values (biological variation) or the published literature; rules implemented in this manner may not inform laboratory practitioners of their real-world performance. This review discusses several evidence-based approaches to the optimal setting of delta check rules that directly inform the laboratory practitioner of the error detection capabilities of the selected rules. Subsequent verification of the workflow for the selected delta check rules is also discussed. This review is intended to provide practical assistance to laboratories in setting evidence-based delta check rules that best suit their local operational and clinical needs.


Subjects
Laboratories, Humans, Quality Control, Reference Values
17.
Clin Biochem ; 80: 42-47, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32247779

ABSTRACT

OBJECTIVES: The performance of delta check rules has been considered dependent on the biological variation characteristics of the analyte of interest, but the assumed relationships have not been formally studied. The mathematical relationship between biological variation and delta check rules is explored in this study. DESIGN AND METHODS: From the mathematical model for the absolute difference delta check, the thresholds for specificity and sensitivity are observed to be normalized differently. For specificity, the threshold is normalized by the within-subject biological variation (expressed as a coefficient of variation, CVi), whereas for sensitivity the threshold is normalized by the between-subject biological variation (expressed as a coefficient of variation, CVg). This highlights the different roles the two biological variations play in shaping the absolute difference distributions for correct and switched patient samples. Analogously, for relative difference delta checks, the expressions for specificity and sensitivity are scaled by CVi and CVg, respectively; however, the expressions are independent of µg (the population mean). RESULTS: The mathematical model was compared with empirical (historical) laboratory data from patients for both absolute and relative difference delta checks. In general, the specificity obtained from the historical laboratory data was lower than the model-predicted values, whereas good correspondence was obtained between the experimental and predicted sensitivity. CONCLUSIONS: Differences in within-subject biological variation between patients may contribute to the observed discrepancy between predicted and empirical delta check performance.
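For the correct-sample (specificity) arm, the normalization can be sketched as follows: ignoring analytical variation, the difference between two results from the same patient is approximately Normal with SD = sqrt(2) * CVi * mean, so specificity follows from the normal CDF. The threshold, CVi, and mean below are illustrative, not the paper's model parameters:

```python
import math

def delta_check_specificity(threshold, cvi, mean):
    """P(|difference| <= threshold) for two results from the same patient,
    assuming the within-person difference ~ Normal(0, 2 * (cvi * mean)^2)."""
    sd_diff = math.sqrt(2.0) * cvi * mean          # SD of the within-person difference
    z = threshold / sd_diff
    return math.erf(z / math.sqrt(2.0))            # standard normal two-sided CDF

# Example: absolute-difference threshold of 10 units, CVi = 5%, mean = 100:
spec = delta_check_specificity(threshold=10.0, cvi=0.05, mean=100.0)
```

Doubling CVi halves the normalized threshold z, illustrating why the same absolute limit yields very different specificities across analytes.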


Subjects
Biological Variation, Population, Clinical Chemistry Tests, Quality Control, Humans, Laboratories, Reference Values
18.
Ann Clin Biochem ; 57(3): 215-222, 2020 05.
Article in English | MEDLINE | ID: mdl-31955587

ABSTRACT

OBJECTIVES: The interpretation of delta check rules in a panel of tests should differ from that at the single-analyte level, as the number of hypothesis tests conducted (i.e. the number of delta check rules) is greater and needs to be taken into account. METHODS: De-identified paediatric laboratory results were extracted, and the first two serial results for each patient were used for analysis. Analytes were grouped into four common laboratory test panels: renal, liver, bone and full blood count. The sensitivities and specificities of delta check limits applied as discrete panel tests were assessed by random permutation of the original dataset to simulate a wrong blood in tube situation. RESULTS: Generally, as the number of analytes in a panel increases, delta check performance deteriorates considerably owing to the increased number of false positives, i.e. the increased number of hypothesis tests performed. To reduce the high false-positive rate, patient results may be rejected from autovalidation only if the number of analytes failing their delta check limits exceeds a certain threshold of the total number of analytes in the panel (N). Our study found that the N2 rule for panel results had a specificity >90% and a sensitivity ranging from 25% to 45% across the four common laboratory panels, although this did not approach the performance of some analytes considered in isolation. CONCLUSIONS: The simple N2 rule reduces the false-positive rate and minimizes unnecessary, resource-intensive investigations of potentially erroneous results.
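A panel-level rule of this kind can be sketched as counting individual delta check failures and flagging only above a threshold. The analytes, limits, and threshold below are invented for illustration and are not the paper's derived cut-off:

```python
# Panel-level delta check sketch: hold the panel for review only when the
# number of analytes failing their individual delta checks exceeds a threshold.
def panel_delta_flag(previous, current, limits, max_failures):
    failures = sum(abs(c - p) > lim
                   for p, c, lim in zip(previous, current, limits))
    return failures > max_failures

prev = [140.0, 4.2, 100.0, 24.0]   # e.g. Na, K, Cl, HCO3 from a renal panel (invented)
curr = [118.0, 6.9, 85.0, 12.0]    # grossly different: suspect wrong blood in tube
lims = [7.0, 1.0, 7.0, 6.0]        # per-analyte absolute limits (invented)
flag = panel_delta_flag(prev, curr, lims, max_failures=1)
```

Requiring multiple concordant failures suppresses the false positives that accumulate when many single-analyte rules are tested at once, at some cost in sensitivity.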


Subjects
Clinical Chemistry Tests, Data Accuracy, Quality Control, Child, Humans, Laboratories, Hospital, Pediatrics, Specimen Handling
19.
Am J Clin Pathol ; 153(5): 605-612, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31889173

ABSTRACT

OBJECTIVES: Preanalytical processes for pediatric patients are generally manual and associated with a higher risk of error. Optimized delta check rules for detecting misidentified pediatric samples are examined. METHODS: Relative difference and absolute difference delta check limits were applied to original and reshuffled (to simulate sample mislabeling/mix-up) paired deidentified pediatric results for 57 laboratory tests. The sensitivity, specificity, and accuracy of a range of delta check limits were determined, and the limit associated with the highest accuracy was considered optimal. RESULTS: In general, the delta check limits had poor to moderate accuracy (0.50-0.81) in detecting misidentified patient samples. The sensitivity (rule out misidentified sample) deteriorated quickly at increasing delta check limits, while the specificity (rule in misidentified sample) of the delta check limits was also low. The performance of the relative difference and absolute difference delta check rules was similar. CONCLUSIONS: Our findings showed poor delta check performance in the pediatric population. The high false-positive flag rate may lead to wasteful, resource-intensive investigations and delays in result reporting. In addition, the optimized pediatric delta check limits correlated strongly with within-subject biologic variation, whereas delta check accuracy correlated poorly with the index of individuality.


Subjects
Pathology/standards, Quality Control, Specimen Handling/standards, Child, Humans
20.
Clin Chem Lab Med ; 58(3): 384-389, 2020 02 25.
Article in English | MEDLINE | ID: mdl-31734649

ABSTRACT

Background: The delta check time interval limit is the maximum time window within which two sequential results from a patient will be evaluated by the delta check rule. The impact of the time interval on delta check performance is not well studied. Methods: De-identified historical laboratory data were extracted from the laboratory information system and divided into children (≤18 years) and adults (>21 years). The relative and absolute differences of the original pair of results from each patient were compared against the delta check limits associated with 90% specificity. The data were then randomly reshuffled to simulate a switched (misidentified) sample scenario and divided into 1-day, 3-day, 7-day, 14-day, 1-month, 3-month, 6-month and 1-year time interval bins. The true-positive and false-positive rates at the different intervals were examined. Results: Overall, 24 biochemical and 20 haematological tests were analysed. For nearly all analytes, there was no statistical evidence of a difference in the true- or false-positive rates of the delta check rules at the different time intervals compared with the overall data. The only exceptions were mean corpuscular volume (for both relative- and absolute-difference delta checks) and mean corpuscular haemoglobin (absolute-difference delta check only) in the children, for which the false-positive rates became significantly lower at the 1-year interval. Conclusions: This study showed that there is no optimal delta check time interval, filling an important evidence gap for future guidance development.
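The binning step can be sketched as mapping the days between collections to the study's interval labels; the cut-points below (e.g. 30 days for one month) are illustrative approximations:

```python
# Assign a result pair to a time-interval bin by the days between collections.
def interval_bin(days):
    edges = [1, 3, 7, 14, 30, 90, 180, 365]       # approximate cut-points in days
    labels = ["1-day", "3-day", "7-day", "14-day",
              "1-month", "3-month", "6-month", "1-year"]
    for edge, label in zip(edges, labels):
        if days <= edge:
            return label
    return ">1-year"
```

Delta check true- and false-positive rates can then be tallied per bin and compared against the rates for the pooled data, as the study does.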


Subjects
Data Analysis, Research Design, Clinical Laboratory Techniques, Humans, Time Factors