Results 1 - 20 of 30
1.
N Engl J Med ; 390(18): 1663-1676, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38657265

ABSTRACT

BACKGROUND: Exagamglogene autotemcel (exa-cel) is a nonviral cell therapy designed to reactivate fetal hemoglobin synthesis through ex vivo clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 gene editing of the erythroid-specific enhancer region of BCL11A in autologous CD34+ hematopoietic stem and progenitor cells (HSPCs). METHODS: We conducted an open-label, single-group, phase 3 study of exa-cel in patients 12 to 35 years of age with transfusion-dependent β-thalassemia and a β0/β0, β0/β0-like, or non-β0/β0-like genotype. CD34+ HSPCs were edited by means of CRISPR-Cas9 with a guide mRNA. Before the exa-cel infusion, patients underwent myeloablative conditioning with pharmacokinetically dose-adjusted busulfan. The primary end point was transfusion independence, defined as a weighted average hemoglobin level of 9 g per deciliter or higher without red-cell transfusion for at least 12 consecutive months. Total and fetal hemoglobin concentrations and safety were also assessed. RESULTS: A total of 52 patients with transfusion-dependent β-thalassemia received exa-cel and were included in this prespecified interim analysis; the median follow-up was 20.4 months (range, 2.1 to 48.1). Neutrophils and platelets engrafted in each patient. Among the 35 patients with sufficient follow-up data for evaluation, transfusion independence occurred in 32 (91%; 95% confidence interval, 77 to 98; P<0.001 against the null hypothesis of a 50% response). During transfusion independence, the mean total hemoglobin level was 13.1 g per deciliter and the mean fetal hemoglobin level was 11.9 g per deciliter, and fetal hemoglobin had a pancellular distribution (≥94% of red cells). The safety profile of exa-cel was generally consistent with that of myeloablative busulfan conditioning and autologous HSPC transplantation. No deaths or cancers occurred.
CONCLUSIONS: Treatment with exa-cel, preceded by myeloablation, resulted in transfusion independence in 91% of patients with transfusion-dependent β-thalassemia. (Supported by Vertex Pharmaceuticals and CRISPR Therapeutics; CLIMB THAL-111 ClinicalTrials.gov number, NCT03655678.)
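The primary-analysis result (32 of 35 evaluable patients, P<0.001 against a 50% null response rate) corresponds to a one-sided exact binomial test, which can be reproduced in a few lines of Python; this is an illustrative sketch, not the trial's prespecified analysis code:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided exact binomial p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 32 of 35 evaluable patients achieved transfusion independence; null rate 50%
p_value = binom_sf(32, 35, 0.5)
print(f"one-sided exact p = {p_value:.2e}")  # ≈ 2.09e-07, well below 0.001
```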


Subject(s)
Fetal Hemoglobin , Gene Editing , Hematopoietic Stem Cell Transplantation , beta-Thalassemia , Adolescent , Adult , Child , Female , Humans , Male , Young Adult , Antigens, CD34 , beta-Thalassemia/therapy , beta-Thalassemia/genetics , Blood Transfusion , Busulfan/therapeutic use , CRISPR-Cas Systems , Fetal Hemoglobin/biosynthesis , Fetal Hemoglobin/genetics , Gene Editing/methods , Hematopoietic Stem Cell Transplantation/methods , Hematopoietic Stem Cells , Repressor Proteins/genetics , Transplantation Conditioning , Transplantation, Autologous , Myeloablative Agonists/therapeutic use , North America , Europe
2.
Stat Med ; 40(23): 4947-4960, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34111902

ABSTRACT

Response adaptive randomization (RAR) is appealing from methodological, ethical, and pragmatic perspectives, in that subjects are more likely to be randomized to better-performing treatment groups based on accumulating data. However, applications of RAR in confirmatory drug clinical trials with multiple active arms are limited, largely because of its complexity and the lack of control over randomization ratios to the different treatment groups. To address these issues, we propose a Response Adaptive Block Randomization (RABR) design that allows arbitrary prespecified randomization ratios for the control and high-performing groups to meet clinical trial objectives. We show the validity of the conventional unweighted test in RABR, with the type I error rate controlled on the basis of the weighted combination test for sample size adaptive designs, invoking no large-sample approximation. The advantages of the proposed RABR in robustly reaching the target final sample size to meet regulatory requirements and in increasing statistical power, as compared with the popular Doubly Adaptive Biased Coin Design, are demonstrated by statistical simulations and a practical clinical trial design example.
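The abstract does not spell out the allocation algorithm; the following is a hypothetical Python sketch of the core RABR idea only — each block gives the control a fixed prespecified ratio, while the remaining prespecified slots go to active arms re-ranked by their observed responses. The arm names, ratio scheme, and `rabr_block` helper are all illustrative, not taken from the paper:

```python
import random

def rabr_block(obs_rates, block_ratios, seed=None):
    """
    Build one block of a response-adaptive block randomization (sketch).
    obs_rates:    dict arm -> observed response rate ('C' is the control)
    block_ratios: prespecified slots per rank, e.g. {'C': 3, 'rank1': 3, ...};
                  the control keeps a fixed ratio, and active-arm slots go to
                  arms ranked by current observed performance.
    Returns a shuffled list of treatment assignments for the block.
    """
    rng = random.Random(seed)
    active = sorted((a for a in obs_rates if a != 'C'),
                    key=lambda a: obs_rates[a], reverse=True)
    block = ['C'] * block_ratios['C']
    for rank, arm in enumerate(active, start=1):
        block += [arm] * block_ratios.get(f'rank{rank}', 0)
    rng.shuffle(block)
    return block

# The best-performing active arm ('A2' here) receives the largest active-arm ratio
block = rabr_block({'C': 0.20, 'A1': 0.35, 'A2': 0.50, 'A3': 0.30},
                   {'C': 3, 'rank1': 3, 'rank2': 1, 'rank3': 1}, seed=1)
print(block)
```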


Subject(s)
Research Design , Humans , Random Allocation , Sample Size
3.
Am J Physiol Endocrinol Metab ; 319(1): E34-E42, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32228319

ABSTRACT

Nonalcoholic fatty liver disease (NAFLD) amplifies the risk of various liver diseases, ranging from simple steatosis to nonalcoholic steatohepatitis, fibrosis, and cirrhosis, and ultimately hepatocellular carcinoma. Accumulating evidence suggests the involvement of aberrant microRNAs (miRNAs or miRs) in the activation of cellular stress, inflammation, and fibrogenesis in hepatic cells at different stages of NAFLD and liver fibrosis. Here, we explored the potential role of miR-130b-5p in the pathogenesis of NAFLD, including lipid accumulation and insulin resistance, as well as the underlying mechanism. Initially, the expression of miR-130b-5p and insulin-like growth factor binding protein 2 (IGFBP2) was examined in the established high-fat diet-induced NAFLD mouse models. Then, the interaction between miR-130b-5p and IGFBP2 was validated using dual luciferase reporter assay. The effects of miR-130b-5p and IGFBP2 on lipid accumulation and insulin resistance, as well as the AKT pathway-related proteins, were evaluated using gain or loss-of-function approaches. miR-130b-5p was upregulated, and IGFBP2 was downregulated in liver tissues of NAFLD mice. miR-130b-5p targeted IGFBP2 and downregulated its expression. MiR-130b-5p inhibition or IGFBP2 overexpression reduced the expression of SREBP-1, LXRα, ChREBP, stearoyl CoA desaturase 1, acetyl CoA carboxylase 1, and fatty acid synthase, and levels of fasting blood glucose, fasting insulin, and homeostasis model assessment-insulin resistance, while increasing the ratio of p-AKT/AKT in NAFLD mice. Overall, downregulation of miR-130b-5p can prevent hepatic lipid accumulation and insulin resistance in NAFLD by activating IGFBP2-dependent AKT pathway, highlighting the potential use of anti-miR-130b-5p as therapeutic approaches for the prevention and treatment of NAFLD.


Subject(s)
Diet, High-Fat , Insulin Resistance/genetics , Insulin-Like Growth Factor Binding Protein 2/genetics , Liver/metabolism , MicroRNAs/genetics , Non-alcoholic Fatty Liver Disease/genetics , Acetyl-CoA Carboxylase/genetics , Animals , Basic Helix-Loop-Helix Leucine Zipper Transcription Factors/genetics , Blood Glucose/metabolism , Disease Models, Animal , Down-Regulation , Fatty Acid Synthase, Type I/genetics , Gene Expression , Gene Expression Regulation , Insulin/metabolism , Insulin-Like Growth Factor Binding Protein 2/metabolism , Lipid Metabolism/genetics , Liver X Receptors/genetics , Mice , MicroRNAs/metabolism , Non-alcoholic Fatty Liver Disease/metabolism , Proto-Oncogene Proteins c-akt , Signal Transduction , Stearoyl-CoA Desaturase/genetics , Sterol Regulatory Element Binding Protein 1/genetics
4.
Stat Med ; 38(6): 933-944, 2019 03 15.
Article in English | MEDLINE | ID: mdl-30450621

ABSTRACT

Adaptive sample size designs, including group sequential designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. This work investigates the efficiency of adaptive sample size designs as compared to group sequential designs. We show that given a group sequential design, a uniformly more efficient adaptive sample size design based on the same maximum sample size and rejection boundary can be constructed. While maintaining stable statistical power at the required level, the expected sample size of the obtained adaptive sample size design is uniformly smaller than that of the group sequential design with respect to a range of the true treatment difference. The finding provides further insights into the efficiency of adaptive sample size designs and challenges the popular belief of better efficiency associated with group sequential designs. Good adaptive performance plus easy implementation and other desirable operational features make adaptive sample size designs more attractive and applicable to modern clinical trials.


Subject(s)
Sample Size , Clinical Trials as Topic/methods , Humans , Models, Statistical , Research Design , Time Factors
5.
J Biopharm Stat ; 27(4): 673-682, 2017.
Article in English | MEDLINE | ID: mdl-27315528

ABSTRACT

It is common in multiregional clinical development for data from a global trial and a local trial (in a target country) to be used together to support local filing in the target country. This approach makes drug development efficient both globally and in the target country. However, how to combine the global and local trial data for local filing remains a challenge. To address it, we propose an "interpretation-centric" evaluation criterion based on a weighted estimator that weights data from the target country and from outside the target country. This approach provides an unbiased estimate of a global treatment effect with appropriate representation of the target country patient population, where the "appropriate representation" is the desired proportion of target country participants in a global trial and is measured by the weight parameter. This natural interpretation can facilitate drug development discussions with local regulatory agencies. The sample size of the local trial can be determined using the proposed weighted estimator. Approaches for weight determination are also discussed.
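The weighted estimator described here has a simple closed form; a minimal sketch, assuming independent effect estimates from inside and outside the target country and using purely hypothetical numbers:

```python
from math import sqrt

def weighted_effect(est_target, se_target, est_outside, se_outside, w):
    """
    Weighted treatment-effect estimate giving the target country weight w
    (the desired proportion of target-country patients), assuming the two
    estimates are independent. Returns (estimate, standard error).
    """
    est = w * est_target + (1 - w) * est_outside
    se = sqrt(w**2 * se_target**2 + (1 - w)**2 * se_outside**2)
    return est, se

# Hypothetical inputs: local effect 1.8 (SE 0.9), outside effect 2.4 (SE 0.4)
est, se = weighted_effect(1.8, 0.9, 2.4, 0.4, w=0.25)
print(f"combined effect = {est:.3f}, SE = {se:.3f}")  # → combined effect = 2.250, SE = 0.375
```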


Subject(s)
Clinical Trials as Topic , Data Interpretation, Statistical , Drug Design , Multicenter Studies as Topic , Humans , Sample Size
6.
Stat Med ; 35(19): 3385-96, 2016 08 30.
Article in English | MEDLINE | ID: mdl-26999385

ABSTRACT

It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest.


Subject(s)
Clinical Trials as Topic , Research Design , Sample Size , Humans , Uncertainty
7.
Pharm Dev Technol ; 21(2): 147-51, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25384711

ABSTRACT

A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance, achieved through a proactive approach to formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, variability in the moisture of incoming raw materials can impact both processability and drug product quality attributes. A statistical approach is developed that uses historical lots of the individual raw materials as the basis for calculating tolerance intervals for drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is model-independent: it uses available data to estimate the parameters of interest that describe the population of blend moisture content values and does not require knowledge of the individual blend moisture content values. Another advantage of the proposed tolerance intervals is that they do not require tabulated values for tolerance factors, which facilitates implementation in any spreadsheet program such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
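As an illustration of a tolerance interval that needs no tabulated factors (the paper's own model-independent method is not reproduced in the abstract), here is a sketch using Howe's k-factor for a two-sided normal tolerance interval, with the chi-square quantile obtained from the Wilson-Hilferty approximation so every quantity is spreadsheet-computable; the moisture numbers are hypothetical:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty closed-form approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def tolerance_interval(mean, sd, n, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's k-factor,
    computed directly rather than looked up in a table."""
    df = n - 1
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    k = z * (df * (1 + 1 / n) / chi2_quantile(1 - confidence, df)) ** 0.5
    return mean - k * sd, mean + k * sd

# Hypothetical historical moisture data: mean 2.1% w/w, SD 0.15%, 30 lots
lo, hi = tolerance_interval(2.1, 0.15, 30)
print(f"99%/95% tolerance interval: ({lo:.3f}, {hi:.3f}) % w/w")
```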


Subject(s)
Drug Compounding/methods , Pharmaceutical Preparations/chemistry , Water/chemistry , Chemistry, Pharmaceutical/methods , Quality Control , Risk Management/methods
8.
J Biopharm Stat ; 25(2): 307-16, 2015.
Article in English | MEDLINE | ID: mdl-25358076

ABSTRACT

One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years, and long-term stability studies are required to support such a shelf-life claim at registration. During drug development, however, to facilitate formulation and dosage form selection, an accelerated stability study under stressed storage conditions is preferred, to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors, and shelf-life projection is usually based on the confidence band of a regression line; however, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf-life estimation based on accelerated stability testing and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than the nominal level. Our bootstrap methods gave better coverage and led to shelf-life predictions closer to those based on long-term stability data.


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/chemistry , Technology, Pharmaceutical/statistics & numerical data , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Data Interpretation, Statistical , Drug Stability , Drug Storage , Guidelines as Topic , Humidity , Nonlinear Dynamics , Pharmaceutical Preparations/standards , Quality Control , Reproducibility of Results , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards , Temperature , Time Factors
9.
J Biopharm Stat ; 25(2): 295-306, 2015.
Article in English | MEDLINE | ID: mdl-25356500

ABSTRACT

Administration of biological therapeutics can generate undesirable immune responses that may induce anti-drug antibodies (ADAs). Immunogenicity can negatively affect patients, with effects ranging from mild reactions to hypersensitivity reactions or even serious autoimmune diseases. Assessment of immunogenicity is critical, as ADAs can adversely impact the efficacy and safety of drug products. Well-developed and validated immunogenicity assays are required by the regulatory agencies as tools for immunogenicity assessment. Key to the development and validation of an immunogenicity assay is the determination of a cut point, which serves as the threshold for classifying patients as ADA positive (reactive) or negative. In practice, the cut point is determined as the quantile of either a parametric or a nonparametric empirical distribution. The parametric method, often based on a normality assumption, may lead to biased cut point estimates when that assumption is violated. The nonparametric method, which yields unbiased estimates of the cut point, may have low efficiency when the sample size is small. As the distributions of immune responses are often skewed and sometimes heavy-tailed, we propose two non-normal random effects models for cut point determination. The random effects, following a skew-t or log-gamma distribution, can accommodate skewed and heavy-tailed responses and the correlation among repeated measurements. A simulation study is conducted to compare the proposed method with the current normal and nonparametric alternatives. The proposed models are also applied to a real dataset generated from assay validation studies.
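The contrast between the parametric and nonparametric cut points is easy to demonstrate; a sketch on simulated skewed (lognormal) data, with all numbers illustrative rather than drawn from the paper's validation studies:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def cut_points(responses, quantile=0.95):
    """Screening cut point at the 95th percentile: parametric (normal-based)
    vs. nonparametric (empirical quantile) estimates."""
    z = NormalDist().inv_cdf(quantile)
    parametric = mean(responses) + z * stdev(responses)
    ranked = sorted(responses)
    nonparametric = ranked[int(quantile * len(ranked)) - 1]  # simple empirical quantile
    return parametric, nonparametric

# Skewed responses from hypothetical drug-naive samples: the two estimates
# diverge, illustrating the bias risk of the normality assumption
rng = random.Random(0)
data = [math.exp(rng.gauss(0, 0.5)) for _ in range(200)]
par, npar = cut_points(data)
print(f"parametric cut point: {par:.2f}, nonparametric: {npar:.2f}")
```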


Subject(s)
Biological Products/immunology , Biopharmaceutics/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Animals , Bayes Theorem , Biological Products/adverse effects , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Data Interpretation, Statistical , Guidelines as Topic , Humans , Numerical Analysis, Computer-Assisted , Quality Control , Reproducibility of Results , Risk Assessment , Sample Size , Statistics, Nonparametric , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
10.
Value Health ; 17(5): 619-28, 2014 Jul.
Article in English | MEDLINE | ID: mdl-25128056

ABSTRACT

BACKGROUND: The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. OBJECTIVE: The objective of this article was to provide solutions for incorporating the uncertainty in clinical data into the MCDA model when evaluating the overall benefit-risk profiles of different treatment options. METHODS: Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model, as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. RESULTS: The two proposed approaches were applied to a case study evaluating the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. The case study demonstrated a straightforward way to quantify the impact of the uncertainty in clinical data on the benefit-risk assessment and enabled statistical inference when evaluating the overall benefit-risk profiles of different treatment options. CONCLUSIONS: The δ-method approach provides a closed form for quantifying the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield the score's true sampling distribution for statistical inference. The confidence intervals and other probabilistic measures obtained from the two approaches enhance benefit-risk decision making for medicinal products.
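A minimal sketch of the Monte-Carlo approach, assuming a linear additive MCDA model with independent, normally distributed criterion estimates (the paper additionally handles correlated criteria, which would require sampling from a multivariate normal instead); the weights and clinical inputs below are hypothetical:

```python
import random
from statistics import mean

def mc_benefit_risk(weights, estimates, ses, n_sim=20000, seed=0):
    """Monte-Carlo confidence interval for an MCDA overall benefit-risk
    score = sum(w_j * x_j), with independent normal criterion estimates.
    Returns (mean score, 2.5th percentile, 97.5th percentile)."""
    rng = random.Random(seed)
    scores = sorted(sum(w * rng.gauss(m, s)
                        for w, m, s in zip(weights, estimates, ses))
                    for _ in range(n_sim))
    return mean(scores), scores[int(0.025 * n_sim)], scores[int(0.975 * n_sim)]

# Hypothetical normalized criteria (benefits positive, risks negative)
score, lo, hi = mc_benefit_risk(weights=[0.5, 0.3, 0.2],
                                estimates=[0.6, 0.4, -0.3],
                                ses=[0.05, 0.08, 0.04])
print(f"score = {score:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```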


Subject(s)
Antirheumatic Agents/therapeutic use , Arthritis, Rheumatoid/drug therapy , Decision Making , Models, Statistical , Risk Assessment/methods , Antirheumatic Agents/adverse effects , Confidence Intervals , Decision Support Techniques , Humans , Monte Carlo Method , Probability , Uncertainty
11.
J Biopharm Stat ; 24(3): 535-45, 2014.
Article in English | MEDLINE | ID: mdl-24697778

ABSTRACT

Past decades have seen a rapid growth of biopharmaceutical products on the market. The administration of such large molecules can generate antidrug antibodies that can induce unwanted immune reactions in the recipients. Assessment of immunogenicity is required by regulatory agencies in clinical and nonclinical development, and this demands a well-validated assay. One of the important performance characteristics during assay validation is the cut point, which serves as a threshold between positive and negative samples. To precisely determine the cut point, a sufficiently large data set is often needed. However, there is no guideline other than some rule-of-thumb recommendations for sample size requirement in immunoassays. In this article, we propose a systematic approach to sample size determination for immunoassays and provide tables that facilitate its applications by scientists.


Subject(s)
Antibodies/analysis , Biological Products/immunology , Immunoassay/statistics & numerical data , Models, Statistical , Sample Size , Analysis of Variance , Drug-Related Side Effects and Adverse Reactions/immunology , Humans , Statistical Distributions
12.
Stat Methods Med Res ; 30(4): 1013-1025, 2021 04.
Article in English | MEDLINE | ID: mdl-33459183

ABSTRACT

In a drug development program, the efficacy and safety of multiple doses can be evaluated in patients through a phase 2b dose-ranging study. With a demonstrated dose response in the trial, promising doses are identified; their effectiveness is then further investigated and confirmed in phase 3 studies. Although this two-step approach serves the purpose of the program, it is generally inefficient because of its prolonged development duration and because the phase 2b data are excluded from the final efficacy evaluation and confirmation, which is based only on phase 3 data. To address this issue, we propose a new adaptive design that seamlessly integrates the dose-finding and confirmation steps in one pivotal study. Unlike existing adaptive seamless phase 2b/3 designs, the proposed design combines response adaptive randomization, sample size modification, and multiple testing techniques to achieve better efficiency. The design can be easily implemented through an automated randomization process. At the end of the study, a number of targeted doses are selected and their effectiveness is confirmed with guaranteed control of the family-wise error rate.


Subject(s)
Research Design , Automation , Humans , Sample Size
13.
PDA J Pharm Sci Technol ; 75(2): 173-187, 2021.
Article in English | MEDLINE | ID: mdl-32999078

ABSTRACT

Relative potency assays for biological therapeutics require statistical evaluation to demonstrate similarity between the dose-response curves of a reference standard and the test samples. We developed an equivalence testing approach that can be utilized for the complete potency assay life cycle, from early development until commercialization. This approach was based on the use of generic equivalence margins to enable equivalence testing at the beginning of assay development, when the body of assay-specific data is still very limited. Generic equivalence margins for equivalence testing of four-parameter logistic curve fits were established for bioassays and binding assays spanning a variety of designs, formats, and read-outs. We also established that equivalence testing using ratios of the reference standard and test sample was superior to equivalence testing using absolute differences. Based on a large body of historical data, generic equivalence margins were determined for the curve upper asymptote, slope, and dynamic range. Furthermore, we developed a road map to guide the implementation of generic or assay-specific margins to ensure the appropriate data analysis approach is being applied during the assay life cycle.


Subject(s)
Biological Assay , Reference Standards
14.
J Biopharm Stat ; 20(1): 172-84, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20077256

ABSTRACT

The problem of deriving an upper tolerance limit for a ratio of two normally distributed random variables is addressed, when the random variables follow a bivariate normal distribution, or when they are independent normal. The derivation uses the fact that an upper tolerance limit for a random variable can be derived from a lower confidence limit for the cumulative distribution function (cdf) of the random variable. The concept of a generalized confidence interval is used to derive the required lower confidence limit for the cdf. In the bivariate normal case, a suitable representation of the cdf of the ratio of the marginal normal random variables is also used, coupled with the generalized confidence interval idea. In addition, a simplified derivation is presented in the situation when one of the random variables has a small coefficient of variation. The problem is motivated by an application from a reverse transcriptase assay. Such an example is used to illustrate our results. Numerical results are also reported regarding the performance of the proposed tolerance limit.


Subject(s)
Confidence Intervals , Models, Statistical , Normal Distribution , Random Allocation
15.
Ther Innov Regul Sci ; 54(1): 21-31, 2020 01.
Article in English | MEDLINE | ID: mdl-32008228

ABSTRACT

Inconsistent results across regions have been reported in a number of recent large multiregional clinical trials (MRCTs). In this research, by reviewing results from studies that showed inconsistent treatment effects and summarizing lessons learned, we provide recommendations for minimizing the chance of inconsistency and for allowing more accurate interpretation when such signs of heterogeneity arise. For example: keep the number of regions for consistency evaluation to a minimum to avoid observing false inconsistency signals; proactively address in the protocol the differences in culture, medical practice, and other factors that may differ across regions; and closely monitor the blinded data from early-enrolled patients to more effectively identify and address issues such as imbalance of baseline covariates or inconsistency of primary outcome rates across regions. For treatments of life-threatening conditions, the stakes for accurate interpretation of MRCT results are high, and the criteria for decisions warrant careful consideration.


Subject(s)
Biomedical Research/standards , Clinical Trials as Topic , Research Design/standards , Humans
16.
Ther Innov Regul Sci ; 54(4): 850-860, 2020 07.
Article in English | MEDLINE | ID: mdl-32557308

ABSTRACT

Historical data have been used to augment or replace control arms in some rare disease and pediatric clinical trials. With greater availability of historical data and new methodology such as dynamic borrowing, the inclusion of historical data in clinical trials is an increasingly appealing approach for larger disease areas as well, as this can result in increased power and precision and can minimize the burden on patients in clinical trials. However, sponsors must assess whether the potential biases incurred with this approach outweigh the benefits and discuss this trade-off with the regulatory agencies. This paper discusses important points for the appropriate selection of historical controls for inclusion in the analysis of primary and/or key secondary endpoint(s) in clinical trials. The general steps are as follows: (1) Assess whether a trial is a suitable candidate for this approach. (2) If it is, then carefully identify appropriate historical trials to minimize selection bias. (3) Refine the historical control set if appropriate, for example, by selecting subsets of studies or patients. Identification of trial settings that are amenable to historical borrowing and selection of appropriate historical data using the principles discussed in this paper has the potential to lead to more efficient estimation and decision making. Ultimately, this efficiency gain results in lower patient burden and gets effective drugs to patients more quickly.


Subject(s)
Rare Diseases , Bias , Child , Humans
17.
J Biopharm Stat ; 19(1): 67-76, 2009.
Article in English | MEDLINE | ID: mdl-19127467

ABSTRACT

A weighted least squares statistic is commonly used to test homogeneity of the risk difference across a series of 2 × 2 tables. Because the method is based on asymptotic theory, its type I error rate is inflated when the data are sparse. Two new methods for testing the homogeneity of the risk difference across different groups in clinical trials are proposed in this paper. These methods are constructed from Wilson's score test and the traditional weighted least squares statistic. The performance of the new methods is evaluated and compared with currently available approaches. Results show that one of our new methods has a type I error rate closest to the nominal level among all the methods considered and is much more powerful than those proposed by Lipsitz et al.
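The baseline weighted least squares homogeneity statistic the paper builds on can be sketched directly; the 2 × 2 tables below are hypothetical:

```python
def wls_homogeneity(tables):
    """
    Weighted least squares test of homogeneity of the risk difference across
    a series of 2x2 tables. Each table is (x1, n1, x2, n2): events and group
    sizes for groups 1 and 2. Returns (chi-square statistic, degrees of
    freedom). Being asymptotic, this test inflates the type I error for
    sparse data, which is the problem the score-based modifications address.
    """
    rds, weights = [], []
    for x1, n1, x2, n2 in tables:
        p1, p2 = x1 / n1, x2 / n2
        rds.append(p1 - p2)
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        weights.append(1 / var)  # inverse-variance weight
    rd_bar = sum(w * d for w, d in zip(weights, rds)) / sum(weights)
    stat = sum(w * (d - rd_bar) ** 2 for w, d in zip(weights, rds))
    return stat, len(tables) - 1

# Three strata with similar risk differences -> small homogeneity statistic
stat, df = wls_homogeneity([(20, 50, 10, 50), (18, 60, 8, 60), (30, 80, 15, 80)])
print(f"X2 = {stat:.3f} on {df} df")
```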


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Models, Statistical , Algorithms , Computer Simulation , Humans , Likelihood Functions , Multiple Myeloma/drug therapy , Randomized Controlled Trials as Topic/statistics & numerical data , Risk Assessment/statistics & numerical data , Survival Analysis , Treatment Outcome
18.
J Rheumatol ; 46(9): 1228-1231, 2019 09.
Article in English | MEDLINE | ID: mdl-30554152

ABSTRACT

OBJECTIVE: To assess the longitudinal reliability of the Outcome Measures in Rheumatology (OMERACT) Thumb base Osteoarthritis Magnetic resonance imaging (MRI) Scoring system (TOMS). METHODS: Paired MRIs of patients with hand osteoarthritis were scored in 2 exercises (6-month and 2-year followup) for synovitis, subchondral bone defects (SBD), osteophytes, cartilage assessment, bone marrow lesions (BML), and subluxation. Interreader reliability of delta scores was assessed. RESULTS: Little change occurred. Average-measure intraclass correlation coefficients were good to excellent (≥ 0.71), except for synovitis (0.55-0.83) and carpometacarpal-1 osteophytes/cartilage assessment (0.47/0.39). Percentage exact/close agreement was 52-92%/68-100%, except for BML at 2 years (28%/64-76%). The smallest detectable change was below the scoring increment, except for SBD and BML. CONCLUSION: TOMS longitudinal reliability was moderate to good. Limited change hampered assessment.


Subject(s)
Hand Joints/diagnostic imaging , Osteoarthritis/diagnostic imaging , Thumb/diagnostic imaging , Humans , Magnetic Resonance Imaging , Reproducibility of Results , Severity of Illness Index
19.
Arthritis Rheumatol ; 71(7): 1056-1069, 2019 07.
Article in English | MEDLINE | ID: mdl-30653843

ABSTRACT

OBJECTIVE: To assess the efficacy and safety of the anti-interleukin-1α/ß (anti-IL-1α/ß) dual variable domain immunoglobulin lutikizumab (ABT-981) in patients with knee osteoarthritis (OA) and evidence of synovitis. METHODS: Patients (n = 350; 347 analyzed) with Kellgren/Lawrence grade 2-3 knee OA and synovitis (determined by magnetic resonance imaging [MRI] or ultrasound) were randomized to receive placebo or lutikizumab 25, 100, or 200 mg subcutaneously every 2 weeks for 50 weeks. The coprimary end points were change from baseline in Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) pain score at week 16 and change from baseline in MRI-assessed synovitis at week 26. RESULTS: The WOMAC pain score at week 16 had improved significantly versus placebo with lutikizumab 100 mg (P = 0.050) but not with the 25 mg or 200 mg doses. Beyond week 16, the WOMAC pain score was reduced in all groups but was not significantly different between lutikizumab-treated and placebo-treated patients. Changes from baseline in MRI-assessed synovitis at week 26 and other key symptom- and most structure-related end points at weeks 26 and 52 were not significantly different between the lutikizumab and placebo groups. Injection site reactions, neutropenia, and discontinuations due to neutropenia were more frequent with lutikizumab versus placebo. Reductions in neutrophil and high-sensitivity C-reactive protein levels plateaued with lutikizumab 100 mg, with further reductions not observed with the 200 mg dose. Immunogenic response to lutikizumab did not meaningfully affect systemic lutikizumab concentrations. CONCLUSION: The limited improvement in the WOMAC pain score and the lack of synovitis improvement with lutikizumab, together with published results from trials of other IL-1 inhibitors, suggest that IL-1 inhibition is not an effective analgesic/antiinflammatory therapy in most patients with knee OA and associated synovitis.


Subject(s)
Immunoglobulins/therapeutic use , Osteoarthritis, Knee/drug therapy , Synovitis/drug therapy , Aged , C-Reactive Protein/immunology , Double-Blind Method , Female , Humans , Injection Site Reaction/etiology , Interleukin-1alpha/antagonists & inhibitors , Interleukin-1beta/antagonists & inhibitors , Male , Middle Aged , Neutropenia/chemically induced , Neutrophils , Osteoarthritis, Knee/complications , Osteoarthritis, Knee/diagnostic imaging , Osteoarthritis, Knee/immunology , Synovitis/diagnostic imaging , Synovitis/etiology , Synovitis/immunology , Treatment Outcome
20.
Biom J ; 49(6): 928-40, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17722195

ABSTRACT

Optimal response-adaptive designs in the phase III clinical trial setting are gaining interest. Most available designs are not based on any optimality consideration. An optimal design for binary responses was given by Rosenberger et al. (2001) and one for continuous responses by Biswas and Mandal (2004). More recently, Zhang and Rosenberger (2006) proposed another design for normal responses. This paper illustrates that the Zhang and Rosenberger (2006) design is not, in general, suitable for normally distributed responses, and that the approach cannot be extended to other continuous response cases, such as exponential or gamma. We first describe when the optimal design of Zhang and Rosenberger (2006) fails. We then suggest appropriate adjustments for designs under different continuous distributions. A unified framework for finding optimal response-adaptive designs for two competing treatments is proposed. The proposed methods are illustrated using real data.


Subject(s)
Clinical Trials, Phase III as Topic/methods , Research Design , Analgesics/therapeutic use , Clinical Trials, Phase III as Topic/ethics , Computer Simulation , Humans , Neuralgia, Postherpetic/drug therapy , Pain Measurement/methods , Pregabalin , gamma-Aminobutyric Acid/analogs & derivatives , gamma-Aminobutyric Acid/therapeutic use