1.
Bioengineering (Basel) ; 9(11)2022 Nov 03.
Article in English | MEDLINE | ID: mdl-36354553

ABSTRACT

Statistical experimental designs such as factorial, optimal, or definitive screening designs represent the state of the art in biopharmaceutical process characterization. However, such methods alone do not leverage the fact that a process operates as a mutual interplay of multiple steps; instead, they investigate only one process step at a time. Here, we develop a new experimental design method that seeks to gain information about final product quality by placing the right type of run at the right unit operation. This is done by minimizing the simulated out-of-specification rate of an integrated process model composed of a chain of regression models that map process parameters to critical quality attributes for each unit operation. Unit operation models are connected by passing their response to the next unit operation model as a load parameter, as is done in real-world manufacturing processes. The proposed holistic DoE (hDoE) method is benchmarked against standard process characterization approaches in a set of in silico simulation studies in which data are generated by different ground truth processes to illustrate its validity over a range of scenarios. Results show that the hDoE approach leads to a >50% decrease in experiments even for simple cases and, at the same time, achieves the main goal of process development, validation, and manufacturing: to consistently deliver product quality.
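
To make the integrated process model idea concrete, the following minimal sketch chains two hypothetical unit-operation regression models, passes the pool response of the first step to the second step as its load parameter, and estimates the out-of-specification rate by Monte Carlo simulation. All coefficients, parameter distributions, and specification limits are invented for illustration; the published method additionally optimizes where to place new experimental runs, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-step process: each unit operation (UO) is a fitted
# regression model mapping its process parameters plus the incoming
# load to a critical quality attribute (CQA).
def uo1(pp_a, pp_b):
    # CQA of step 1 = intercept + parameter effects + residual noise
    return 5.0 + 0.8 * pp_a - 0.3 * pp_b + rng.normal(0, 0.2, pp_a.shape)

def uo2(load, pp_c):
    # the pool CQA of UO1 enters UO2 as its load parameter
    return 0.9 * load + 0.5 * pp_c + rng.normal(0, 0.3, load.shape)

n = 100_000
pp_a = rng.normal(1.0, 0.1, n)   # parameter distributions reflecting
pp_b = rng.normal(2.0, 0.2, n)   # expected manufacturing variability
pp_c = rng.normal(0.5, 0.05, n)

cqa_final = uo2(uo1(pp_a, pp_b), pp_c)

spec_lo, spec_hi = 4.5, 7.5     # hypothetical specification limits
oos_rate = np.mean((cqa_final < spec_lo) | (cqa_final > spec_hi))
print(f"simulated OOS rate: {oos_rate:.4%}")
```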

2.
Front Bioeng Biotechnol ; 10: 1010583, 2022.
Article in English | MEDLINE | ID: mdl-36213075

ABSTRACT

Intermediate acceptance criteria are the foundation for developing control strategies in process validation stage 1 in the pharmaceutical industry. At the drug substance or drug product level, such acceptance criteria for quality are available and are referred to as specification limits. However, it often remains a challenge to define acceptance criteria for intermediate process steps. Available guidelines underpin the importance of intermediate acceptance criteria, because they are an integral part of setting up a control strategy for the manufacturing process. The guidelines recommend basing the definition of acceptance criteria on the entirety of process knowledge; nevertheless, they remain unclear on how to derive such limits. In this contribution, we present a sound data science methodology for the definition of intermediate acceptance criteria by putting the guidelines' recommendations into practice (ICH Q6B, 1999). Using an integrated process model approach, we leverage manufacturing data and small-scale experimental data to derive intermediate acceptance criteria. The novelty of this approach is that the acceptance criteria are based on predefined out-of-specification probabilities, while also considering manufacturing variability in process parameters. In a case study, we compare this methodology to a conventional +/- 3 standard deviations (3SD) approach and demonstrate that the presented methodology is superior to conventional approaches and provides a solid line of reasoning for justifying acceptance criteria in audits and regulatory submissions.
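
The contrast between the two approaches can be sketched as follows, assuming a hypothetical downstream transfer model and invented specification limits: the conventional limits are simply mean ± 3SD of the intermediate attribute, while the model-based limits are the widest intermediate values whose simulated out-of-specification probability stays below a predefined target.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical intermediate attribute observations (small-scale + manufacturing)
intermediate = rng.normal(10.0, 0.5, 200)

# Conventional approach: mean +/- 3 standard deviations
mean, sd = intermediate.mean(), intermediate.std(ddof=1)
limits_3sd = (mean - 3 * sd, mean + 3 * sd)

# Model-based approach: propagate a candidate intermediate value through a
# (hypothetical) downstream model and keep the widest limits whose simulated
# out-of-specification probability stays below a predefined target.
spec_lo, spec_hi, p_target = 8.0, 12.0, 0.001

def oos_prob(x, n=50_000):
    # downstream step: assumed linear transfer with residual noise
    final = 0.95 * x + 0.4 + rng.normal(0, 0.3, n)
    return np.mean((final < spec_lo) | (final > spec_hi))

grid = np.linspace(mean - 4 * sd, mean + 4 * sd, 81)
ok = np.array([oos_prob(x) <= p_target for x in grid])
limits_model = (grid[ok].min(), grid[ok].max())

print("3SD limits:        ", np.round(limits_3sd, 2))
print("model-based limits:", np.round(limits_model, 2))
```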

3.
AAPS J ; 24(6): 112, 2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36271265

ABSTRACT

Showing analytical similarity is key to licensing biosimilar products with reduced or circumvented clinical effort. Statistical procedures for assessing the analytical similarity of quality attributes at the drug product level have been highly debated by academia, industry, and regulatory agencies. In the past, regulators recommended a tiered approach consisting of equivalence tests and quality range tests; however, this approach has recently been withdrawn by the FDA. Newer FDA and EMA guidelines favour the use of quality range tests. Moreover, it has recently been shown that simple range tests, such as the 3SD test, are flawed, since they do not control the agency's risk of falsely declaring a non-biosimilar product to be biosimilar (Type I error). This has also been highlighted by regulators recently. In this contribution, we develop a novel bootstrapping test for assessing analytical similarity that overcomes the current flaws of equivalence and range tests. The developed test shows the desired properties: (i) similarity conditions can be easily defined, (ii) differences in mean and variance between the biosimilar and the innovator can be studied simultaneously, and (iii) the Type I error of the test can be controlled at a low level, e.g. 5%, evenly along the entire similarity condition. Moreover, the test shows up to 10% higher mean power values in the similarity region compared to existing range tests that aim to control the Type I error. Hence, this test is superior to existing quality range tests and is perceived as compliant with current regulatory requirements.


Subject(s)
Biosimilar Pharmaceuticals; United States; United States Food and Drug Administration; Research Design
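
The abstract above gives no implementation detail, but one plausible reading of a bootstrap similarity test is sketched below: resample both lot sets, build upper bootstrap confidence bounds for the absolute mean difference and the SD ratio, and declare similarity only if both bounds fall inside a predefined similarity condition. The margins, sample sizes, and decision rule here are illustrative assumptions, not the published procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_similarity(ref, bio, delta_mean, rho_sd, n_boot=10_000, alpha=0.05):
    """Declare similarity only if the upper bootstrap confidence bounds of
    |mean difference| and the SD ratio fall inside the similarity condition
    (|diff| <= delta_mean, ratio <= rho_sd). A sketch only; the published
    test defines the similarity condition and decision rule in more detail."""
    diffs, ratios = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        r = rng.choice(ref, ref.size, replace=True)
        s = rng.choice(bio, bio.size, replace=True)
        diffs[b] = abs(s.mean() - r.mean())
        ratios[b] = s.std(ddof=1) / r.std(ddof=1)
    # one-sided upper quantiles control the Type I error at roughly alpha
    return (np.quantile(diffs, 1 - alpha) <= delta_mean
            and np.quantile(ratios, 1 - alpha) <= rho_sd)

ref = rng.normal(100.0, 2.0, 10)   # hypothetical innovator lots
bio = rng.normal(100.5, 2.1, 10)   # hypothetical biosimilar lots
print(bootstrap_similarity(ref, bio, delta_mean=3.0, rho_sd=1.5))
```
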
4.
Bioengineering (Basel) ; 9(10)2022 Oct 09.
Article in English | MEDLINE | ID: mdl-36290502

ABSTRACT

Integrated or holistic process models may serve as the engine of a digital asset in a multistep-process digital twin. Concatenated individual unit-operation models are effective at propagating errors over an entire process, but several limitations revealed by recent applications still prevent their deployment as a plausible digital asset, particularly with regard to bioprocess development requirements. Sequential critical quality attribute tests along the process chain that form output-input (i.e., pool-to-load) relationships are impacted by nonaligned design spaces at different scales and by simulation distribution challenges. Limited development experiments also inhibit exploration of the overall design space, particularly regarding the propagation of extreme values of noncontrolled parameters. In this contribution, bioprocess requirements are used as the framework to improve integrated process models by introducing a simplified data model for multiunit operation processes, increasing statistical robustness, adding a new simulation flow for scale-dependent variables, and describing a novel algorithm for extrapolation in a data-driven environment. Lastly, architectural and procedural requirements for a deployed digital twin are described, and a real-time workflow is proposed, thus providing a final framework for a digital asset in bioprocessing along the full product life cycle.
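
The abstract does not specify the simplified data model, but a minimal sketch of what a pool-to-load data model for a multiunit operation process might look like is given below. All class names, fields, and the toy transfer models are assumptions for illustration.

```python
from dataclasses import dataclass, field

# A minimal, hypothetical data model for a multi-unit-operation process:
# each unit operation stores its parameters and the responses it produces;
# the pool CQA of one step becomes the load of the next.
@dataclass
class UnitOperation:
    name: str
    parameters: dict = field(default_factory=dict)
    responses: dict = field(default_factory=dict)

@dataclass
class ProcessChain:
    steps: list = field(default_factory=list)

    def run(self, models: dict, load: float) -> float:
        # propagate the load through each step's fitted model
        for uo in self.steps:
            load = models[uo.name](load, uo.parameters)
            uo.responses["pool_cqa"] = load
        return load

chain = ProcessChain([
    UnitOperation("capture", {"ph": 7.2}),
    UnitOperation("polish",  {"conductivity": 15.0}),
])
models = {
    "capture": lambda load, p: 0.9 * load + 0.1 * p["ph"],
    "polish":  lambda load, p: 0.95 * load - 0.01 * p["conductivity"],
}
print(chain.run(models, load=10.0))
```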

5.
Bioengineering (Basel) ; 8(11)2021 Oct 24.
Article in English | MEDLINE | ID: mdl-34821722

ABSTRACT

Maximizing the value of each available data point in bioprocess development is essential to reduce the time-to-market, lower the number of expensive wet-lab experiments, and maximize process understanding. Advanced in silico methods are increasingly being investigated to accomplish these goals. In this contribution, we propose a novel integrated process model procedure to maximize the use of development data and optimize the Stage 1 process validation workflow. We generate an integrated process model from available data and apply two innovative Monte Carlo simulation-based parameter sensitivity analysis linearization techniques to automate two quality by design activities: determining risk assessment severity rankings and establishing preliminary control strategies for critical process parameters. These procedures are assessed, as a proof of concept, in a case study on a candidate monoclonal antibody bioprocess after process development but prior to process characterization. The evaluation successfully returned results that were used to support Stage 1 process validation milestones and demonstrated the potential to reduce the investigated parameters by up to 24% in process characterization, while simultaneously setting up a strategy for iterative updates of risk assessments and process controls throughout the process life cycle to ensure a robust and efficient drug supply.
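
The specific linearization techniques are not described in the abstract; the sketch below shows only the generic idea of a Monte Carlo one-parameter-at-a-time sensitivity analysis that yields a severity ranking. The model coefficients, parameter names, and ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fitted regression model: CQA as a function of three
# process parameters (coefficients are illustrative only).
def cqa_model(x):
    return 2.0 + 1.5 * x[..., 0] - 0.4 * x[..., 1] + 0.1 * x[..., 2]

setpoint = np.array([1.0, 2.0, 3.0])
ranges = np.array([[0.8, 1.2], [1.5, 2.5], [2.0, 4.0]])  # expected PP ranges
names = ["temperature", "ph", "load_density"]

# Monte Carlo sensitivity: perturb one parameter at a time across its
# range and record the induced spread in the simulated CQA.
severity = {}
for i, name in enumerate(names):
    x = np.tile(setpoint, (10_000, 1))
    x[:, i] = rng.uniform(ranges[i][0], ranges[i][1], 10_000)
    y = cqa_model(x)
    severity[name] = y.max() - y.min()

# Rank parameters by their effect span (a stand-in for a severity ranking)
for name, s in sorted(severity.items(), key=lambda kv: -kv[1]):
    print(f"{name:>13s}: effect span {s:.2f}")
```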

6.
J Pharm Sci ; 110(4): 1540-1544, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33493480

ABSTRACT

A wide variety of computational models covering statistical, mechanistic, and machine learning (locked and adaptive) methods are explored for use in biopharmaceutical manufacturing. However, little has been published on how to establish the credibility of a computational model for such applications. In this work, we applied the American Society of Mechanical Engineers (ASME) Verification and Validation 40 (V&V 40) standard and the FDA-proposed AI/ML model life cycle management framework for Software as a Medical Device (SaMD) to a set of curated hypothetical biopharmaceutical manufacturing use cases. We discuss the need for standardized frameworks to facilitate consistent decision making, enable efficient adoption of computational models in biopharmaceutical manufacturing, and align existing good practices with such frameworks. Based on the study of our examples, we anticipate that existing frameworks such as V&V 40 can be adopted.


Subject(s)
Biological Products; Animals; Computer Simulation; Life Cycle Stages; Machine Learning; United States
7.
Bioengineering (Basel) ; 6(4)2019 Dec 13.
Article in English | MEDLINE | ID: mdl-31847142

ABSTRACT

Risk assessments (RAs) are frequently conducted to assess the potential effect of process parameters (PPs) on product quality attributes (e.g., critical quality attributes (CQAs)). To evaluate a PP's criticality, the risk priority number (RPN) is often calculated for each PP. This number is generated by multiplying three factors: severity, occurrence, and detectability. This operation can lead to errors, because ordinal-scaled values are multiplied and the factors are assumed to contribute equally to a PP's criticality. To avoid these misinterpretations and to assess the out-of-specification (OOS) probability of the drug substance, we present a novel and straightforward mathematical algorithm. The algorithm quantitatively describes each PP's effect on every CQA assessed within the RA. Translating severity and occurrence into model effect sizes and parameter distributions is the key element of the approach developed here, which can be applied to any conventional RA within the biopharmaceutical industry. We demonstrate that severity and occurrence contribute differently to PP criticality and compare these results with the RPN. Detectability is used in a final step to refine the contribution of each factor. Finally, we illustrate the risk of misinterpreting PP criticality when using the conventional RPN approach.
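
A minimal sketch of the contrast the abstract draws, under invented mappings: two parameters receive the same RPN, yet translating severity into a CQA shift and occurrence into an excursion probability, then simulating the OOS probability, shows they carry very different risk. The scores, mappings, and specification limits are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical risk-assessment entries: ordinal severity (S), occurrence (O),
# and detectability (D) scores for two process parameters.
ratings = {"pp1": dict(S=9, O=2, D=5), "pp2": dict(S=2, O=9, D=5)}

# Conventional RPN: multiply ordinal scores (the practice the abstract
# questions); here both parameters get the identical RPN of 90.
rpn = {k: v["S"] * v["O"] * v["D"] for k, v in ratings.items()}

# Model-based alternative: severity -> size of the CQA shift if an excursion
# occurs; occurrence -> probability of that excursion (mappings are invented).
spec_lo, spec_hi, n = -1.0, 1.0, 100_000
oos = {}
for k, v in ratings.items():
    shift = v["S"] / 3.0            # severity -> CQA shift given an excursion
    p_exc = v["O"] / 100.0          # occurrence -> excursion probability
    excursion = rng.random(n) < p_exc
    cqa = np.where(excursion, shift, 0.0) + rng.normal(0.0, 0.2, n)
    oos[k] = np.mean((cqa < spec_lo) | (cqa > spec_hi))

print("RPN:", rpn)   # identical RPNs ...
print("OOS:", {k: f"{p:.4%}" for k, p in oos.items()})  # ... very different risk
```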

8.
Bioengineering (Basel) ; 4(4)2017 Oct 12.
Article in English | MEDLINE | ID: mdl-29023375

ABSTRACT

Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters that significantly impact product quality (rejection of the null hypothesis that the effect equals 0). However, parameters whose effects carry large uncertainty, and which might therefore push product quality beyond limits critical to the product, may be missed. This can happen when the residual/unmodelled variance in the experiments turns out to be larger than expected a priori. Estimating this risk is the task of the novel retrospective power analysis permutation test presented here. It is evaluated on a data set covering two unit operations established during the characterization of an industrial biopharmaceutical process. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow for mitigating the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering consistent product quality, reducing process variance, and increasing patient safety.
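
A minimal sketch of a retrospective power computation built on a permutation null is shown below: the critical value comes from permuting the observed responses, and power is the probability of detecting a practically relevant effect given the residual variance actually observed. The published test operates on full DoE models; this two-level, single-factor comparison with invented data is only an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical DoE: coded factor levels x and measured responses y.
x = np.array([-1, -1, -1, -1, 1, 1, 1, 1], dtype=float)
y = rng.normal(0.0, 1.0, x.size)           # stand-in for experimental data

def effect(x, y):
    return y[x > 0].mean() - y[x < 0].mean()

# Null distribution of the effect estimate from permutations of y.
n_perm = 10_000
null = np.array([effect(x, rng.permutation(y)) for _ in range(n_perm)])
crit = np.quantile(np.abs(null), 0.95)     # 5% two-sided critical value

# Retrospective power: probability of detecting a practically relevant
# effect of size delta, given the residual variance actually observed.
delta, n_sim = 2.0, 10_000
sd_res = y.std(ddof=1)
power = np.mean([
    abs(effect(x, delta / 2 * x + rng.normal(0, sd_res, x.size))) > crit
    for _ in range(n_sim)
])
print(f"retrospective power to detect |effect| = {delta}: {power:.2f}")
```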

9.
Bioengineering (Basel) ; 4(4)2017 Oct 17.
Article in English | MEDLINE | ID: mdl-29039771

ABSTRACT

During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM), which enables estimation of process capability even in early stages of process validation. Once the IPM is established, we further demonstrate its use for risk and criticality assessment. IPMs can be used to enable holistic production control strategies that take interactions between the process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipating out-of-specification (OOS) events, identifying critical process parameters, and taking risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
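
As a sketch of how an IPM supports early capability estimation, the example below simulates a two-step process chain by Monte Carlo and computes a capability index (Ppk) together with the OOS rate from the simulated CQA distribution. Model forms, coefficients, parameter distributions, and specifications are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical chained unit-operation models: the pool CQA of each
# step enters the next step as its load.
def fermentation(ph, temp):
    return 2.0 + 0.8 * ph - 0.1 * temp + rng.normal(0, 0.15, ph.shape)

def purification(load, flow):
    return 1.4 * load - 0.1 * flow + rng.normal(0, 0.1, load.shape)

n = 200_000
cqa = purification(
    fermentation(rng.normal(7.0, 0.1, n), rng.normal(36.5, 0.3, n)),
    rng.normal(1.2, 0.05, n),
)

# Process capability (Ppk) and OOS rate against hypothetical specifications.
spec_lo, spec_hi = 4.5, 6.5
mu, sd = cqa.mean(), cqa.std(ddof=1)
ppk = min(spec_hi - mu, mu - spec_lo) / (3 * sd)
oos = np.mean((cqa < spec_lo) | (cqa > spec_hi))
print(f"Ppk = {ppk:.2f}, simulated OOS rate = {oos:.4%}")
```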

10.
Anal Chim Acta ; 982: 48-61, 2017 Aug 22.
Article in English | MEDLINE | ID: mdl-28734365

ABSTRACT

In this paper, we propose a new strategy for the retrospective identification of feed phases from online sensor-data-enriched feed profiles of an Escherichia coli (E. coli) fed-batch fermentation process. In contrast to conventional (static), data-driven multi-class machine learning (ML), we exploit process knowledge to constrain our classification system, yielding more parsimonious models than static ML approaches. In particular, we enforce unidirectionality on a set of binary, multivariate classifiers trained to discriminate between adjacent feed phases by linking the classifiers through a one-way switch. The switch is activated when the output of the currently active classifier changes; as a consequence, the next binary classifier in the chain takes over the discrimination between the next pair of feed phases, and so on. To prevent premature activation, we allow the switch to fire only after a predefined number of consecutive predictions of a transition event, and we undertake a sensitivity analysis regarding the optimal choice of this (time) lag parameter. From a complexity/parsimony perspective, the benefit of our approach is threefold: (i) the multi-class learning task is broken down into binary subproblems, which usually have simpler decision surfaces and tend to be less susceptible to the class-imbalance problem; (ii) we exploit the fact that the process follows a rigid feed cycle structure (i.e., batch-feed-batch-feed), which allows us to focus on the subproblems involving phase transitions as they occur during the process while discarding off-transition classifiers; and (iii) only one binary classifier is active at a time, which keeps the effective model complexity low. We further use a combination of logistic regression and Lasso (i.e., regularized logistic regression, RLR) as a wrapper to extract the most relevant features for the individual subproblems from the whole set of high-dimensional sensor data. We train different soft computing classifiers, including decision trees (DT), k-nearest neighbors (k-NN), support vector machines (SVM), and a custom-developed fuzzy classifier, and compare our method with conventional multi-class ML. Our results show a remarkable outperformance of the proposed method over static ML approaches in terms of accuracy and robustness. We achieved close-to-error-free feed phase classification, reducing misclassification rates by between 39% and 98.2% in 17 out of 20 investigated test cases, depending on feature set and classifier architecture. Models trained on features selected by RLR significantly outperformed those trained on features suggested by experts, and their predictive performance was considerably less affected by the choice of the lag parameter.


Subject(s)
Batch Cell Culture Techniques; Fermentation; Support Vector Machine; Algorithms; Decision Trees; Escherichia coli; Fuzzy Logic
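
The one-way switch mechanism described in the abstract above lends itself to a compact sketch: a chain of binary classifiers in which the active classifier hands over to the next one only after a fixed number of consecutive "transition" predictions. The threshold classifiers and signal below are stand-ins, not the trained multivariate models of the paper.

```python
import numpy as np

# A minimal sketch of the one-way switch: each classifier in the chain
# discriminates between two adjacent feed phases; the switch fires only
# after `lag` consecutive transition predictions, preventing premature
# phase changes, and can never move backwards.
class PhaseChain:
    def __init__(self, classifiers, lag=5):
        self.classifiers = classifiers  # callables: features -> 0 (stay) / 1 (next)
        self.lag = lag
        self.phase = 0
        self.streak = 0

    def update(self, features):
        if self.phase < len(self.classifiers):
            pred = self.classifiers[self.phase](features)
            self.streak = self.streak + 1 if pred == 1 else 0
            if self.streak >= self.lag:      # one-way switch fires
                self.phase += 1
                self.streak = 0
        return self.phase

# Hypothetical stand-in classifiers thresholding a single sensor value.
chain = PhaseChain([lambda f: int(f[0] > 1.0), lambda f: int(f[0] > 2.0)], lag=3)
signal = np.concatenate([np.full(10, 0.5), np.full(10, 1.5), np.full(10, 2.5)])
phases = [chain.update([v]) for v in signal]
print(phases)   # phase 0, then 1 after 3 consecutive hits, then 2
```
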
11.
Anal Bioanal Chem ; 409(3): 693-706, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27376358

ABSTRACT

In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiologically monitoring and controlling the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of biomass formation, are therefore smart alternatives. The reconciliation procedure combines mass and energy balances with accuracy estimates of the measured conversion rates, which have so far been chosen arbitrarily and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate its benefits: under industrially relevant conditions, the errors of the estimated biomass formation rate and specific substrate consumption rate could be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the raw-signal accuracy required to obtain predefined accuracies of the soft-sensor estimates, so that appropriate measurement devices and maintenance intervals can be selected. Using this workflow, we further demonstrate that the estimation accuracy of the soft-sensor can be increased substantially.


Subject(s)
Biomass; Biosensing Techniques/methods; Quality Control; Technology, Pharmaceutical/instrumentation; Technology, Pharmaceutical/methods; Biosensing Techniques/instrumentation
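
Propagating raw-signal uncertainty into a conversion rate, as the adaptive soft-sensor above does, can be illustrated with a simple Monte Carlo sketch for a gas-balance-based oxygen uptake rate. The balance is deliberately simplified and all sensor accuracies and operating values are assumptions, not the reconciliation procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Monte Carlo propagation of raw-signal uncertainty into a conversion rate.
# Example: oxygen uptake rate OUR ~ Q / Vm * (xO2_in - xO2_out) / V_broth,
# a simplified gas balance; constants and accuracies are illustrative.
n = 100_000
q_gas   = rng.normal(60.0, 0.6, n)       # aeration rate [L/h], ~1% accuracy
x_in    = rng.normal(0.2094, 0.0002, n)  # O2 fraction in (analyser accuracy)
x_out   = rng.normal(0.1950, 0.0002, n)  # O2 fraction out
v_broth = rng.normal(10.0, 0.05, n)      # broth volume [L]
v_molar = 24.0                           # molar gas volume [L/mol], fixed here

our = q_gas / v_molar * (x_in - x_out) / v_broth   # [mol/L/h]
print(f"OUR = {our.mean()*1000:.2f} +/- {our.std()*1000:.2f} mmol/L/h")
```

Running the propagation with candidate sensor accuracies, as above, is one way to read the paper's workflow for choosing measurement devices: tighten the assumed accuracy of each raw signal until the rate estimate meets a predefined precision target.
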
12.
Biotechnol Biofuels ; 9: 56, 2016.
Article in English | MEDLINE | ID: mdl-26962329

ABSTRACT

BACKGROUND: Enzymatic hydrolysis of cellulose involves the spatiotemporally correlated action of distinct polysaccharide-chain-cleaving activities confined to the surface of an insoluble substrate. Because cellulases differ in their preference for attacking crystalline as opposed to amorphous cellulose, the spatial distribution of structural order across the cellulose surface imposes additional constraints on the dynamic interplay between the enzymes. Reconstructing total system behavior from single-molecule activity parameters is a longstanding key goal in the field.

RESULTS: We have developed a stochastic, cellular automata-based modeling approach to describe the degradation of cellulosic material by a cellulase system at single-molecule resolution. Substrate morphology was modeled to represent the amorphous and crystalline phases as well as the different spatial orientations of the polysaccharide chains. The enzyme system model consisted of an internally chain-cleaving endoglucanase (EG) and two processively acting, reducing and non-reducing chain-end-cleaving cellobiohydrolases (CBHs). Substrate preference (amorphous: EG, CBH II; crystalline: CBH I) and characteristic frequencies for chain cleavage, processive movement, and dissociation were assigned from biochemical data. Once adsorbed, enzymes were allowed to reach surface-exposed substrate sites through "random-walk" lateral diffusion or processive motion. Simulations revealed that slow dissociation of processive enzymes at obstacles obstructing further movement resulted in local jamming of the cellulases, with a consequent delay in the degradation of the affected surface area. Exploiting validation against atomic force microscopy imaging, a unique opportunity opened up by the modeling approach, we show that the spatiotemporal characteristics of cellulose surface degradation by the system of synergizing cellulases were reproduced quantitatively at the nanometer resolution of the experimental data. This in turn gave useful predictions of the soluble sugar release rate.

CONCLUSIONS: Salient dynamic features of cellulose surface degradation by different cellulases acting in synergy were reproduced in simulations, in good agreement with evidence from high-resolution visualization experiments. Owing to the single-molecule resolution of the modeling approach, the utility of the presented model lies not only in predicting system behavior but also in elucidating the inherently complex (e.g., stochastic) phenomena involved in enzymatic cellulose degradation. It thus creates synergy with experiment to advance the mechanistic understanding needed for improved application.
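
The jamming mechanism highlighted in the results can be illustrated with a toy one-dimensional cellular automaton, far simpler than the published two-dimensional, multi-enzyme model: a processive enzyme hydrolyses along a chain but stalls at obstacles, which it passes only with a small per-step dissociation probability. All rates, sizes, and the lattice itself are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy cellular-automaton sketch: one cellulose chain as a 1D lattice
# (1 = intact glucose unit, 0 = hydrolysed). An endoglucanase (EG)
# attempts random internal cleavages; a processive cellobiohydrolase
# (CBH) moves from the chain end and stalls at obstacles, where it only
# rarely dissociates, producing the local "jamming" described above.
chain = np.ones(200, dtype=int)
obstacle = np.zeros(200, dtype=bool)
obstacle[rng.choice(200, 10, replace=False)] = True   # crystalline contacts

cbh_pos, p_dissociate = 0, 0.01
for step in range(5000):
    # EG: one random internal cleavage attempt per step
    i = rng.integers(1, 199)
    if chain[i] and not obstacle[i] and rng.random() < 0.02:
        chain[i] = 0
    # CBH: processive movement, stalled at obstacles
    if cbh_pos < 200:
        if obstacle[cbh_pos]:
            # jamming: only rarely dissociates and re-adsorbs past the obstacle
            if rng.random() < p_dissociate:
                cbh_pos += 1
        else:
            chain[cbh_pos] = 0   # hydrolyse and step one unit forward
            cbh_pos += 1

print(f"degraded fraction after 5000 steps: {1 - chain.mean():.2f}")
```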
