Results 1 - 20 of 28
1.
J Res Natl Inst Stand Technol ; 126: 126036, 2021.
Article in English | MEDLINE | ID: mdl-38469434

ABSTRACT

Three types of uncertainty exist in the estimation of the minimum fracture strength of a full-scale component or structure. The first, called the "model selection uncertainty," is in selecting a statistical distribution that best fits the laboratory test data. The second, called the "laboratory-scale strength uncertainty," is in estimating the model parameters of a specific distribution from which the minimum failure strength of a material at a certain confidence level is estimated using the laboratory test data. To extrapolate the laboratory-scale strength prediction to that of a full-scale component, a third uncertainty exists that can be called the "full-scale strength uncertainty." In this paper, we develop a three-step approach to estimating the minimum strength of a full-scale component using two metrics: one based on six goodness-of-fit and parameter-estimation-method criteria, and the other based on the uncertainty quantification of the so-called A-basis design allowable (99 % coverage at 95 % level of confidence) of the full-scale component. The three steps of our approach are: (1) find the "best" model for the sample data from a list of five candidates, namely, normal, two-parameter Weibull, three-parameter Weibull, two-parameter lognormal, and three-parameter lognormal; (2) for each model, estimate (2a) the parameters of that model with uncertainty using the sample data, and (2b) the minimum strength at the laboratory scale at 95 % level of confidence; (3) introduce the concept of "coverage" and estimate the full-scale allowable minimum strength of the component at 95 % level of confidence for two types of coverage commonly used in the aerospace industry, namely, 99 % (A-basis for critical parts) and 90 % (B-basis for less critical parts). This uncertainty-based approach is novel in all three steps: in step 1 we use a composite goodness-of-fit metric to rank and select the "best" distribution, in step 2 we introduce uncertainty quantification in estimating the parameters of each distribution, and in step 3 we introduce an uncertainty metric based on the estimates of the upper and lower tolerance limits of the so-called A-basis design allowable minimum strength. To illustrate the applicability of this uncertainty-based approach to a diverse group of data, we present results of our analysis for six sets of laboratory failure-strength data from four engineering materials. A discussion of the significance and limitations of this approach and some concluding remarks are included.
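
The three-step procedure lends itself to a compact computation. Below is a minimal sketch in Python/SciPy under stated assumptions: the goodness-of-fit ranking uses a single Kolmogorov-Smirnov distance in place of the paper's six-criterion composite metric, the tolerance bounds use the standard normal-theory noncentral-t k-factor applied to the best-fit sample, and the strength data are randomly generated stand-ins.

```python
# Sketch of the three-step strength-allowable workflow: fit five candidate
# distributions, rank them, then compute A-/B-basis lower tolerance limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
strengths = rng.weibull(8.0, size=50) * 700.0   # hypothetical failure strengths, MPa

# Step 1: fit each candidate and rank by KS distance (smaller = better fit).
candidates = {
    "normal": stats.norm,
    "weibull_2p": stats.weibull_min,   # floc=0 makes it two-parameter
    "weibull_3p": stats.weibull_min,
    "lognormal_2p": stats.lognorm,     # floc=0 makes it two-parameter
    "lognormal_3p": stats.lognorm,
}
fits = {}
for name, dist in candidates.items():
    kwargs = {"floc": 0} if name.endswith("_2p") else {}
    params = dist.fit(strengths, **kwargs)
    ks = stats.kstest(strengths, dist.cdf, args=params).statistic
    fits[name] = (ks, params)
best = min(fits, key=lambda n: fits[n][0])

# Steps 2-3: one-sided lower tolerance limits at 95 % confidence for 99 %
# coverage (A-basis) and 90 % coverage (B-basis) via the noncentral-t
# k-factor (exact for the normal model; used here for illustration).
n = len(strengths)
for label, coverage in (("A-basis", 0.99), ("B-basis", 0.90)):
    delta = stats.norm.ppf(coverage) * np.sqrt(n)
    k = stats.nct.ppf(0.95, df=n - 1, nc=delta) / np.sqrt(n)
    ltl = strengths.mean() - k * strengths.std(ddof=1)
    print(f"{best}: {label} lower tolerance limit = {ltl:.1f} MPa")
```

As expected, the A-basis bound comes out lower than the B-basis bound, since it must cover 99 % of the population rather than 90 %.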

2.
Powder Diffr ; 35(3)2020 Sep.
Article in English | MEDLINE | ID: mdl-34795466

ABSTRACT

The National Institute of Standards and Technology (NIST) certifies a suite of Standard Reference Materials (SRMs) to be used to evaluate specific aspects of the instrument performance of both X-ray and neutron powder diffractometers. This report describes SRM 640f, the seventh generation of this powder diffraction SRM, which is designed to be used primarily for calibrating powder diffractometers with respect to line position; it also can be used for the determination of the instrument profile function. It is certified with respect to the lattice parameter and consists of approximately 7.5 g of silicon powder prepared to minimize line broadening. A NIST-built diffractometer, incorporating many advanced design features, was used to certify the lattice parameter of the Si powder. Both statistical and systematic uncertainties have been assigned to yield a certified value for the lattice parameter at 22.5 °C of a = 0.5431144 ± 0.000008 nm.
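
A certified line-position standard of this kind is typically consumed downstream for instrument calibration. The sketch below, a rough illustration rather than part of the certification, assumes a Cu Kα1 wavelength and invented "measured" peak positions, and shows how expected Si peak positions follow from Bragg's law so that a 2θ zero offset can be read off.

```python
# Expected Si 2-theta positions from the certified lattice parameter, and an
# estimated goniometer zero offset from hypothetical measured positions.
import numpy as np

a = 0.5431144          # certified lattice parameter, nm (22.5 degrees C)
wavelength = 0.15406   # Cu K-alpha1, nm (assumed)
hkl = np.array([[1, 1, 1], [2, 2, 0], [3, 1, 1], [4, 0, 0], [3, 3, 1]])

d = a / np.sqrt((hkl ** 2).sum(axis=1))                 # d-spacings, nm
two_theta = 2 * np.degrees(np.arcsin(wavelength / (2 * d)))

measured = two_theta + 0.012                            # hypothetical raw peaks
zero_offset = np.mean(measured - two_theta)
print(f"estimated 2-theta zero offset = {zero_offset:.4f} deg")
```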

3.
Powder Diffr ; 35(1)2020.
Article in English | MEDLINE | ID: mdl-33311851

ABSTRACT

The National Institute of Standards and Technology (NIST) certifies a suite of Standard Reference Materials (SRMs) to evaluate specific aspects of instrument performance of both X-ray and neutron powder diffractometers. This report describes SRM 660c, the fourth generation of this powder diffraction SRM, which is used primarily for calibrating powder diffractometers with respect to line position and line shape for the determination of the instrument profile function (IPF). It is certified with respect to lattice parameter and consists of approximately 6 g of lanthanum hexaboride (LaB6) powder. To make this SRM applicable to the neutron diffraction community, the powder was prepared from an isotopically enriched 11B precursor material. The microstructure of the LaB6 powder was engineered specifically to yield a crystallite size above that where size broadening is typically observed and to minimize the crystallographic defects that lead to strain broadening. A NIST-built diffractometer, incorporating many advanced design features, was used to certify the lattice parameter of the LaB6 powder. Both Type A, statistical, and Type B, systematic, uncertainties have been assigned to yield a certified value for the lattice parameter at 22.5 °C of a = 0.415 682 6 ± 0.000 008 nm (95% confidence).

4.
Anal Chem ; 91(11): 7336-7345, 2019 06 04.
Article in English | MEDLINE | ID: mdl-31045344

ABSTRACT

Hydrogen-deuterium exchange mass spectrometry (HDX-MS) is an established, powerful tool for investigating protein-ligand interactions, protein folding, and protein dynamics. However, HDX-MS is still an emergent tool for quality control of biopharmaceuticals and for establishing dynamic similarity between a biosimilar and an innovator therapeutic. Because industry will conduct quality control and similarity measurements over a product lifetime and in multiple locations, an understanding of HDX-MS reproducibility is critical. To determine the reproducibility of continuous-labeling, bottom-up HDX-MS measurements, the present interlaboratory comparison project evaluated deuterium uptake data from the Fab fragment of the NISTmAb reference material (PDB: 5K8A) from 15 laboratories. Laboratories reported ∼89 800 centroid measurements for 430 proteolytic peptide sequences of the Fab fragment (∼78 900 centroids), giving ∼100 % coverage, and ∼10 900 centroid measurements for 77 peptide sequences of the Fc fragment. Nearly half of the peptide sequences are unique to the reporting laboratory, and only two sequences are reported by all laboratories. The majority of the laboratories (87 %) exhibited centroid-mass laboratory repeatability precisions of ⟨s_Lab⟩ ≤ (0.15 ± 0.01) Da (1σ_x̄). All laboratories achieved ⟨s_Lab⟩ ≤ 0.4 Da. For immersions of protein at T_HDX = (3.6 to 25) °C and for D2O exchange times of t_HDX = (30 s to 4 h), the reproducibility of back-exchange-corrected deuterium uptake measurements for the 15 laboratories is σ_reproducibility^(15 labs)(t_HDX) = (9.0 ± 0.9) % (1σ). A nine-laboratory cohort that immersed samples at T_HDX = 25 °C exhibited a reproducibility of σ_reproducibility^(25 °C cohort)(t_HDX) = (6.5 ± 0.6) % for back-exchange-corrected deuterium uptake measurements.


Subjects
Antibodies, Monoclonal/chemistry , Hydrogen-Deuterium Exchange Mass Spectrometry , Immunoglobulin Fab Fragments/analysis
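
As a concrete illustration of the two precision measures quoted in the abstract above, the sketch below computes a per-laboratory repeatability ⟨s_Lab⟩ and a cross-laboratory reproducibility (1σ, in percent) for a single peptide and exchange time; the centroid values, laboratory names, and undeuterated mass are all invented placeholders.

```python
# Repeatability and reproducibility bookkeeping for one peptide / one t_HDX.
import numpy as np

# centroids[lab] = replicate centroid masses (Da), fabricated for illustration
centroids = {
    "lab01": np.array([1254.61, 1254.58, 1254.63]),
    "lab02": np.array([1254.72, 1254.70, 1254.69]),
    "lab03": np.array([1254.55, 1254.66, 1254.60]),
}

# Repeatability: replicate standard deviation within each lab, then the mean
# <s_Lab> over labs (the quantity bounded by 0.15 Da for most labs above).
s_lab = {lab: x.std(ddof=1) for lab, x in centroids.items()}
mean_s_lab = np.mean(list(s_lab.values()))

# Reproducibility: relative spread of lab-mean uptake values across labs.
lab_means = np.array([x.mean() for x in centroids.values()])
undeuterated = 1253.90            # hypothetical undeuterated centroid mass, Da
uptake = lab_means - undeuterated
sigma_repro = 100 * uptake.std(ddof=1) / uptake.mean()   # percent, 1 sigma

print(f"<s_Lab> = {mean_s_lab:.3f} Da, reproducibility = {sigma_repro:.1f} %")
```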
5.
Anal Chem ; 90(8): 5066-5074, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29613771

ABSTRACT

As has long been understood, the noise on a spectrometric signal can be reduced by averaging over time, and the averaged noise is expected to decrease as t^(-1/2), that is, inversely with the square root of the data collection time. However, with contemporary capability for fast data collection and storage, we can retain and access a great deal more information about a signal train than just its average over time. During the same collection time, we can record the signal averaged over much shorter, equal, fixed periods. This is, then, the set of signals over submultiples of the total collection time. With a sufficiently large set of submultiples, the distribution of the signal's fluctuations over the submultiple periods of the data stream can be acquired at each wavelength (or frequency). From the autocorrelations of the submultiple sets, we find that only some fraction of these fluctuations consists of stochastic noise. Part of the fluctuations is what we call "fast drift," defined as drift over a time shorter than the complete measurement period of the average spectrum. In effect, what is usually assumed to be stochastic noise has a significant component of fast drift due to changes of conditions in the spectroscopic system. In addition, we show that the extreme values of the fluctuations are usually not balanced (equal magnitudes, equal probabilities) on either side of the mean or median without an inconveniently long measurement time; the data are almost inevitably biased. In other words, the data are collected in an unbalanced manner around the mean, and so the median provides a better measure of the true spectrum. As is shown here, by using the medians of these distributions, the signal-to-noise ratio of the spectrum can be increased and sampling bias reduced. The effect of this submultiple median data treatment is demonstrated for infrared, circular dichroism, and Raman spectrometry.
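
A small numerical experiment makes the submultiple-median idea concrete. The sketch below builds a synthetic signal train whose fluctuations include occasional one-sided excursions standing in for fast drift, then compares the mean and the median of the submultiple spectra; the spectrum shape, noise level, and drift model are invented for illustration.

```python
# Mean vs. median over submultiple periods of a synthetic signal train.
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_wavelengths = 256, 500
true_spectrum = np.exp(-np.linspace(-3, 3, n_wavelengths) ** 2)

# One row per submultiple period: stochastic noise plus rare one-sided
# excursions (a crude stand-in for unbalanced fast drift).
sub_spectra = (true_spectrum
               + 0.05 * rng.standard_normal((n_sub, n_wavelengths))
               + 0.5 * (rng.random((n_sub, n_wavelengths)) < 0.05))

mean_spec = sub_spectra.mean(axis=0)            # conventional time average
median_spec = np.median(sub_spectra, axis=0)    # submultiple median

rmse = lambda s: np.sqrt(np.mean((s - true_spectrum) ** 2))
print(f"RMSE mean: {rmse(mean_spec):.4f}, median: {rmse(median_spec):.4f}")
```

Because the excursions are one-sided, the mean is pulled away from the true spectrum while the median is largely unaffected, which is the bias-reduction effect the abstract describes.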

6.
Anal Bioanal Chem ; 410(8): 2095-2110, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29428991

ABSTRACT

The NISTmAb is a monoclonal antibody Reference Material from the National Institute of Standards and Technology; it is a class-representative IgG1κ intended to serve as a pre-competitive platform for harmonization and technology development in the biopharmaceutical industry. The publication series of which this paper is a part describes NIST's overall control strategy to ensure NISTmAb quality and availability over its lifecycle. In this paper, the development of a control strategy for monitoring NISTmAb size heterogeneity is described. Optimization and qualification of size heterogeneity measurements spanning a broad size range are described, including capillary electrophoresis-sodium dodecyl sulfate (CE-SDS), size exclusion chromatography (SEC), dynamic light scattering (DLS), and flow imaging analysis. This paper is intended to provide relevant details of NIST's size heterogeneity control strategy to facilitate implementation of the NISTmAb as a test molecule in the end user's laboratory. Graphical abstract: representative size exclusion chromatogram of the NIST monoclonal antibody (NISTmAb). The NISTmAb is a publicly available research tool intended to facilitate advancement of biopharmaceutical analytics. HMW = high molecular weight (trimer and dimer); LMW = low molecular weight (two fragment peaks). The peak labeled "buffer" is the void volume of the column, from the L-histidine background buffer.


Subjects
Antibodies, Monoclonal, Humanized/chemistry , Antibodies, Monoclonal/chemistry , Chromatography, Gel/methods , Dynamic Light Scattering/methods , Electrophoresis, Capillary/methods , Immunoglobulin G/chemistry , Protein Aggregates , Animals , Antibodies, Monoclonal/analysis , Antibodies, Monoclonal, Humanized/analysis , Chromatography, Gel/standards , Dynamic Light Scattering/standards , Electrophoresis, Capillary/standards , Humans , Immunoglobulin G/analysis , Limit of Detection , Mice , Models, Molecular , Quality Control , Reference Standards , Sodium Dodecyl Sulfate/chemistry
7.
Powder Diffr ; 33, 2018.
Article in English | MEDLINE | ID: mdl-30996514

ABSTRACT

The National Institute of Standards and Technology (NIST) certifies a suite of Standard Reference Materials (SRMs) to address specific aspects of the performance of X-ray powder diffraction instruments. This report describes SRM 1879b, the third generation of this powder diffraction SRM. SRM 1879b is intended for use in the preparation of calibration standards for the quantitative analysis of cristobalite by X-ray powder diffraction in accordance with National Institute for Occupational Safety and Health (NIOSH) Analytical Method 7500, or equivalent. A unit of SRM 1879b consists of approximately 5 g of cristobalite powder bottled in an argon atmosphere. It is certified with respect to crystalline phase purity, or amorphous phase content, and lattice parameter. Neutron powder diffraction, both time-of-flight and constant-wavelength, was used to certify the phase purity using SRM 676a as an internal standard. A NIST-built diffractometer, incorporating many advanced design features, was used for the lattice parameter certification measurements.

8.
J Res Natl Inst Stand Technol ; 121: 476-497, 2016.
Article in English | MEDLINE | ID: mdl-34434636

ABSTRACT

A new material has been certified to become Standard Reference Material (SRM) 2806b - Medium Test Dust in Hydraulic Fluid. SRM 2806b consists of trace polydisperse, irregularly shaped mineral dust particles suspended in hydraulic fluid. The certified values of SRM 2806b are the projected-area circular-equivalent diameters of the dust particles collected from the hydraulic fluid. The dimensional measurements were determined from the area of the collected dust particles using images obtained from automated scanning electron microscopy (SEM) followed by image analysis. Automated SEM and automated image analysis software allowed the processing of over 29 million particles. The dimensional calibration of the SEM images (actual length per pixel and thus the actual projected diameters) is traceable to the NIST Line Scale Interferometer (LSI) through a NIST-calibrated Geller MRS-4XY pitch standard. The certified diameters are correlated with the number concentration of particles larger than each diameter, referred to as the cumulative number size distribution. SRM 2806b is intended to be used to calibrate liquid-borne optical particle counters in conjunction with the reference method ISO 11171:2010.
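
The cumulative number size distribution named above is straightforward to compute from a list of particle diameters. The sketch below uses hypothetical lognormal diameters, an assumed fluid volume, and arbitrary threshold values in place of the ~29 million measured SEM diameters.

```python
# Cumulative number size distribution: concentration of particles with
# diameters greater than each threshold.
import numpy as np

rng = np.random.default_rng(2)
diam_um = rng.lognormal(mean=1.2, sigma=0.6, size=100_000)  # hypothetical, micrometers
fluid_volume_ml = 50.0                                      # assumed sample volume

thresholds = np.array([4.0, 6.0, 10.0, 14.0, 21.0, 25.0, 30.0, 38.0])
# count of particles strictly larger than each threshold, per mL of fluid
counts = (diam_um[None, :] > thresholds[:, None]).sum(axis=1)
concentration = counts / fluid_volume_ml

for d, c in zip(thresholds, concentration):
    print(f"> {d:5.1f} um : {c:10.1f} particles/mL")
```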

9.
Radiology ; 277(1): 124-33, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25989480

ABSTRACT

PURPOSE: To compare image resolution from iterative reconstruction with resolution from filtered back projection for low-contrast objects on phantom computed tomographic (CT) images across vendors and exposure levels. MATERIALS AND METHODS: Randomized repeat scans of an American College of Radiology CT accreditation phantom (module 2, low contrast) were performed for multiple radiation exposures, vendors, and vendor iterative reconstruction algorithms. Eleven volunteers were presented with 900 images by using a custom-designed graphical user interface to perform a task created specifically for this reader study. Results were analyzed by using statistical graphics and analysis of variance. RESULTS: Across three vendors (blinded as A, B, and C) and across three exposure levels, the mean correct classification rate was higher for iterative reconstruction than for filtered back projection (P < .01): 87.4% versus 81.3% at 20 mGy, 70.3% versus 63.9% at 12 mGy, and 61.0% versus 56.4% at 7.2 mGy. There was a significant difference in mean correct classification rate between vendor B and the other two vendors. Across all exposure levels, images obtained with vendor B's scanner outperformed those from the other vendors, with a mean correct classification rate of 74.4%, while the mean correct classification rates for vendors A and C were 68.1% and 68.3%, respectively. Across all readers, the mean correct classification rate for iterative reconstruction (73.0%) was higher than that for filtered back projection (67.0%). CONCLUSION: The potential exists to reduce radiation dose without compromising low-contrast detectability by using iterative reconstruction instead of filtered back projection. There is substantial variability across vendor reconstruction algorithms.


Subjects
Image Processing, Computer-Assisted , Phantoms, Imaging , Radiation Exposure , Tomography Scanners, X-Ray Computed , Tomography, X-Ray Computed
10.
Radiology ; 275(3): 725-34, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25686365

ABSTRACT

PURPOSE: To develop and validate a metric of computed tomographic (CT) image quality that incorporates the noise texture and resolution properties of an image. MATERIALS AND METHODS: Images of the American College of Radiology CT quality assurance phantom were acquired by using three commercial CT systems at seven dose levels with filtered back projection (FBP) and iterative reconstruction (IR). Image quality was characterized by the contrast-to-noise ratio (CNR) and a detectability index (d') that incorporated noise texture and spatial resolution. The measured CNR and d' were compared with a corresponding observer study by using the Spearman rank correlation coefficient to determine how well each metric reflects the ability of an observer to detect subtle lesions. Statistical significance of the correlation between each metric and observer performance was determined by using a Student t distribution; P values less than .05 indicated a significant correlation. Additionally, each metric was used to estimate the dose reduction potential of IR algorithms while maintaining image quality. RESULTS: Across all dose levels, scanner models, and reconstruction algorithms, the d' correlated strongly with observer performance in the corresponding observer study (ρ = 0.95; P < .001), whereas the CNR correlated weakly with observer performance (ρ = 0.31; P = .21). Furthermore, the d' showed that the dose-reduction capabilities differed between clinical implementations (range, 12%-35%) and were less than those predicted from the CNR (range, 50%-54%). CONCLUSION: The strong correlation between the observer performance and the d' indicates that the d' is superior to the CNR for the evaluation of CT image quality. Moreover, the results of this study indicate that the d' improves less than the CNR with the use of IR, which indicates less potential for IR dose reduction than previously thought.


Subjects
Image Processing, Computer-Assisted , Task Performance and Analysis , Tomography, X-Ray Computed/standards , Equipment Design , Signal-To-Noise Ratio , Tomography, X-Ray Computed/instrumentation
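
For readers wanting to see the shape of such a detectability metric, the sketch below evaluates a non-prewhitening-style d' from a task function, a resolution curve (TTF), and a noise power spectrum (NPS). The curves are invented Gaussian stand-ins, and this particular d' form is one common choice in the CT literature, not necessarily the exact formulation of the paper above.

```python
# Non-prewhitening detectability index from task, TTF, and NPS curves:
# d'^2 = [int (W*TTF)^2 df]^2 / int (W*TTF)^2 * NPS df   (radial integrals)
import numpy as np

f = np.linspace(0.01, 1.0, 512)            # spatial frequency, cycles/mm

W = np.exp(-(f / 0.15) ** 2)               # task function: low-contrast disk
ttf = np.exp(-(f / 0.45) ** 2)             # hypothetical resolution (TTF)
nps = 1e-3 * f * np.exp(-(f / 0.5) ** 2)   # hypothetical noise texture (NPS)

df = f[1] - f[0]
g = (W * ttf) ** 2 * 2 * np.pi * f         # radially weighted task response
num = (g.sum() * df) ** 2
den = (g * nps).sum() * df
d_prime = np.sqrt(num / den)
print(f"d' = {d_prime:.1f}")
```

Unlike a contrast-to-noise ratio, this index penalizes noise at the frequencies the task actually uses, which is why it tracks observer performance more closely when iterative reconstruction reshapes the noise texture.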
11.
J Res Natl Inst Stand Technol ; 118: 218-59, 2013.
Article in English | MEDLINE | ID: mdl-26401431

ABSTRACT

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art, NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate algorithm performance, and it thus serves as a valuable research platform.

12.
Appl Environ Microbiol ; 78(16): 5872-81, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22706055

ABSTRACT

Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on the recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that the target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense.


Subjects
Bacillus cereus/isolation & purification , Bacteriological Techniques/methods , Burkholderia/isolation & purification , Environmental Microbiology , Escherichia coli/isolation & purification , Specimen Handling/methods , Sensitivity and Specificity
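
The factor-screening logic of the study above can be prototyped in a few lines. The sketch below fabricates triplicate recovery values for each organism/solution/method combination and runs a quick one-way ANOVA per factor as a screen of relative impact; the recovery numbers, effect sizes, and the per-factor ANOVA are illustrative choices, not the study's actual design or analysis.

```python
# One-way ANOVA screen of three sampling factors on fabricated recovery data.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
organisms = ["B. cereus", "B. thailandensis", "E. coli"]
solutions = ["PBS", "PBST", "water", "water+Tween"]
methods = ["vortex", "sonicate"]

records = []
for org, sol, meth in itertools.product(organisms, solutions, methods):
    base = {"B. cereus": 60, "B. thailandensis": 75, "E. coli": 80}[org]
    bonus = 10 if "T" in sol else 0            # Tween-containing solutions
    recov = base + bonus + rng.normal(0, 5, size=3)   # triplicate wipes, %
    records += [(org, sol, meth, r) for r in recov]

def groups(index):
    keys = sorted({rec[index] for rec in records})
    return [[rec[3] for rec in records if rec[index] == k] for k in keys]

for name, idx in (("organism", 0), ("solution", 1), ("method", 2)):
    F, p = stats.f_oneway(*groups(idx))
    print(f"{name:9s}: F = {F:6.1f}, p = {p:.3g}")
```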
13.
Appl Environ Microbiol ; 77(7): 2374-80, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21296945

ABSTRACT

The need for precise and reliable collection of potential biothreat contaminants has motivated research toward a better understanding of the variability in biological surface sampling methods. In this context, the objective of this work was to determine the parameters affecting the efficiency of extracting Bacillus anthracis Sterne spores from commonly used wipe sampling materials and to describe performance using the interfacial energy concept. In addition, surface thermodynamics was applied to understand and predict surface sampling performance. Wipe materials were directly inoculated with known concentrations of B. anthracis spores and placed into extraction solutions, followed by sonication or vortexing. Experimental factors investigated included wipe material (polyester, cotton, and polyester-rayon), extraction solution (sterile deionized water [H2O], deionized water with 0.04% Tween 80 [H2O-T], phosphate-buffered saline [PBS], and PBS with 0.04% Tween 80 [PBST]), and physical dissociation method (vortexing or sonication). The most efficient extraction from wipes was observed for solutions containing the nonionic surfactant Tween 80. The increase in extraction efficiency due to surfactant addition was attributed to an attractive interfacial energy between Tween 80 and the centrifuge tube wall, which prevented spore adhesion. The extraction solution significantly impacted the extraction efficiency, as determined by statistical analysis (P < 0.05). Moreover, the extraction solution was the most important factor in extraction performance, followed by the wipe material. Polyester-rayon ranked as the most efficient wipe material for releasing spores into solution; however, no statistically significant difference between polyester-rayon and cotton was observed (P > 0.05). Vortexing provided higher spore recovery in H2O and H2O-T than sonication when all three wipe materials and the reference control were considered (P < 0.05).


Subjects
Bacillus anthracis/isolation & purification , Bacteriological Techniques/methods , Environmental Microbiology , Spores, Bacterial/isolation & purification , Buffers , Sonication , Specimen Handling/methods
14.
Cytometry A ; 79(7): 545-59, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21674772

ABSTRACT

The analysis of fluorescence microscopy images of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed, due solely to differences in imaging conditions or application of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree to which a cell object is underestimated or overestimated. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability.


Subjects
Algorithms , Cells/cytology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Animals , Mice , Rats
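
The under-/over-estimation idea behind the bivariate index above can be illustrated with two binary masks. The sketch below reports separately the fraction of a reference cell mask that a segmentation misses and the fraction of the segmentation that is spurious; the exact bivariate index in the paper may be defined differently, and the circular masks here are toys.

```python
# Separate under- and over-estimation scores for a segmentation vs. reference.
import numpy as np

yy, xx = np.mgrid[0:200, 0:200]
reference = (xx - 100) ** 2 + (yy - 100) ** 2 < 50 ** 2   # "true" cell mask
estimate = (xx - 105) ** 2 + (yy - 100) ** 2 < 46 ** 2    # segmented cell mask

ref_area = reference.sum()
est_area = estimate.sum()
overlap = (reference & estimate).sum()

under = 1 - overlap / ref_area          # missed reference pixels
over = (est_area - overlap) / est_area  # spurious segmented pixels
print(f"underestimation = {under:.3f}, overestimation = {over:.3f}")
```

Keeping the two error directions separate, rather than folding them into a single overlap score, is what lets such a metric say whether an algorithm tends to shrink or inflate cell objects.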
15.
J Res Natl Inst Stand Technol ; 116(5): 771-83, 2011.
Article in English | MEDLINE | ID: mdl-26989599

ABSTRACT

Experimenters characterize the behavior of simulation models for data communications networks by measuring multiple responses under selected parameter combinations. The resulting multivariate data may include redundant responses reflecting aspects of a smaller number of underlying behaviors. Reducing the dimension of multivariate responses can reveal the most significant model behaviors, allowing subsequent analyses to focus on one response per behavior. This paper investigates two methods for reducing dimension in multivariate data generated from simulation models. One method combines correlation analysis and clustering. The second method uses principal components analysis. We apply both methods to reduce a 22-dimensional dataset generated by a network simulator. We identify the decisions an analyst must make, and we compare the reductions suggested by the two methods. We have used these methods to identify significant behaviors in simulated networks, and we suspect they may also be applied to reduce the dimension of empirical data measured from real networks.
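
Both reduction routes can be prototyped in a few lines. The sketch below applies a greedy correlation grouping and an SVD-based principal components analysis to a toy 22-response matrix; the synthetic data, the 0.8 correlation cutoff, and the greedy grouping rule are illustrative choices, not the paper's procedure.

```python
# Two dimension-reduction routes for a runs-by-responses matrix.
import numpy as np

rng = np.random.default_rng(4)
base = rng.standard_normal((100, 4))                      # 4 latent behaviors
mix = rng.standard_normal((4, 22)) * (rng.random((4, 22)) > 0.6)
responses = base @ mix + 0.1 * rng.standard_normal((100, 22))

# Route 1: greedy correlation clustering -- one representative per cluster.
corr = np.corrcoef(responses, rowvar=False)
unassigned, clusters = list(range(22)), []
while unassigned:
    seed = unassigned.pop(0)
    members = [seed] + [j for j in unassigned if abs(corr[seed, j]) > 0.8]
    unassigned = [j for j in unassigned if j not in members]
    clusters.append(members)
print(f"{len(clusters)} clusters; representatives: {[c[0] for c in clusters]}")

# Route 2: PCA via SVD of the standardized response matrix.
z = (responses - responses.mean(0)) / responses.std(0, ddof=1)
s = np.linalg.svd(z, compute_uv=False)
var_explained = np.cumsum(s ** 2) / np.sum(s ** 2)
n_components = int(np.searchsorted(var_explained, 0.95) + 1)
print(f"{n_components} principal components explain 95 % of the variance")
```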

16.
Article in English | MEDLINE | ID: mdl-33042200

ABSTRACT

Applying geometric and dimensional tolerances (GD&T) to part features in computer-aided design (CAD) software is essential to ensure that the part will function properly and to guide downstream manufacturing and inspection processes. However, how CAD software implements the capabilities for a designer to apply GD&T to a part is not well characterized. Of course, CAD software vendors do their own internal testing of those capabilities, and users evaluate CAD software to verify that it satisfies their CAD modeling requirements. However, there has never been any rigorous public-domain testing of CAD software GD&T implementations. To improve that situation, the National Institute of Standards and Technology (NIST) has developed a system to test implementations of GD&T in CAD software. Representative part geometry with GD&T applied to features was modeled in four of the major CAD systems. Errors in the semantic representation and graphical presentation of the GD&T were collected and analyzed. The testing methodology, test results, and data analysis demonstrate how well the CAD system GD&T implementations perform. The project results can be used as a basis for future testing, methods, and standards to evaluate defects in GD&T applied to part features.

17.
J Res Natl Inst Stand Technol ; 113(4): 221-38, 2008.
Article in English | MEDLINE | ID: mdl-27096123

ABSTRACT

We present methods for measuring errors in the rendering of three-dimensional points, line segments, and polygons in pixel-based computer graphics systems. We present error metrics for each of these three cases. These methods are applied to rendering with OpenGL on two common hardware platforms under several rendering conditions. Results are presented and differences in measured errors are analyzed and characterized. We discuss possible extensions of this error analysis approach to other aspects of the process of generating visual representations of synthetic scenes.
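
As an indication of what a point-rendering error metric can look like, the sketch below compares ideal subpixel positions with the centers of the pixels a trivial nearest-pixel "renderer" lights, and reports the offsets in pixel units; the projection, renderer, and error statistic are simplified stand-ins for the OpenGL measurements described above.

```python
# Pixel-level rendering error for points: ideal position vs. lit pixel center.
import numpy as np

rng = np.random.default_rng(7)
ideal = rng.uniform(0, 100, size=(1000, 2))        # ideal subpixel positions

rendered = np.floor(ideal) + 0.5                   # nearest-pixel-center "renderer"
errors = np.linalg.norm(rendered - ideal, axis=1)  # per-point error, pixels
print(f"mean error {errors.mean():.3f} px, max {errors.max():.3f} px")
```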

18.
Commun Biol ; 1: 219, 2018.
Article in English | MEDLINE | ID: mdl-30534611

ABSTRACT

Reproducing, exchanging, comparing, and building on each other's work is foundational to technological advances. Advancing biotechnology calls for reliable reuse of engineered organisms. Reliable reuse of engineered organisms requires reproducible growth and productivity. Here, we identify the experimental factors that have the greatest effect on the growth and productivity of our engineered organisms in order to demonstrate reproducibility for biotechnology. We present a draft of a Minimum Information Standard for Engineered Organism Experiments (MIEO) based on this method. We evaluate the effect of 22 factors on Escherichia coli engineered to produce the small molecule lycopene, and 18 factors on E. coli engineered to produce red fluorescent protein. Container geometry and shaking have the greatest effect on product titer and yield. We reproduce our results under two different conditions of reproducibility: conditions of use (different fractional factorial experiments), and time (48 biological replicates performed on 12 different days over 4 months).
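
The fractional-factorial screening at the heart of the study above has a compact standard form. The sketch below builds a two-level 2^(4-1) design in ±1 coding, fabricates a titer response in which container and shaking dominate (mirroring the study's finding), and estimates each main effect as the difference of means between high and low settings; the factor list, generator, and response values are invented placeholders far smaller than the paper's 22-factor design.

```python
# Main-effect screening from a two-level fractional factorial design.
import numpy as np

rng = np.random.default_rng(5)
factors = ["container", "shaking", "inoculum", "temperature"]

# 2^(4-1) fractional factorial: 8 of the 16 runs, with generator D = A*B*C.
full = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
design = np.column_stack([full, full.prod(axis=1)])      # 8 runs x 4 factors

# Hypothetical titer: container and shaking dominate, as found in the study.
titer = 10 + 3.0 * design[:, 0] + 2.5 * design[:, 1] + rng.normal(0, 0.3, 8)

for j, name in enumerate(factors):
    effect = titer[design[:, j] == 1].mean() - titer[design[:, j] == -1].mean()
    print(f"{name:12s} main effect = {effect:+.2f}")
```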

19.
Article in English | MEDLINE | ID: mdl-33312086

ABSTRACT

Uncertainty in modeling the fatigue life of a full-scale component using experimental data at microscopic (Level 1), specimen (Level 2), and full-size (Level 3) scales is addressed by applying the statistical theory of prediction intervals, and that of tolerance intervals based on the concept of coverage, p. Using a nonlinear least squares fit algorithm and the physical assumption that the one-sided Lower Tolerance Limit (LTL), at 95% confidence level, of the fatigue life, i.e., the minimum cycles-to-failure, min N_f, of a full-scale component, cannot be negative as the lack or "failure" of coverage (F_p), defined as 1 - p, approaches zero, we develop a new fatigue life model in which the minimum cycles-to-failure, min N_f, at extremely low failure of coverage, F_p, can be estimated. Since the concept of coverage is closely related to that of an inspection strategy, and if one assumes that the predominant cause of failure of a full-size component is the "failure" of inspection or coverage, it is reasonable to equate the quantity F_p to a failure probability, FP, thereby leading to a new approach for estimating the frequency of in-service inspection of a full-size component. To illustrate this approach, we include a numerical example using the published fatigue data for an AISI 4340 steel (N. E. Dowling, Journal of Testing and Evaluation, ASTM, Vol. 1(4) (1973), 271-287) and a linear least squares fit to generate the necessary uncertainties for performing a dynamic risk analysis, where a graphical plot of an estimate of risk with uncertainty vs. a predicted most likely date of a high-consequence failure event becomes available. In addition, a nonlinear least squares logistic-function fit of the fatigue data yields a prediction of the statistical distribution of both the ultimate strength and the endurance limit.
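
The Level-2 regression step can be sketched compactly. Below, a least-squares fit of log fatigue life against log stress amplitude is followed by a one-sided 95 % lower prediction bound on life at a new stress level; the S-N pairs are invented stand-ins for the AISI 4340 data cited above, and the paper's tolerance-limit machinery (coverage p, F_p → 0) is not reproduced here.

```python
# Linear fit of log(N_f) vs. log(stress) with a one-sided lower 95 %
# prediction bound at a new stress level.
import numpy as np
from scipy import stats

stress = np.array([900, 800, 700, 600, 550, 500], dtype=float)  # MPa, hypothetical
cycles = np.array([2e3, 8e3, 4e4, 3e5, 1e6, 5e6])               # cycles to failure

x, y = np.log10(stress), np.log10(cycles)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (len(x) - 2))       # residual standard error

x_new = np.log10(650.0)                          # new stress level, MPa
y_hat = slope * x_new + intercept
se = s * np.sqrt(1 + 1/len(x) + (x_new - x.mean())**2 / ((x - x.mean())**2).sum())
t95 = stats.t.ppf(0.95, df=len(x) - 2)
print(f"predicted life at 650 MPa: {10**y_hat:.3g} cycles, "
      f"95 % lower bound: {10**(y_hat - t95*se):.3g} cycles")
```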

20.
J Am Soc Mass Spectrom ; 17(2): 246-52, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16413204

ABSTRACT

One of the most significant issues in any analytical practice is optimization. Optimization and calibration are key factors in quantitation. In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS), instrument optimization is a limitation restricting quantitation. Optimization requires an understanding of which parameters are most influential and of the effects of those parameters on the mass spectrum. This understanding is especially important when characterizing synthetic polymers by MALDI-TOF-MS, due to the breadth of the polymer molecular mass distribution (MMD). Two considerations are important in quantitation: additivity of signal and signal-to-noise ratio (S/N). In this study, the effects of several instrument parameters on the S/N of a polystyrene distribution were studied using an orthogonal experimental design. The instrument parameters examined included detector voltage, laser energy, delay time, extraction voltage, and lens voltage. Other parameters considered were polymer concentration and matrix. The results showed that detector voltage and delay time were the most influential instrument parameters for polystyrene with all-trans-retinoic acid (RA) as the matrix. These parameters, as well as laser energy, were the most influential for polystyrene with dithranol as the matrix.
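
The S/N figure of merit used in such an optimization can be computed directly from a spectrum. The sketch below does so for a synthetic polystyrene-like molecular mass distribution, taking the noise standard deviation from an analyte-free baseline region; the spectrum shape, noise level, and baseline window are illustrative assumptions only.

```python
# S/N of a broad polymer distribution: peak signal over baseline noise sigma.
import numpy as np

rng = np.random.default_rng(6)
mz = np.linspace(2000, 12000, 5000)
signal = 800 * np.exp(-((mz - 7000) / 1200) ** 2)   # broad polymer MMD
spectrum = signal + rng.normal(0, 12, mz.size)      # detector noise

baseline = spectrum[mz < 3000]                      # region with no analyte
snr = spectrum.max() / baseline.std(ddof=1)
print(f"S/N = {snr:.0f}")
```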
