Results 1 - 20 of 22
1.
J Res Natl Inst Stand Technol ; 126: 126036, 2021.
Article in English | MEDLINE | ID: mdl-38469434

ABSTRACT

Three types of uncertainties exist in the estimation of the minimum fracture strength of a full-scale component or structure. The first, to be called the "model selection uncertainty," is in selecting a statistical distribution that best fits the laboratory test data. The second, to be called the "laboratory-scale strength uncertainty," is in estimating model parameters of a specific distribution from which the minimum failure strength of a material at a certain confidence level is estimated using the laboratory test data. To extrapolate the laboratory-scale strength prediction to that of a full-scale component, a third uncertainty exists that can be called the "full-scale strength uncertainty." In this paper, we develop a three-step approach to estimating the minimum strength of a full-scale component using two metrics: one metric is based on six goodness-of-fit and parameter-estimation-method criteria, and the second metric is based on the uncertainty quantification of the so-called A-basis design allowable (99 % coverage at 95 % level of confidence) of the full-scale component. The three steps of our approach are: (1) Find the "best" model for the sample data from a list of five candidates, namely, normal, two-parameter Weibull, three-parameter Weibull, two-parameter lognormal, and three-parameter lognormal. (2) For each model, estimate (2a) the parameters of that model with uncertainty using the sample data, and (2b) the minimum strength at the laboratory scale at 95 % level of confidence. (3) Introduce the concept of "coverage" and estimate the full-scale allowable minimum strength of the component at 95 % level of confidence for two types of coverages commonly used in the aerospace industry, namely, 99 % (A-basis for critical parts) and 90 % (B-basis for less critical parts). This uncertainty-based approach is novel in all three steps: in step 1 we use a composite goodness-of-fit metric to rank and select the "best" distribution, in step 2 we introduce uncertainty quantification in estimating the parameters of each distribution, and in step 3 we introduce the concept of an uncertainty metric based on the estimates of the upper and lower tolerance limits of the so-called A-basis design allowable minimum strength. To illustrate the applicability of this uncertainty-based approach to a diverse group of data, we present results of our analysis for six sets of laboratory failure strength data from four engineering materials. A discussion of the significance and limitations of this approach and some concluding remarks are included.
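As a rough illustration of steps (2b) and (3), the sketch below computes one-sided lower tolerance bounds at 95 % confidence for 99 % coverage (A-basis) and 90 % coverage (B-basis) under a plain normal model, using the standard noncentral-t tolerance factor; the strength values and the normal-only assumption are illustrative and do not reproduce the paper's multi-distribution procedure.

```python
# Sketch: one-sided lower tolerance bounds (A-basis: 99 % coverage, B-basis: 90 %
# coverage, both at 95 % confidence) under a normal model, using the noncentral-t
# tolerance factor.  The sample values are invented, not from the paper.
import numpy as np
from scipy import stats

def lower_tolerance_bound(x, coverage=0.99, confidence=0.95):
    """Lower tolerance limit x_bar - k*s for a normal sample."""
    n = len(x)
    z_p = stats.norm.ppf(coverage)                                   # coverage quantile
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
    return np.mean(x) - k * np.std(x, ddof=1)

strength = np.array([512., 498., 505., 530., 488., 515., 502., 520.])  # MPa, illustrative
print("A-basis:", lower_tolerance_bound(strength, 0.99, 0.95))
print("B-basis:", lower_tolerance_bound(strength, 0.90, 0.95))
```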

2.
Article in English | MEDLINE | ID: mdl-33042200

ABSTRACT

Applying geometric and dimensional tolerances (GD&T) to part features in computer-aided design (CAD) software is essential both to ensure that the part will function properly and to guide downstream manufacturing and inspection processes. However, it is not well characterized how CAD software implements the capabilities for a designer to apply GD&T to a part. Of course, CAD software vendors do their own internal testing of those capabilities, and users evaluate CAD software to verify that it satisfies their CAD modeling requirements. However, there has never been any rigorous public-domain testing of CAD software GD&T implementations. To improve that situation, the National Institute of Standards and Technology (NIST) has developed a system to test implementations of GD&T in CAD software. Representative part geometry with GD&T applied to features was modeled in four of the major CAD systems. Errors in the semantic representation and graphical presentation of the GD&T were collected and analyzed. The testing methodology, test results, and data analysis demonstrate how well the CAD system GD&T implementations perform. The testing project results can be used as a basis for future testing, methods, and standards to evaluate defects in GD&T applied to part features.

3.
Anal Chem ; 91(11): 7336-7345, 2019 06 04.
Article in English | MEDLINE | ID: mdl-31045344

ABSTRACT

Hydrogen-deuterium exchange mass spectrometry (HDX-MS) is an established, powerful tool for investigating protein-ligand interactions, protein folding, and protein dynamics. However, HDX-MS is still an emergent tool for quality control of biopharmaceuticals and for establishing dynamic similarity between a biosimilar and an innovator therapeutic. Because industry will conduct quality control and similarity measurements over a product lifetime and in multiple locations, an understanding of HDX-MS reproducibility is critical. To determine the reproducibility of continuous-labeling, bottom-up HDX-MS measurements, the present interlaboratory comparison project evaluated deuterium uptake data from the Fab fragment of the NISTmAb reference material (PDB: 5K8A) from 15 laboratories. Laboratories reported ∼89 800 centroid measurements in total: ∼78 900 for 430 proteolytic peptide sequences of the Fab fragment, giving ∼100% coverage, and ∼10 900 for 77 peptide sequences of the Fc fragment. Nearly half of the peptide sequences are unique to the reporting laboratory, and only two sequences are reported by all laboratories. The majority of the laboratories (87%) exhibited centroid mass laboratory repeatability precisions of ⟨s_Lab⟩ ≤ (0.15 ± 0.01) Da (1σ_x̄). All laboratories achieved ⟨s_Lab⟩ ≤ 0.4 Da. For immersions of protein at T_HDX = (3.6 to 25) °C and for D2O exchange times of t_HDX = (30 s to 4 h), the reproducibility of back-exchange-corrected deuterium uptake measurements for the 15 laboratories is σ_reproducibility^(15 laboratories)(t_HDX) = (9.0 ± 0.9) % (1σ). A nine-laboratory cohort that immersed samples at T_HDX = 25 °C exhibited a reproducibility of σ_reproducibility^(25 °C cohort)(t_HDX) = (6.5 ± 0.6) % for back-exchange-corrected deuterium uptake measurements.
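A minimal sketch of the two precision measures discussed above, assuming replicate deuterium-uptake values per laboratory for a single peptide and time point; the data and the simple relative-standard-deviation definition of reproducibility are placeholders, not the study's exact estimators.

```python
# Sketch: pooled within-laboratory repeatability and a simple interlaboratory
# reproducibility (relative standard deviation of lab means) for one peptide and
# one exchange time.  The replicate uptake values are invented for illustration.
import numpy as np

uptake = {                       # lab -> replicate deuterium uptake values (Da)
    "lab01": [2.10, 2.15, 2.08],
    "lab02": [2.25, 2.19, 2.22],
    "lab03": [1.98, 2.05, 2.01],
}

# Within-lab repeatability: pool the replicate variances across laboratories.
vars_, dofs = zip(*[(np.var(v, ddof=1), len(v) - 1) for v in uptake.values()])
s_lab = np.sqrt(np.average(vars_, weights=dofs))

# Between-lab reproducibility as a relative standard deviation of the lab means,
# a rough analogue of the percent figures quoted in the abstract.
lab_means = np.array([np.mean(v) for v in uptake.values()])
rsd_repro = 100 * np.std(lab_means, ddof=1) / np.mean(lab_means)

print(f"pooled s_Lab = {s_lab:.3f} Da, reproducibility = {rsd_repro:.1f} %")
```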


Asunto(s)
Anticuerpos Monoclonales/química , Espectrometría de Masas de Intercambio de Hidrógeno-Deuterio , Fragmentos Fab de Inmunoglobulinas/análisis
4.
Anal Chem ; 90(8): 5066-5074, 2018 04 17.
Article in English | MEDLINE | ID: mdl-29613771

ABSTRACT

As has long been understood, the noise on a spectrometric signal can be reduced by averaging over time, and the averaged noise is expected to decrease as t^(1/2), the square root of the data collection time. However, with contemporary capability for fast data collection and storage, we can retain and access a great deal more information about a signal train than just its average over time. During the same collection time, we can record the signal averaged over much shorter, equal, fixed periods. This is, then, the set of signals over submultiples of the total collection time. With a sufficiently large set of submultiples, the distribution of the signal's fluctuations over the submultiple periods of the data stream can be acquired at each wavelength (or frequency). From the autocorrelations of submultiple sets, we find that only some fraction of these fluctuations consists of stochastic noise. Part of the fluctuations are what we call "fast drift", which is defined as drift over a time shorter than the complete measurement period of the average spectrum. In effect, what is usually assumed to be stochastic noise has a significant component of fast drift due to changes of conditions in the spectroscopic system. In addition, we show that the extreme values of the fluctuation of the signals are usually not balanced (equal magnitudes, equal probabilities) on either side of the mean or median without an inconveniently long measurement time; the data are almost inevitably biased. In other words, the data are collected in an unbalanced manner around the mean, and so the median provides a better measure of the true spectrum. As is shown here, by using the medians of these distributions, the signal-to-noise ratio of the spectrum can be increased and sampling bias reduced. The effect of this submultiple median data treatment is demonstrated for infrared, circular dichroism, and Raman spectrometry.
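The following sketch illustrates the submultiple median treatment on a simulated signal train whose fluctuations include occasional one-sided excursions; the simulated drift and spike model are assumptions for illustration only.

```python
# Sketch: compare the mean and the median of per-submultiple averages at each
# wavelength channel for a simulated signal train.  The drift and the occasional
# one-sided excursions are invented to illustrate the idea, not measured data.
import numpy as np

rng = np.random.default_rng(1)
n_periods, n_channels = 200, 64
true_spectrum = np.sin(np.linspace(0, 3, n_channels)) + 2.0

# Each submultiple period: true spectrum + stochastic noise + slow "fast drift".
drift = 0.05 * np.arange(n_periods)[:, None] / n_periods
periods = true_spectrum + drift + rng.normal(0.0, 0.05, (n_periods, n_channels))

# Occasional one-sided excursions make the fluctuation distribution unbalanced,
# which biases the mean more than the median.
spikes = (rng.random((n_periods, n_channels)) < 0.02) * \
         rng.uniform(0.3, 0.8, (n_periods, n_channels))
periods += spikes

mean_spec = periods.mean(axis=0)            # conventional time average
median_spec = np.median(periods, axis=0)    # submultiple median treatment

print("RMS error, mean  :", np.sqrt(np.mean((mean_spec - true_spectrum) ** 2)))
print("RMS error, median:", np.sqrt(np.mean((median_spec - true_spectrum) ** 2)))
```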

5.
Article in English | MEDLINE | ID: mdl-33312086

ABSTRACT

Uncertainty in modeling the fatigue life of a full-scale component using experimental data at microscopic (Level 1), specimen (Level 2), and full-size (Level 3) scales is addressed by applying the statistical theory of prediction intervals, and that of tolerance intervals based on the concept of coverage, p. Using a nonlinear least squares fit algorithm and the physical assumption that the one-sided Lower Tolerance Limit (LTL), at 95% confidence level, of the fatigue life, i.e., the minimum cycles-to-failure, min N_f, of a full-scale component, cannot be negative as the lack or "Failure" of coverage (F_p), defined as 1 - p, approaches zero, we develop a new fatigue life model in which the minimum cycles-to-failure, min N_f, at extremely low "Failure" of coverage, F_p, can be estimated. Since the concept of coverage is closely related to that of an inspection strategy, and if one assumes that the predominant cause of failure of a full-size component is the "Failure" of inspection or coverage, it is reasonable to equate the quantity F_p to a Failure Probability, FP, thereby leading to a new approach to estimating the frequency of in-service inspection of a full-size component. To illustrate this approach, we include a numerical example using the published fatigue data for an AISI 4340 steel (N.E. Dowling, Journal of Testing and Evaluation, ASTM, Vol. 1(4) (1973), 271-287) and a linear least squares fit to generate the necessary uncertainties for performing a dynamic risk analysis, where a graphical plot of an estimate of risk with uncertainty vs. a predicted most likely date of a high-consequence failure event becomes available. In addition, a nonlinear least squares logistic function fit of the fatigue data yields a prediction of the statistical distribution of both the ultimate strength and the endurance limit.
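To illustrate the interval machinery referred to above, here is a minimal sketch that fits log-life against log-stress by linear least squares and evaluates a one-sided 95 % lower prediction bound on cycles to failure; the data are invented and the linear model is a stand-in for the paper's tolerance-limit fatigue model, not a reproduction of it.

```python
# Sketch: one-sided lower 95 % prediction bound on log10(cycles to failure) from a
# linear least-squares fit of log-life vs. log-stress.  The data are invented,
# not Dowling's AISI 4340 results.
import numpy as np
from scipy import stats

stress = np.array([900., 800., 700., 600., 500.])          # MPa (illustrative)
cycles = np.array([2.0e3, 8.5e3, 4.1e4, 2.3e5, 1.9e6])      # cycles to failure

x, y = np.log10(stress), np.log10(cycles)
n = len(x)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (n - 2))                         # residual std. dev.

def lower_prediction_bound(x0, conf=0.95):
    """Lower one-sided prediction bound for a future log10-life at log-stress x0."""
    se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / ((x - x.mean())**2).sum())
    t = stats.t.ppf(conf, df=n - 2)
    return slope * x0 + intercept - t * se

x0 = np.log10(650.0)
print("min N_f (95 % lower prediction bound):", 10 ** lower_prediction_bound(x0))
```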

6.
Powder Diffr ; 33, 2018.
Article in English | MEDLINE | ID: mdl-30996514

ABSTRACT

The National Institute of Standards and Technology (NIST) certifies a suite of Standard Reference Materials (SRMs) to address specific aspects of the performance of X-ray powder diffraction instruments. This report describes SRM 1879b, the third generation of this powder diffraction SRM. SRM 1879b is intended for use in the preparation of calibration standards for the quantitative analyses of cristobalite by X-ray powder diffraction in accordance with National Institute for Occupational Safety and Health (NIOSH) Analytical Method 7500, or equivalent. A unit of SRM 1879b consists of approximately 5 g of cristobalite powder bottled in an argon atmosphere. It is certified with respect to crystalline phase purity, or amorphous phase content, and lattice parameter. Neutron powder diffraction, both time-of-flight and constant-wavelength, was used to certify the phase purity using SRM 676a as an internal standard. A NIST-built diffractometer incorporating many advanced design features was used for the certification measurements of the lattice parameters.

7.
J Chromatogr A ; 1473: 122-132, 2016 Nov 18.
Article in English | MEDLINE | ID: mdl-27802881

ABSTRACT

Asymmetric flow field flow fractionation (AF4) has several instrumental factors that may have a direct effect on separation performance. A sensitivity analysis was applied to ascertain the relative importance of AF4 primary instrument factor settings for the separation of a complex environmental sample. The analysis evaluated the impact of five instrumental factors, namely cross flow, ramp time, focus flow, injection volume, and run buffer concentration, on the multi-angle light scattering measurement of natural organic matter (NOM) molar mass (MM). A 2^(5-1) orthogonal fractional factorial design was used to minimize analysis time while preserving the accuracy and robustness in the determination of the main effects and interactions between any two instrumental factors. By assuming that separations resulting in smaller MM measurements would be more accurate, the analysis produced a ranked list of effect estimates for factors and interactions of factors based on their relative importance in minimizing the MM. The most important and statistically significant AF4 instrumental factors were buffer concentration and cross flow. The least important was ramp time. A parallel 2^(5-2) orthogonal fractional factorial design was also employed on five environmental factors for synthetic natural water samples containing silver nanoparticles (NPs), namely: NP concentration, NP size, NOM concentration, specific conductance, and pH. None of the water quality characteristic effects or interactions were found to be significant in minimizing the measured MM; however, the interaction between NP concentration and NP size was an important effect when considering NOM recovery. This work presents a structured approach for the rigorous assessment of AF4 instrument factors and optimal settings for the separation of complex samples utilizing efficient orthogonal fractional factorial design and appropriate graphical analysis.
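A small sketch of how a 2^(5-1) half-fraction can be generated and main effects estimated from it; the factor names echo the abstract, but the defining relation, response values, and effect magnitudes are illustrative assumptions.

```python
# Sketch: generate a 2^(5-1) half-fraction (defining relation I = ABCDE) and
# estimate main effects from a response vector.  Factor names and responses are
# placeholders, not the AF4 settings or MM values from the study.
import itertools
import numpy as np

factors = ["cross_flow", "ramp_time", "focus_flow", "inj_volume", "buffer_conc"]

# Full 2^4 design in the first four factors; the fifth column is their product,
# which yields a resolution-V half fraction of the full 2^5 design.
base = np.array(list(itertools.product([-1, 1], repeat=4)))
design = np.column_stack([base, base.prod(axis=1)])

rng = np.random.default_rng(0)
response = 50 + 8 * design[:, 4] + 5 * design[:, 0] + rng.normal(0, 1, len(design))

# Main effect = mean(response at +1) - mean(response at -1) for each column.
effects = {f: response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
           for j, f in enumerate(factors)}
for f, e in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{f:12s} effect ≈ {e:+.2f}")
```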


Subject(s)
Chemistry Techniques, Analytical/methods , Field Flow Fractionation , Light , Nanoparticles/analysis , Scattering, Radiation , Silver/analysis , Particle Size , Water Pollutants, Chemical/analysis
8.
J Am Dent Assoc ; 147(6): 394-404, 2016 06.
Article in English | MEDLINE | ID: mdl-27017181

ABSTRACT

BACKGROUND: In this study, the authors conducted an alveolar osteitis (AO) risk assessment and global sensitivity meta-analysis within populations using oral contraceptives (OCs). Sex, smoking, and timing within the menstrual cycle were considered as factors. TYPES OF STUDIES REVIEWED: Eligibility criteria for inclusion of a study in the meta-analysis were experimental or medical record survey data evaluating AO and OC use, the ability to draw pairwise comparisons for the factors of interest, and a description of the number of AO events relative to the number of participants in the respective group. RESULTS: The risk ratio of AO in females not using OCs relative to males was 1.2 (P ≤ .05). Among females, OC use significantly increased (P ≤ .05) the average risk of AO occurrence nearly 2-fold (13.9% versus 7.5%). There was no statistical evidence of lower risk in females menstruating at the time of exodontia. In 85.7% of the studies, smokers had an overall higher rate (P ≤ .05) of AO than did nonsmokers. CONCLUSIONS AND PRACTICAL IMPLICATIONS: To mitigate the increased risk of AO occurrence in females, the dentist should be cognizant of patients' use of OCs and tobacco smoking.
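For readers unfamiliar with the risk-ratio arithmetic underlying such comparisons, a minimal sketch follows; the event counts are invented placeholders, not the pooled counts from this meta-analysis.

```python
# Sketch: risk ratio of alveolar osteitis for OC users vs. non-users, with a
# 95 % confidence interval computed on the log scale.  The counts are invented.
import numpy as np
from scipy import stats

a, n1 = 28, 200   # AO events / extractions among OC users (placeholder)
c, n0 = 15, 200   # AO events / extractions among non-users (placeholder)

rr = (a / n1) / (c / n0)
se_log = np.sqrt(1/a - 1/n1 + 1/c - 1/n0)            # SE of ln(RR) for risk ratios
z = stats.norm.ppf(0.975)
lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se_log)
print(f"RR = {rr:.2f}, 95 % CI [{lo:.2f}, {hi:.2f}]")
```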


Subject(s)
Contraceptives, Oral/adverse effects , Dry Socket , Tooth Extraction , Female , Humans , Risk Assessment , Risk Factors , Smoking
9.
Anal Chim Acta ; 886: 207-13, 2015 Jul 30.
Article in English | MEDLINE | ID: mdl-26320655

ABSTRACT

The analysis of natural and otherwise complex samples is challenging and yields uncertainty about the accuracy and precision of measurements. Here we present a practical tool to assess relative accuracy among separation protocols for techniques using light scattering detection. Due to the highly non-linear relationship between particle size and the intensity of scattered light, a few large particles may obfuscate greater numbers of small particles. Therefore, insufficiently separated mixtures may result in an overestimate of the average measured particle size. Complete separation of complex samples is needed to mitigate this challenge. A separation protocol can be considered improved if the average measured size is smaller than that obtained with a previous separation protocol. Further, the protocol resulting in the smallest average measured particle size yields the best separation among those explored. If the differential in average measured size between protocols is less than the measurement uncertainty, then the selected protocols are of equivalent precision. As a demonstration, this assessment metric is applied to optimization of cross flow (V_x) protocols in asymmetric flow field flow fractionation (AF4) separation interfaced with online quasi-elastic light scattering (QELS) detection using mixtures of polystyrene beads spanning a large size range. Using this assessment metric, the V_x parameter was modulated to improve separation until the average measured size of the mixture was in statistical agreement with the calculated average size of particles in the mixture. While we demonstrate this metric by improving AF4 V_x protocols, it can be applied to any given separation parameters for separation techniques that employ dynamic light scattering detectors.
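The size-overestimation argument can be made concrete with a short sketch: under a Rayleigh-like intensity ∝ d^6 weighting, a few large particles dominate the intensity-weighted mean diameter of an unseparated mixture. The particle counts and sizes are illustrative.

```python
# Sketch: why incomplete separation inflates light-scattering size estimates.
# With Rayleigh-like scattering (intensity ~ d^6), a few large particles dominate
# the intensity-weighted mean diameter of a mixture.  Values are illustrative.
import numpy as np

diameters = np.array([60.0] * 990 + [300.0] * 10)     # nm: mostly small, few large
intensity = diameters ** 6                             # relative scattered intensity

number_mean = diameters.mean()
intensity_mean = np.average(diameters, weights=intensity)

print(f"number-weighted mean   : {number_mean:.0f} nm")
print(f"intensity-weighted mean: {intensity_mean:.0f} nm")   # pulled toward 300 nm
```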


Subject(s)
Nanoparticles/chemistry , Polystyrenes/chemistry , Field Flow Fractionation , Light , Particle Size , Scattering, Radiation
10.
Radiology ; 275(3): 725-34, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25686365

ABSTRACT

PURPOSE: To develop and validate a metric of computed tomographic (CT) image quality that incorporates the noise texture and resolution properties of an image. MATERIALS AND METHODS: Images of the American College of Radiology CT quality assurance phantom were acquired by using three commercial CT systems at seven dose levels with filtered back projection (FBP) and iterative reconstruction (IR). Image quality was characterized by the contrast-to-noise ratio (CNR) and a detectability index (d') that incorporated noise texture and spatial resolution. The measured CNR and d' were compared with a corresponding observer study by using the Spearman rank correlation coefficient to determine how well each metric reflects the ability of an observer to detect subtle lesions. Statistical significance of the correlation between each metric and observer performance was determined by using a Student t distribution; P values less than .05 indicated a significant correlation. Additionally, each metric was used to estimate the dose reduction potential of IR algorithms while maintaining image quality. RESULTS: Across all dose levels, scanner models, and reconstruction algorithms, the d' correlated strongly with observer performance in the corresponding observer study (ρ = 0.95; P < .001), whereas the CNR correlated weakly with observer performance (ρ = 0.31; P = .21). Furthermore, the d' showed that the dose-reduction capabilities differed between clinical implementations (range, 12%-35%) and were less than those predicted from the CNR (range, 50%-54%). CONCLUSION: The strong correlation between the observer performance and the d' indicates that the d' is superior to the CNR for the evaluation of CT image quality. Moreover, the results of this study indicate that the d' improves less than the CNR with the use of IR, which indicates less potential for IR dose reduction than previously thought.
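As a sketch of how a detectability index can fold resolution and noise texture together, the code below evaluates a common non-prewhitening form of d' on a discrete frequency grid; the task function, TTF, and NPS used here are synthetic placeholders, and this form is not necessarily the exact observer model used in the study.

```python
# Sketch: a non-prewhitening detectability index d' computed on a discrete
# frequency grid from a task function W, a task transfer function TTF, and a
# noise power spectrum NPS.  All arrays are synthetic placeholders.
import numpy as np

f = np.fft.fftfreq(128, d=0.05)                       # spatial frequencies (1/mm)
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)

W = np.exp(-(fr / 0.5) ** 2)                          # task: low-contrast blob (illustrative)
ttf = np.exp(-(fr / 0.7) ** 2)                        # resolution (TTF), illustrative
nps = 1e-3 * fr * np.exp(-(fr / 1.0) ** 2) + 1e-6     # ramp-like CT NPS, illustrative

df2 = (f[1] - f[0]) ** 2                              # frequency-bin area
num = (np.sum(W**2 * ttf**2) * df2) ** 2
den = np.sum(W**2 * ttf**2 * nps) * df2
d_prime = np.sqrt(num / den)
print(f"d' ≈ {d_prime:.1f}")
```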


Subject(s)
Image Processing, Computer-Assisted , Task Performance and Analysis , Tomography, X-Ray Computed/standards , Equipment Design , Signal-To-Noise Ratio , Tomography, X-Ray Computed/instrumentation
11.
J Res Natl Inst Stand Technol ; 118: 218-59, 2013.
Article in English | MEDLINE | ID: mdl-26401431

ABSTRACT

The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

12.
Appl Environ Microbiol ; 78(16): 5872-81, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22706055

ABSTRACT

Environmental sampling for microbiological contaminants is a key component of hygiene monitoring and risk characterization practices utilized across diverse fields of application. However, confidence in surface sampling results, both in the field and in controlled laboratory studies, has been undermined by large variation in sampling performance results. Sources of variation include controlled parameters, such as sampling materials and processing methods, which often differ among studies, as well as random and systematic errors; however, the relative contributions of these factors remain unclear. The objective of this study was to determine the relative impacts of sample processing methods, including extraction solution and physical dissociation method (vortexing and sonication), on recovery of Gram-positive (Bacillus cereus) and Gram-negative (Burkholderia thailandensis and Escherichia coli) bacteria from directly inoculated wipes. This work showed that target organism had the largest impact on extraction efficiency and recovery precision, as measured by traditional colony counts. The physical dissociation method (PDM) had negligible impact, while the effect of the extraction solution was organism dependent. Overall, however, extraction of organisms from wipes using phosphate-buffered saline with 0.04% Tween 80 (PBST) resulted in the highest mean recovery across all three organisms. The results from this study contribute to a better understanding of the factors that influence sampling performance, which is critical to the development of efficient and reliable sampling methodologies relevant to public health and biodefense.


Subject(s)
Bacillus cereus/isolation & purification , Bacteriological Techniques/methods , Burkholderia/isolation & purification , Environmental Microbiology , Escherichia coli/isolation & purification , Specimen Handling/methods , Sensitivity and Specificity
13.
Cytometry A ; 79(7): 545-59, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21674772

ABSTRACT

The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability.
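A plausible stand-in for a bivariate under/over-estimation measure is sketched below; it is not the paper's exact similarity index, and the toy masks are illustrative.

```python
# Sketch: a simple bivariate under/over-estimation measure for a segmented cell
# mask versus a reference mask.  This is a plausible stand-in for the paper's
# bivariate similarity index, not its exact definition.
import numpy as np

def under_over(reference: np.ndarray, segmented: np.ndarray):
    """Return (missed fraction of reference, extra fraction relative to reference)."""
    ref, seg = reference.astype(bool), segmented.astype(bool)
    ref_area = ref.sum()
    under = (ref & ~seg).sum() / ref_area     # reference pixels the algorithm missed
    over = (~ref & seg).sum() / ref_area      # pixels claimed beyond the reference
    return under, over

ref = np.zeros((64, 64), bool); ref[20:44, 20:44] = True    # toy "manual" cell mask
seg = np.zeros((64, 64), bool); seg[22:48, 18:42] = True    # toy algorithm output
print("under, over =", under_over(ref, seg))
```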


Subject(s)
Algorithms , Cells/cytology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Animals , Mice , Rats
14.
Acta Crystallogr A ; 67(Pt 4): 357-67, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21694474

ABSTRACT

A non-diffracting surface layer exists at any boundary of a crystal and can comprise a mass fraction of several percent in a finely divided solid. This has led to the long-standing issue of amorphous content in standards for quantitative phase analysis (QPA). NIST standard reference material (SRM) 676a is a corundum (α-Al2O3) powder, certified with respect to phase purity for use as an internal standard in powder diffraction QPA. The amorphous content of SRM 676a is determined by comparing diffraction data from mixtures with samples of silicon powders that were engineered to vary in their specific surface area. Under the (supported) assumption that the thickness of an amorphous surface layer on Si was invariant, this provided a method to control the crystalline/amorphous ratio of the silicon components of 50/50 weight mixtures of SRM 676a with silicon. Powder diffraction experiments utilizing neutron time-of-flight and 25 keV and 67 keV X-ray energies quantified the crystalline phase fractions from a series of specimens. Results from Rietveld analyses of these data, which included a model for extinction effects in the silicon, were extrapolated to the limit of zero amorphous content of the Si powder. The certified phase purity of SRM 676a is 99.02% ± 1.11% (95% confidence interval). This novel certification method permits quantification of amorphous content for any sample of interest by spiking with SRM 676a.
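The extrapolation step can be sketched as a simple linear fit of apparent purity against the specific surface area of the Si spike, evaluated at zero; the surface-area and purity values are invented, and the weighting and extinction modeling of the actual Rietveld-based analysis are omitted.

```python
# Sketch: extrapolating the apparent crystalline purity of the corundum standard
# to the limit of zero amorphous content in the Si spike, using the Si specific
# surface area as a proxy.  All numbers are invented for illustration.
import numpy as np
from scipy import stats

ssa = np.array([1.0, 2.5, 4.0, 6.0, 9.0])             # m^2/g of the Si powders
purity = np.array([99.4, 99.2, 99.1, 98.8, 98.5])      # apparent purity (%)

res = stats.linregress(ssa, purity)                    # scipy >= 1.6 for intercept_stderr
ci95 = stats.t.ppf(0.975, len(ssa) - 2) * res.intercept_stderr
print(f"purity at zero amorphous Si ≈ {res.intercept:.2f} % ± {ci95:.2f} %")
```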

15.
Appl Environ Microbiol ; 77(7): 2374-80, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21296945

ABSTRACT

The need for precise and reliable collection of potential biothreat contaminants has motivated research toward a better understanding of the variability in biological surface sampling methods. In this context, the objective of this work was to determine parameters affecting the efficiency of extracting Bacillus anthracis Sterne spores from commonly used wipe sampling materials and to describe performance using the interfacial energy concept. In addition, surface thermodynamics was applied to understand and predict surface sampling performance. Wipe materials were directly inoculated with known concentrations of B. anthracis spores and placed into extraction solutions, followed by sonication or vortexing. Experimental factors investigated included wipe material (polyester, cotton, and polyester-rayon), extraction solution (sterile deionized water [H2O], deionized water with 0.04% Tween 80 [H2O-T], phosphate-buffered saline [PBS], and PBS with 0.04% Tween 80 [PBST]), and physical dissociation method (vortexing or sonication). The most efficient extraction from wipes was observed for solutions containing the nonionic surfactant Tween 80. The increase in extraction efficiency due to surfactant addition was attributed to an attractive interfacial energy between Tween 80 and the centrifuge tube wall, which prevented spore adhesion. Extraction solution significantly impacted the extraction efficiency, as determined by statistical analysis (P < 0.05). Moreover, the extraction solution was the most important factor in extraction performance, followed by the wipe material. Polyester-rayon ranked as the most efficient wipe material for releasing spores into solution; however, no statistically significant difference between polyester-rayon and cotton was observed (P > 0.05). Vortexing provided higher spore recovery in H2O and H2O-T than sonication when all three wipe materials and the reference control were considered (P < 0.05).
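In the spirit of the significance tests quoted above, a minimal sketch of a one-way ANOVA across extraction solutions follows; the recovery percentages are illustrative placeholders, not the study's data.

```python
# Sketch: testing whether extraction solution affects spore recovery with a
# one-way ANOVA.  The recovery percentages are invented placeholders.
import numpy as np
from scipy import stats

recovery = {
    "H2O":   [52., 49., 55., 50.],
    "H2O-T": [68., 71., 66., 70.],
    "PBS":   [55., 58., 53., 57.],
    "PBST":  [74., 72., 76., 75.],
}

f_stat, p = stats.f_oneway(*recovery.values())
print(f"F = {f_stat:.1f}, p = {p:.3g}; solution effect significant at 0.05: {p < 0.05}")
```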


Subject(s)
Bacillus anthracis/isolation & purification , Bacteriological Techniques/methods , Environmental Microbiology , Spores, Bacterial/isolation & purification , Buffers , Sonication , Specimen Handling/methods
16.
J Res Natl Inst Stand Technol ; 116(5): 771-83, 2011.
Article in English | MEDLINE | ID: mdl-26989599

ABSTRACT

Experimenters characterize the behavior of simulation models for data communications networks by measuring multiple responses under selected parameter combinations. The resulting multivariate data may include redundant responses reflecting aspects of a smaller number of underlying behaviors. Reducing the dimension of multivariate responses can reveal the most significant model behaviors, allowing subsequent analyses to focus on one response per behavior. This paper investigates two methods for reducing dimension in multivariate data generated from simulation models. One method combines correlation analysis and clustering. The second method uses principal components analysis. We apply both methods to reduce a 22-dimensional dataset generated by a network simulator. We identify issues that an analyst must decide, and we compare the reductions suggested by the methods. We have used these methods to identify significant behaviors in simulated networks, and we suspect they may be applied to reduce the dimension of empirical data measured from real networks.
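A compact sketch of the two reduction routes compared in the paper, principal components via SVD and a correlation-based grouping of responses, applied to a placeholder 500 x 22 response matrix; thresholds and data are illustrative assumptions.

```python
# Sketch: reducing a multivariate response matrix with PCA (via SVD) and with a
# simple correlation-based grouping of redundant responses.  The 22-column
# response matrix is random filler, not the network simulator output.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 22))                          # runs x responses (placeholder)
X[:, 5] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=500)    # make two responses redundant

# PCA via SVD of the standardized matrix: keep components explaining ~90 % variance.
Z = (X - X.mean(0)) / X.std(0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.90)) + 1
print("principal components retained:", k)

# Correlation route: flag strongly correlated response pairs as one behavior.
corr = np.corrcoef(Z, rowvar=False)
pairs = [(i, j) for i in range(22) for j in range(i + 1, 22) if abs(corr[i, j]) > 0.8]
print("highly correlated response pairs:", pairs)
```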

17.
Environ Sci Technol ; 44(4): 1386-91, 2010 Feb 15.
Article in English | MEDLINE | ID: mdl-20092299

ABSTRACT

Potable water treatment facilities may become an important barrier in limiting human exposure to engineered nanoparticles (ENPs) as ENPs begin to contaminate natural aquatic systems. Coagulation of ENPs will likely be a major process that controls ENP fate and subsequent removal from the aqueous phase. The influence that source water quality has on ENP coagulation is still relatively unknown. The current study uses a 2^3 x 2^(4-1) fractional factorial design to identify seven key surface water constituents that affect multiwall carbon nanotube (MWCNT) coagulation. These seven factors include: influent concentrations of kaolin, organic matter (OM), alginate, and MWCNTs; type and dosage of coagulant; and method of MWCNT stabilization. MWCNT removal was most affected by coagulant type and dosage, with alum outperforming ferric chloride at circumneutral pH. None of the other factors were universally significant; instead, their importance depended on coagulant type, dose, and method of stabilization. In all cases where factors were found to have a significant impact on MWCNT removal, however, the relationship was consistent: higher influent concentrations of kaolin and alginate improved MWCNT removal, while higher influent concentrations of OM hindered MWCNT coagulation. Once MWCNTs are released into the natural environment, their coagulation behavior will be determined by the type and quantity of pollutants (i.e., factors) present in the aquatic environment and governed by the same mechanisms that influence the colloidal stability of "natural" nanoparticles.


Subject(s)
Nanotubes, Carbon/chemistry , Water/chemistry , Alginates/chemistry , Glucuronic Acid/chemistry , Hexuronic Acids/chemistry , Kaolin/chemistry , Nanotechnology , Water Purification
18.
J Res Natl Inst Stand Technol ; 113(4): 221-38, 2008.
Article in English | MEDLINE | ID: mdl-27096123

ABSTRACT

We present methods for measuring errors in the rendering of three-dimensional points, line segments, and polygons in pixel-based computer graphics systems. We present error metrics for each of these three cases. These methods are applied to rendering with OpenGL on two common hardware platforms under several rendering conditions. Results are presented and differences in measured errors are analyzed and characterized. We discuss possible extensions of this error analysis approach to other aspects of the process of generating visual representations of synthetic scenes.

19.
J Am Soc Mass Spectrom ; 17(2): 246-52, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16413204

ABSTRACT

One of the most significant issues in any analytical practice is optimization. Optimization and calibration are key factors in quantitation. In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS), instrument optimization is a limitation restricting quantitation. An understanding of the parameters that are most influential, and of the effects of these parameters on the mass spectrum, is required for optimization. This understanding is especially important when characterizing synthetic polymers by MALDI-TOF-MS, due to the breadth of the polymer molecular mass distribution (MMD). Two considerations are important in quantitation: additivity of signal and signal-to-noise ratio (S/N). In this study, the effects of several instrument parameters on the S/N of a polystyrene distribution were studied using an orthogonal experimental design. The instrument parameters examined included detector voltage, laser energy, delay time, extraction voltage, and lens voltage. Other parameters considered were polymer concentration and matrix. The results showed that detector voltage and delay time were the most influential of the instrument parameters for polystyrene using all-trans-retinoic acid (RA) as the matrix. These parameters, as well as laser energy, were most influential for polystyrene with dithranol as the matrix.

20.
Appl Radiat Isot ; 56(1-2): 57-63, 2002.
Article in English | MEDLINE | ID: mdl-11842809

ABSTRACT

Five alpha spectrometry analysis algorithms were evaluated for their ability to resolve the 241Am and 243Am peak overlap present under typical low-level counting conditions. The major factors affecting the performance of the algorithms were identified using design-of-experiment combined with statistical analysis of the results. The study showed that the accuracy of the 241Am/243Am ratios calculated by the algorithms depends greatly on the degree of peak deformation and tailing. Despite the improved data quality obtained using an algorithm that may include peak addition and tail estimation, the accurate determination of 241Am by alpha spectrometry relies primarily on reduction of peak overlap rather than on algorithm selection.
