Results 1 - 20 of 32
1.
J Chromatogr A; 1640: 461931, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33581675

ABSTRACT

The average minimum resolution required for separating adjacent single-component peaks (SCPs) in one-dimensional chromatograms is an important metric in statistical overlap theory (SOT). However, when SOT is used to predict the average number of peaks (maxima), its value changes with chromatographic conditions in non-intuitive ways. A more stable and easily understood value of resolution is obtained by making a different prediction. A general equation is derived for the sum of all separated and superposed widths of SCPs in a chromatogram. The equation is a function of the saturation α, a metric of chromatographic crowdedness, and is expressed in dimensionless form by dividing by the duration of the chromatogram. This dimensionless function, f(α), is also the cumulative distribution function of the probability of separating adjacent SCPs. Simulations based on the clustering of line segments representing SCPs verify expressions for f(α) calculated from five functions for the distribution of intervals between adjacent SCPs. Synthetic chromatograms are computed with different saturations, distributions of intervals, and distributions of SCP amplitudes. The chromatograms are analyzed by calculating the sum of the widths of peaks at different relative responses, dividing the sum by the duration of the chromatogram, and graphing the reduced sum against relative response. For small values of relative response, the reduced sum approaches the fraction of baseline that is occupied by chromatographic peaks. This fraction can be identified with f(α) if the saturation α is defined with the average minimum resolution equaling 1.5. The identification is general: it is independent of the saturation, the interval distribution, and the amplitude distribution. This constant value of resolution corresponds to baseline resolution, which simplifies the interpretation of SOT.
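The "fraction of baseline occupied by peaks" in the abstract above can be illustrated with a minimal Monte-Carlo sketch. The assumptions here are mine, not the paper's: SCP centers uniformly random on the duration, a single constant width w per SCP, and edge effects neglected; the paper's f(α) additionally covers several interval distributions and defines α with the average minimum resolution of 1.5.

```python
import math
import random

def occupied_fraction(m, w, X, trials=200):
    """Monte-Carlo estimate of the fraction of the chromatogram duration X
    covered by the union of m single-component peak widths w, with SCP
    centers placed uniformly at random (a 1D segment-clustering model)."""
    total = 0.0
    for _ in range(trials):
        centers = sorted(random.uniform(0, X) for _ in range(m))
        covered, right = 0.0, float("-inf")
        for c in centers:
            lo, hi = c - w / 2, c + w / 2
            if lo > right:           # detached segment: add its full width
                covered += hi - lo
                right = hi
            elif hi > right:         # overlaps the running segment: add the excess
                covered += hi - right
                right = hi
        total += covered / X
    return total / trials

random.seed(1)
m, w, X = 200, 0.05, 20.0
est = occupied_fraction(m, w, X)
# expected coverage for uniformly random centers, edge effects neglected
theory = 1.0 - (1.0 - w / X) ** m
print(est, theory)
```

The simulated fraction tracks the closed-form coverage of a random-segment (Boolean) model, which is the kind of sum-of-widths quantity the abstract reduces to its dimensionless f(α).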


Subject(s)
Chromatography/methods, Statistics as Topic, Computer Simulation, Probability
2.
Anal Chem; 82(1): 307-15, 2010 Jan 01.
Article in English | MEDLINE | ID: mdl-20041721

ABSTRACT

The separation of organelles by capillary electrophoresis (CE) produces large numbers of narrow peaks, which commonly are assumed to originate from single particles. In this paper, we show this is not always true. Here, we use established methods to partition simulated and real organelle CEs into regions of constant peak density and then use statistical-overlap theory to calculate the number of peaks (single particles) in each region. The only required measurements are the number of observed peaks (maxima) and peak standard deviation in the regions and the durations of the regions. Theory is developed for the precision of the estimated peak number and the threshold saturation above which the calculation is not advisable due to fluctuation of peak numbers. Theory shows that the relative precision is good when the saturation lies between 0.2 and 1.0 and is optimal when the saturation is slightly greater than 0.5. It also shows the threshold saturation depends on the peak standard deviation divided by the region's duration. The accuracy and precision of peak numbers estimated in different regions of organelle CEs are verified by computer simulations having both constant and nonuniform peak densities. The estimates are accurate to 6%. The estimated peak numbers in different regions are used to calculate migration-time and electrophoretic-mobility distributions. These distributions are less biased by peak overlap than ones determined by counting maxima and provide more correct measures of the organelle properties. The procedure is applied to a mitochondrial CE, in which over 20% of peaks are hidden by peak overlap.
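The core calculation described above, recovering the number of underlying single particles from the number of observed maxima, can be sketched by inverting the classic point-process SOT relation p = m·exp(-α) with α = m·w/X. The symbols and the exact form of α here are my assumptions (w a mean critical width for resolving neighbors, X the region duration); the paper's own treatment also covers precision and a threshold saturation.

```python
import math

def estimate_components(p_obs, w, X):
    """Invert p = m * exp(-m * w / X) for m by bisection.
    Sketch only: assumes random retention times and the low-saturation
    branch m * w / X < 1, where the relation is single-valued."""
    f = lambda m: m * math.exp(-m * w / X) - p_obs
    lo, hi = float(p_obs), X / w      # p(m) peaks at saturation m*w/X = 1
    if f(hi) < 0:
        raise ValueError("p_obs too large for the low-saturation branch")
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# round trip: 100 components at saturation 0.5 give ~60.65 observed maxima
p = 100 * math.exp(-0.5)
m_est = estimate_components(p, w=0.5, X=100.0)
print(m_est)
```

The abstract's observation that precision is good for saturations between 0.2 and 1.0 corresponds to working on this single-valued branch, away from the turnover at α = 1.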


Subject(s)
Capillary Electrophoresis/instrumentation, Capillary Electrophoresis/methods, Statistical Models, Organelles, Chemical Models
3.
J Chromatogr A; 1626: 461266, 2020 Aug 30.
Article in English | MEDLINE | ID: mdl-32797862

ABSTRACT

The search for biomarkers allowing the assessment of disease by early diagnosis is facilitated by liquid chromatography. However, it is not clear how many components are lost due to being present in concentrations below the detection limit and/or being obscured by chromatographic peak overlap. First, we extend the study of missing components undertaken by Enke and Nagels, who employed the log-normal probability density function (pdf) for the distribution of signal intensities (and concentrations) of three mixtures. The Weibull and exponential pdfs, which have a higher probability of small-concentration components than the log-normal pdf, are also investigated. Results show that assessments of the loss of low-intensity signals by curve fitting are ambiguous. Next, we simulate synthetic chromatograms to compare the loss of peaks from superposition (overlap) with neighboring peaks to the loss arising from lying below the limit of detection (LOD) imposed by a finite signal-to-noise ratio (SNR). The simulations are made using amplitude pdfs based on the Enke-Nagels data as functions of relative column efficiency, i.e., saturation, and SNR. Results show that at the highest efficiencies, the lowest-amplitude peaks are lost below the LOD. However, at small and medium efficiencies, peak overlap is the dominant loss mechanism, suggesting that low-level components will not be found easily in liquid chromatography with single channel detectors regardless of SNR. A simple treatment shows that a multichannel detector, e.g., a mass spectrometer, is necessary to expose more low-level components.
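The competition between the two loss mechanisms compared above can be mimicked with a toy census. Everything concrete here is an assumption for illustration: unit-interval retention times, log-normal amplitudes with median 1, a component counted "hidden" when it lies within 1/n_c of a taller neighbor (n_c standing in for column efficiency), and "below LOD" when its amplitude falls under 3/SNR.

```python
import math
import random

def loss_census(m, n_c, snr, trials=100):
    """Average counts, per chromatogram, of components lost below the
    detection limit versus hidden by overlap with a taller neighbor."""
    lod = 3.0 / snr
    below = hidden = 0
    for _ in range(trials):
        t = [random.random() for _ in range(m)]
        a = [math.exp(random.gauss(0.0, 1.0)) for _ in range(m)]
        order = sorted(range(m), key=t.__getitem__)
        for i, k in enumerate(order):
            if a[k] < lod:
                below += 1
                continue
            for j in (i - 1, i + 1):          # check elution neighbors
                if 0 <= j < m:
                    kk = order[j]
                    if abs(t[kk] - t[k]) < 1.0 / n_c and a[kk] > a[k]:
                        hidden += 1
                        break
    return below / trials, hidden / trials

random.seed(2)
b_mod, h_mod = loss_census(m=100, n_c=200, snr=50)    # moderate efficiency
b_hi, h_hi = loss_census(m=100, n_c=2000, snr=50)     # high efficiency
print(b_mod, h_mod, b_hi, h_hi)
```

Even this crude model reproduces the abstract's qualitative point: at moderate efficiency, overlap dominates the losses regardless of the detection limit, and raising efficiency shifts the balance toward LOD-limited losses.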


Subject(s)
Biomarkers/analysis, High-Pressure Liquid Chromatography/methods, Limit of Detection, Signal-to-Noise Ratio
4.
Anal Chem; 81(3): 1198-207, 2009 Feb 01.
Article in English | MEDLINE | ID: mdl-19178343

ABSTRACT

A theoretical comparison is made of the numbers of observed peaks in one-dimensional (1D) and two-dimensional (2D) separations having the same peak capacity, as calculated from the traditional metric of resolution. The shortcoming of the average minimum resolution of statistical overlap theory (SOT) for this comparison is described. A new metric called the "effective saturation" is introduced to ameliorate the shortcoming. Unlike the "saturation", which is the usual metric of peak crowding in SOT, the effective saturation is independent of the average minimum resolution and can be determined using traditional values of resolution and peak capacity. Our most important finding is that, under a wide range of practical conditions, 1D and 2D separations of the same mixture produce almost equal numbers of observed peaks when the traditional peak capacities of the separations are the same, provided that the effective saturation and not the usual saturation is used as the measure of crowding. This is the case when peak distributions are random and when edge effects are minor. The numerical results supporting this finding can be described by empirical functions of the effective saturation, including one for the traditional peak capacity needed to separate a given fraction of mixture constituents as observed peaks. The near equality of the number of observed peaks in 1D and 2D separations based on the effective saturation is confirmed by simulations. However, this equality is compromised in 2D separations when edge effects are large. The new finding does not contradict previous predictions by SOT of differences between 1D and 2D separations at equal saturation. Indeed, the simulations reaffirm their validity. Rather, the usual metric, i.e., the saturation, is just not as simple a metric for comparing 1D and 2D separations as is the new metric, i.e., the effective saturation. We strongly recommend use of the new metric for its great simplifying effect.
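The headline comparison, that 1D and 2D separations of equal peak capacity yield similar numbers of observed peaks, can be probed with a crude cluster-counting simulation. This is not the paper's effective-saturation machinery: I simply merge components closer than one peak-capacity unit (1/n in 1D, 1/√n per axis in 2D) and count clusters as "observed peaks", ignoring peak shapes and edge effects.

```python
import math
import random

def observed_1d(times, d):
    """Count clusters of 1D points: a gap >= d starts a new observed peak."""
    ts = sorted(times)
    return 1 + sum(1 for a, b in zip(ts, ts[1:]) if b - a >= d)

def observed_2d(pts, d):
    """Count connected components under Euclidean distance d (union-find)."""
    n = len(pts)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < d:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

random.seed(3)
m, n_cap, trials = 50, 400, 200        # same nominal peak capacity, 1D and 2D
p1 = p2 = 0
for _ in range(trials):
    p1 += observed_1d([random.random() for _ in range(m)], 1.0 / n_cap)
    pts = [(random.random(), random.random()) for _ in range(m)]
    p2 += observed_2d(pts, 1.0 / math.sqrt(n_cap))
print(p1 / trials, p2 / trials)
```

With 50 random components and a nominal capacity of 400 in both formats, the average observed-peak counts come out broadly comparable, consistent with the near equality the abstract reports under the effective-saturation comparison.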


Subject(s)
Liquid Chromatography/methods, Algorithms, Computer Simulation, Statistical Models
5.
J Chromatogr A; 1588: 150-158, 2019 Mar 15.
Article in English | MEDLINE | ID: mdl-30638714

ABSTRACT

An equation is proposed for the probability that all mixture constituents are separated, when the density (i.e., average number of eluting constituents per time) and width of single-component peaks (SCPs) vary systematically. The probability Pr that m SCPs are separated is modeled as the product of the m - 1 probabilities that adjacent pairs of SCPs are separated. Pr is then expressed as the geometric mean of the probability product raised to the power of m - 1. This geometric mean is approximated by an arithmetic mean equaling the probability that adjacent SCPs are separated, as calculated from previously developed statistical overlap theory (SOT) for variable SCP density and width. The theory is tested using previously reported and current in-house simulations of isocratic chromatograms of SCPs with random differences in standard chemical potential. In such chromatograms, more SCPs elute at short times than long times, and their widths are less at short times than long times. The average difference between 179 previously reported and currently predicted values of 100 x Pr is about 0.6, when 100 x Pr > 5. The theory requires numerical computation of one integral but can be approximated by an analytic equation for SOT probabilities close to one. For SCPs having retention times exceeding twice the void time, this equation simplifies to a previous SOT expression, with the gradient peak capacity replaced by the isocratic peak capacity. The versatility of the Pr theory is tested using three other models of chromatograms, in which the density and width of SCPs vary. The Pr predictions agree with simulation for all three models.
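The product-of-pair-probabilities model described above can be checked in the simplest constant-density case, which the paper generalizes to variable SCP density and width. Assumptions in this sketch: m uniform retention times on [0, 1] and a fixed required gap x0; the closed-form (1-(m-1)x0)^m is the standard order-statistics result for all interior spacings exceeding x0.

```python
import random

def pr_all_separated_mc(m, x0, trials=20000):
    """Monte-Carlo probability that all m - 1 adjacent gaps among m
    uniform retention times on [0, 1] are at least x0."""
    wins = 0
    for _ in range(trials):
        ts = sorted(random.random() for _ in range(m))
        if all(b - a >= x0 for a, b in zip(ts, ts[1:])):
            wins += 1
    return wins / trials

random.seed(4)
m, x0 = 10, 0.02
exact = (1 - (m - 1) * x0) ** m       # order-statistics result for uniform spacings
pair = (1 - x0) ** m                  # marginal P(one adjacent gap >= x0)
product_approx = pair ** (m - 1)      # Pr modeled as a product of pair probabilities
mc = pr_all_separated_mc(m, x0)
print(exact, product_approx, mc)
```

As the abstract notes, the product (geometric-mean) construction is an approximation: here Monte Carlo agrees with the exact value, while the naive product of marginal pair probabilities overshoots somewhat because adjacent gaps are negatively correlated.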


Subject(s)
Chromatography/methods, Chemical Models, Probability
6.
Anal Chem; 80(21): 8122-34, 2008 Nov 01.
Article in English | MEDLINE | ID: mdl-18841937

ABSTRACT

One of the basic tenets of comprehensive two-dimensional chromatography is that the total peak capacity is simply the product of the first- and second-dimension peak capacities. As formulated, the total peak capacity does not depend on the relative values of the individual dimensions but only on the product of the two. This concept is tested here for the experimentally realistic situation wherein the first-dimension separation is undersampled. We first propose that a relationship exists between the number of observed peaks in a two-dimensional separation and the effective peak capacity. We then show here for a range of reasonable total peak capacities (500-4000) and various contributions of peak capacity in each dimension (10-150) that the number of observed peaks is only slightly dependent on the relative contributions over a reasonable and realistic range in sampling times (equal to the first-dimension peak standard deviation, multiplied by 0.2-16). Most of this work was carried out under the assumption of totally uncorrelated retention times. For uncorrelated separations, the small deviations from the product rule are due to the "edge effect" of statistical overlap theory and a recently introduced factor that corrects for the broadening of first-dimension peaks by undersampling them. They predict that relatively more peaks will be observed when the ratio of the first- to the second-dimension peak capacity is much less than unity. Additional complications are observed when first- and second-dimension retention times show some correlation, but again the effects are small. In both cases, deviations from the product rule are measured by the relative standard deviations of the number of observed peaks, which are typically 10 or less. Thus, although the basic tenet of two-dimensional chromatography is not exact when the first dimension is undersampled, the deviations from the product rule are sufficiently small as to be unimportant in practical work. 
Our results show that practitioners have a high degree of flexibility in designing and optimizing experimental comprehensive two-dimensional separations.
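The "recently introduced factor that corrects for the broadening of first-dimension peaks by undersampling" is, in the form reported elsewhere by the same group, a broadening factor β = sqrt(1 + 0.21·(ts/σ1)²), where ts is the sampling period and σ1 the first-dimension peak standard deviation. The 0.21 coefficient and the division of the product-rule peak capacity by β are quoted from that related work from memory, so treat this sketch as an assumption rather than this paper's exact formulation.

```python
import math

def undersampling_beta(ts_over_sigma):
    """Average first-dimension broadening factor caused by sampling with
    period ts (assumed form: beta = sqrt(1 + 0.21 * (ts/sigma1)**2))."""
    return math.sqrt(1.0 + 0.21 * ts_over_sigma ** 2)

def effective_2d_peak_capacity(n1, n2, ts_over_sigma):
    """Product-rule peak capacity n1 * n2, corrected for undersampling."""
    return n1 * n2 / undersampling_beta(ts_over_sigma)

# sampling times spanning the 0.2-16 sigma range used in the simulations above
for r in (0.2, 2.0, 8.0, 16.0):
    print(r, round(effective_2d_peak_capacity(50, 40, r), 1))
```

The factor is 1 for very fast sampling and grows with ts/σ1, which is why the effective peak capacity, and hence the observed peak count, falls off as the first dimension is undersampled.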


Subject(s)
Computer-Assisted Image Processing/methods, Computer Simulation
7.
J Chromatogr A; 1537: 43-57, 2018 Feb 16.
Article in English | MEDLINE | ID: mdl-29338871

ABSTRACT

The probability Pr(sLC×LC) that all peaks are separated by a resolution of 1.5 or more in selective comprehensive two-dimensional liquid chromatography (sLC × LC) is computed for simple model systems of 5 to 60 peaks and first-dimension (1D) gradient times of 100 to 2000 s. The computations include mimics of a commercial instrument, whose fixed second-dimension (2D) gradient time and use of one cycle time for initialization reduce Pr(sLC×LC) relative to an earlier report. For serial sLC × LC, in which a single device collects and transfers 1D multiplets to the second dimension, Pr(sLC×LC) under practical conditions is predicted to be only slightly larger than the probability of total resolution in LC × LC for separations of the same duration in each case. To increase Pr(sLC×LC), two model systems are proposed based on parallel processing, in which one device collects multiplets from the first separation while a second device simultaneously transfers fractions from previously collected multiplets to the second dimension for further separation. A sum of probabilities guideline is proposed by which optimal fixed 2D gradient times, ranging from 9.5 to 12 s, are found for both serial and parallel models. The increases of Pr(sLC×LC) based on parallel processing are modest; the largest is only 0.062 for one system and 0.106 for the other, relative to the serial model. A theory is derived that rationalizes the modesty of the increase, which was unexpected. It shows that Pr(sLC×LC) equals the probability of total resolution in the first dimension, plus the product of the probability that all 1D multiplets are transferred to the second dimension and the probability that all multiplets are separated in the second dimension.
The theory shows that, although parallel processing is better than serial processing for multiplet transfer, the ability to leverage this gain is offset by the limited probability that all multiplets are then actually separated in the second dimension, which is only about 0.55 for conditions where the change from serial to parallel processing is most beneficial. With these findings in hand, two scenarios are examined for future consideration: one in which the 2D peak capacity is doubled, and another in which multiplets are always transferred to the second dimension. The latter shows considerable promise for increasing Pr(sLC×LC) substantially beyond its counterpart in LC × LC. For example, a 50% probability of separating all peaks in a 15-component mixture can be reached in 1150 s using LC × LC. The same probability can be reached in the same time for a sample with nearly twice as many components (27) in the case of sLC × LC, assuming transfer of all multiplets to the second dimension. These findings will be useful to those considering systematic approaches to developing 2D-LC methods for moderately complex mixtures, and to those interested in instrument development for 2D-LC.
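The decomposition stated above lends itself to a two-line calculation. The component probabilities below are hypothetical placeholders, except the ~0.55 second-dimension figure quoted in the text; the point is only to show why improving the transfer probability alone yields a gain bounded by that 0.55 factor.

```python
def pr_slcxlc(pr_1d_total, pr_transfer_all, pr_2d_all):
    """Decomposition from the text: probability of total 1D resolution,
    plus P(all 1D multiplets transferred) * P(all multiplets separated
    in the second dimension)."""
    return pr_1d_total + pr_transfer_all * pr_2d_all

# hypothetical serial vs parallel transfer probabilities; 0.55 from the text
serial = pr_slcxlc(0.30, 0.40, 0.55)
parallel = pr_slcxlc(0.30, 0.59, 0.55)
print(serial, parallel, parallel - serial)
```

Any improvement in multiplet transfer is multiplied by the second-dimension success probability, so a 0.19 gain in transfer probability buys only about 0.10 in Pr(sLC×LC), which is the "modesty of the increase" the theory rationalizes.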


Subject(s)
Liquid Chromatography, Computer Simulation, Chemical Models, Probability
8.
J Chromatogr A; 1147(1): 111-9, 2007 Apr 13.
Article in English | MEDLINE | ID: mdl-17320093

ABSTRACT

The possibility is discussed that micellar isotherms determined by vacancy-micellar electrokinetic chromatography (vacancy-MEKC) differ from isotherms in electrolyte-free surfactants due to thermodynamic effects of buffer. Also discussed is the possibility that they are biased at high solute concentrations by solubilization-induced changes of electrical conductivity. Such bias could invalidate a theory on peak asymmetry of neutral solutes in MEKC that is based on thermodynamic interpretation of the isotherms. To evaluate these possibilities, the nonlinear concave upward isotherm of benzene in a pH 7.0, 0.0060 M sodium phosphate buffer containing 50 mM sodium dodecyl sulfate (SDS) was measured by headspace gas chromatography. Of interest is the finding that benzene is more stable in the surfactant-free buffer than in water. The isotherm was compared to that previously measured by vacancy-MEKC in the same buffer and 10, 30, or 50 mM SDS. No difference was found between the isotherms. However, the isotherm indeed differed from that of benzene in buffer-free 50 mM SDS, which was also determined and agreed favorably with previous results. A partial explanation is given for the independence of the vacancy-MEKC isotherm of solubilization-induced conductivity changes.


Subject(s)
Gas Chromatography/methods, Micellar Electrokinetic Capillary Chromatography/methods, Benzene/analysis, Buffers, Chemical Phenomena, Physical Chemistry, Electric Conductivity, Hydrogen-Ion Concentration, Micelles, Chemical Models, Sodium Dodecyl Sulfate/chemistry, Solubility, Thermodynamics
10.
J Chromatogr A; 1523: 148-161, 2017 Nov 10.
Article in English | MEDLINE | ID: mdl-28673634

ABSTRACT

Orthogonality metrics (OMs) for three and higher dimensional separations are proposed as extensions of previously developed OMs, which were used to evaluate the zone utilization of two-dimensional (2D) separations. These OMs include correlation coefficients, dimensionality, information theory metrics and convex-hull metrics. In a number of these cases, lower dimensional subspace metrics exist and can be readily calculated. The metrics are used to interpret previously generated experimental data. The experimental datasets are derived from Gilar's peptide data, now modified to be three dimensional (3D), and a comprehensive 3D chromatogram from Moore and Jorgenson. The Moore and Jorgenson chromatogram, which has 25 identifiable 3D volume elements or peaks, displayed good orthogonality values over all dimensions. However, OMs based on discretization of the 3D space changed substantially with changes in binning parameters. This example highlights the importance in higher dimensions of having an abundant number of retention times as data points, especially for methods that use discretization. The Gilar data, which in a previous study produced 21 2D datasets by the pairing of 7 one-dimensional separations, was reinterpreted to produce 35 3D datasets. These datasets show a number of interesting properties, one of which is that geometric and harmonic means of lower dimensional subspace (i.e., 2D) OMs correlate well with the higher dimensional (i.e., 3D) OMs. The space utilization of the Gilar 3D datasets was ranked using OMs, with the retention times of the datasets having the largest and smallest OMs presented as graphs. A discussion concerning the orthogonality of higher dimensional techniques is given with emphasis on molecular diversity in chromatographic separations. In the information theory work, an inconsistency is found in previous studies of orthogonality using the 2D metric often identified as %O. 
A new choice of metric is proposed, extended to higher dimensions, characterized by mixes of ordered and random retention times, and applied to the experimental datasets. In 2D, the new metric always equals or exceeds the original one. However, results from both the original and new methods are given.


Subject(s)
Analytical Chemistry Techniques/methods, Chromatography, Peptides/chemistry, Analytical Chemistry Techniques/standards, Information Theory, Peptides/analysis, Peptides/isolation & purification
11.
J Chromatogr A; 1126(1-2): 244-56, 2006 Sep 08.
Article in English | MEDLINE | ID: mdl-16782109

ABSTRACT

A theory is proposed for the dependence on saturation of the average minimum resolution R* in point-process statistical-overlap theory for two-dimensional separations. Peak maxima are modelled by clusters of overlapping circles in hexagonal arrangements similar to close-packed layers. Such clusters exist only for specific circle numbers, but equations are derived that facilitate prediction of equivalent cluster properties for any number of circles. A metric is proposed for the average minimum resolution that separates two such clusters into two maxima. From this metric, the average minimum resolution of the two nearest-neighbor single-component peaks (SCPs), one in each cluster, is calculated. Its value varies with the number of SCPs in both clusters. These resolutions are weighted by the probability that the two clusters contain the postulated numbers of SCPs and summed to give R*, which decreases with increasing saturation. The dependence of R* on saturation is combined with a theory correcting the probability of overlap in a reduced square for boundary effects. The numbers of maxima in simulations of 75, 150, and 300 randomly distributed bi-Gaussians having exponential heights and aspect ratios of 1, 30, and 60 are compared to predictions. Excellent agreement between maxima numbers and theory is found at low and high saturation. Good estimates of the numbers of bi-Gaussians in simulations are calculated by fitting theory to numbers of maxima using least-squares regression. The theory is applied to mimicked GC×GC separations of 93 compounds having many correlated retention times, with predictions that agree fairly well with maxima numbers.


Subject(s)
Chromatography, Gas Chromatography, Theoretical Models
12.
J Chromatogr A; 1096(1-2): 28-39, 2005 Nov 25.
Article in English | MEDLINE | ID: mdl-16301067

ABSTRACT

A general theory is proposed for the probability of different outcomes of success and failure of component resolution, when complex mixtures are partially separated by n independent columns. Such a separation is called an n-column separation. An outcome of particular interest is component resolution by at least one column. Its probability is identified with the probability of component resolution by a single column, thereby defining the effective saturation of the n-column separation. Several trends are deduced from limiting expressions of the effective saturation. In particular, at low saturation the probability that components cluster together as unresolved peaks decreases exponentially with the number of columns, and the probability that components cluster together on addition of another column decreases by a factor equal to twice the column saturation. The probabilities of component resolution by n-column and two-dimensional separations also are compared. The theory is applied by interpreting three sets of previously reported retention indices of the 209 polychlorinated biphenyls (PCBs), as determined by GC. The origin of column independence is investigated from two perspectives. First, it is suggested that independence exists when the difference between indices of the same compound on two columns is much larger than the interval between indices required for separation. Second, it is suggested that independence exists when the smaller of the two intervals between a compound and its adjacent neighbors is not correlated with its counterpart on another column.
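The "at least one column" outcome central to the abstract reduces, for independent columns each resolving a component with probability p, to elementary probability. This sketch assumes exactly that independence; the abstract's deeper content is when independence actually holds for real column pairs.

```python
def pr_resolved_by_any(p_single, n):
    """Probability a component is resolved by at least one of n
    independent columns, each resolving it with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

def pr_still_clustered(p_single, n):
    """Probability the component remains unresolved on every column;
    this decays exponentially with the number of columns, as the
    abstract notes for low saturation."""
    return (1.0 - p_single) ** n

for n in (1, 2, 3, 4):
    print(n, round(pr_resolved_by_any(0.7, n), 4))
```

Each added column multiplies the residual clustering probability by the same factor (1 - p), the exponential decrease described above; identifying 1 - (1 - p)^n with a single-column resolution probability is what defines the effective saturation of the n-column separation.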


Subject(s)
Chromatography/methods, Theoretical Models, Probability, Gas Chromatography/methods, Computer Simulation, Polychlorinated Biphenyls/isolation & purification
13.
J Chromatogr A; 1414: 60-76, 2015 Oct 02.
Article in English | MEDLINE | ID: mdl-26338213

ABSTRACT

Twenty orthogonality metrics (OMs) derived from convex hull, information theory, fractal dimension, correlation coefficients, nearest neighbor distances and bin-density techniques were calculated from a diverse group of 47 experimental two-dimensional (2D) chromatograms. These chromatograms comprise two datasets; one dataset is a collection of 2D chromatograms from Peter Carr's laboratory at the University of Minnesota, and the other dataset is based on pairs of one-dimensional chromatograms previously published by Martin Gilar and coworkers (Waters Corp.). The chromatograms were pooled to make a third or combined dataset. Cross-correlation results suggest that specific OMs are correlated within families of nearest neighbor methods, correlation coefficients and the information theory methods. Principal component analysis of the OMs shows that none of the OMs stands out as clearly better at explaining the data variance than any other OM. Principal component analysis of individual chromatograms shows that different OMs favor certain chromatograms. The chromatograms exhibit a range of quality, as subjectively graded by nine experts experienced in 2D chromatography. The subjective (grading) evaluations were taken at two intervals per expert and demonstrated excellent consistency for each expert. Excellent agreement for both very good and very bad chromatograms was seen across the range of experts. However, evaluation uncertainty increased for chromatograms that were judged as average to mediocre. The grades were converted to numbers (percentages) for numerical computations. The percentages were correlated with OMs to establish good OMs for evaluating the quality of 2D chromatograms. Certain metrics correlate better than others. However, these results are not consistent across all chromatograms examined. Most of the nearest neighbor methods were observed to correlate poorly with the percentages.
However, one method, devised by Clark and Evans, appeared to work moderately well. Products of OMs show better correlation with the percentages than do single OMs. Product OMs that utilize one discretized metric paired with the convex hull relative area, which measures overall zone occupancy, perform well in determining the "best" chromatogram among both datasets and the combined dataset. A definition of chromatographic orthogonality is suggested that is based on maximizing the values of OMs or OM products. This optimization criterion suggests using the product of a global metric that measures the utilization of separation space (e.g., the convex hull relative area) and a local metric that measures peak spacing (e.g., the box-counting fractal dimension). The "best" column pairs for 2D chromatography are chosen by the product of these OMs.
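The Clark and Evans method singled out above is a nearest-neighbor ratio: the observed mean nearest-neighbor distance divided by the value 0.5/√density expected for complete spatial randomness. A minimal sketch (no edge correction, unit-square retention space assumed):

```python
import math
import random

def clark_evans(points, area=1.0):
    """Clark-Evans nearest-neighbor ratio: R ~ 1 for random patterns,
    R < 1 for clustered, R > 1 for evenly spread (up to ~2.15)."""
    n = len(points)
    nn = [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
          for i, p in enumerate(points)]
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)    # mean NN distance under randomness
    return observed / expected

random.seed(5)
rand_pts = [(random.random(), random.random()) for _ in range(400)]
grid_pts = [((i + 0.5) / 20, (j + 0.5) / 20) for i in range(20) for j in range(20)]
print(round(clark_evans(rand_pts), 3), round(clark_evans(grid_pts), 3))
```

A random pattern scores near 1 while a regular grid scores 2, which is why this index can serve as a local peak-spacing metric to pair with a global coverage metric such as the convex hull relative area.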


Subject(s)
Chromatography/statistics & numerical data, Algorithms, Information Theory, Principal Component Analysis
14.
J Chromatogr A; 1360: 128-42, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25108764

ABSTRACT

Computer simulations of three methods of liquid chromatography (LC) are developed to understand better the conditions under which each method is superior to the others. The methods are one-dimensional LC (1D-LC), comprehensive two-dimensional LC (LC×LC), and selective comprehensive two-dimensional LC (sLC×LC). The criterion by which superiority is measured in this case is the probability that all peaks in a given sample are separated by a resolution equaling or exceeding unity. A point-process model is developed for the simulation of sLC×LC to complement existing models for 1D-LC and LC×LC. In the sLC×LC model, first-dimension singlet peaks remain in that dimension, and first-dimension multiplets, or clusters of overlapping peaks, are transferred to the second dimension for further separation during the interval of time between successive multiplets. Criteria are developed for the success or failure of multiplet transfer. The three LC methods are simulated for peak numbers ranging from 2 to 50 and analysis times ranging from 10 to 1200 s, using peak capacities that reflect the performance of modern instrumentation. The probability computations predict the experimental finding that LC×LC is superior to 1D-LC at long times (over 210 or so seconds) but is inferior at shorter times due to the broadening of first-dimension peaks by sampling. In general, sLC×LC is predicted to be superior to LC×LC for samples with fewer than 40 peaks separated using three or fewer samples per multiplet. Conversely, LC×LC is predicted to be superior to sLC×LC for samples containing more than 40 peaks and when sLC×LC separations are carried out with six samples per multiplet. We find that the analysis time required to attain a 50% probability of total resolution is always predicted to be shorter for sLC×LC than for 1D-LC, and 30-75% shorter than for LC×LC when 20 or so peaks are separated.
Finally, in light of the substantial predicted time savings for sLC×LC analyses, the computations are interpreted relative to practical concerns, e.g., retention-time shifts, to establish good working conditions (e.g., the number of samples per multiplet) for future experimental studies of sLC×LC.


Subject(s)
Liquid Chromatography/methods, Liquid Chromatography/instrumentation, Computer Simulation, Likelihood Functions, Theoretical Models, Time Factors
15.
J Chromatogr A; 1251: 1-9, 2012 Aug 17.
Article in English | MEDLINE | ID: mdl-22771062

ABSTRACT

Equations were proposed recently for computing the distribution of minimum resolution (resolution distribution) of two Gaussian peaks with equal standard deviations, when peak heights in a multi-component separation follow a statistical distribution. The computation depended on the survival function of the peak-height ratio. Previously, an equation was derived for a first-order survival function that excluded peaks with heights less than a noise/detection limit. Here, an equation is derived for a corrected survival function, under the more realistic assumption that two minimally resolved peaks are lost if the height of their shoulder is less than the noise/detection limit. First-order and corrected survival functions and resolution distributions are derived for the exponential and uniform distributions of peak heights, and a corrected survival function and resolution distribution are derived for the log-normal distribution (LND) to complement a previous first-order derivation. Large peak losses (up to 99.3% of the noise/detection limit) are considered to find significant differences between the first-order and corrected resolution distributions. For the LND and exponential distribution, the corrected resolution distribution has slightly greater density in the low-resolution region but otherwise differs little from its first-order counterpart, unless the scale parameter of the LND is small (e.g. 0.75). For the uniform peak-height distribution, the corrected resolution distribution has higher density in the high-resolution region. The first-order and corrected resolution distributions are almost the same as long as the first moment of the first-order resolution distribution is greater than 0.6. The predictions are confirmed by Monte-Carlo simulation.


Subject(s)
Chromatography/methods, Chromatography/standards, Computer Simulation, Limit of Detection, Theoretical Models, Monte Carlo Method
16.
J Chromatogr A; 1255: 267-76, 2012 Sep 14.
Article in English | MEDLINE | ID: mdl-22226455

ABSTRACT

Optimization of comprehensive two-dimensional separations frequently relies on assessment of the peak capacity of the system. A correction is required for the fact that many pairs of separation systems are to some degree correlated, so the entire separation space is not chemically accessible to solutes. This correction is essentially a measure of the fraction of the separation-space area where solutes may elute. No agreement exists in the literature as to the best form of this spatial coverage factor. In this work, we distinguish between spatial coverage factors that measure the maximum occupiable space, which is characteristic of the separation dimensionality, and the space actually occupied by a particular sample, which is characteristic of the sample dimensionality. It is argued that the former, which we call f(coverage), is the one needed to calculate the peak capacity. We propose five criteria for a good f(coverage) metric and use them to evaluate various area-determination methods used to measure animal home ranges in ecology. We consider minimum convex hulls, convex hull peels, α-hulls, three variations of local hull methods, and a kernel method, and compare the results to the intuitively satisfying but subjective Stoll-Gilar method. The most promising methods are evaluated using two experimental LC×LC data sets, one with fixed separation chemistry but variable gradient times, and a second with variable first-dimension column chemistry. For the 12 separations in the first data set, in which f(coverage) is expected to be constant, the minimum convex hull is the most precise method (f(coverage)=0.68±0.04) and gives results similar to the Stoll-Gilar method (f(coverage)=0.67±0.06). The minimum convex hull is proposed as the best method for calculating f(coverage) because it has no adjustable parameters, can be scaled to different retention-time normalizations, is easily calculated with available software, and represents the expected area of solute occupation based on a proposed linear free-energy formalism.
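Of the home-range methods compared above, the minimum convex hull is simple enough to compute without specialized software. The sketch below is a generic implementation (Andrew's monotone chain plus the shoelace formula), not code from the paper; it assumes the retention times have already been normalized to [0, 1] in each dimension, so the hull area is directly the coverage fraction f(coverage).

```python
def convex_hull_area(points):
    """Area of the minimum convex hull of 2-D points, via Andrew's
    monotone-chain hull construction and the shoelace formula.
    With retention times normalized to [0, 1] in each dimension,
    the returned area is the coverage fraction f(coverage)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0  # fewer than 3 distinct points enclose no area

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]  # counterclockwise hull vertices

    # shoelace formula for the polygon area
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# peaks at the corners of the normalized space fill it completely
print(convex_hull_area([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))  # → 1.0
```

The absence of adjustable parameters is visible here: the only input is the set of normalized peak coordinates, which is the property the abstract cites in the hull's favor.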


Subject(s)
Algorithms , Chromatography/methods , Software , Computer Simulation , Linear Models , Chemical Models , Plant Extracts/chemistry , Seeds/chemistry , Zea mays/chemistry
17.
Talanta ; 83(4): 1068-73, 2011 Jan 30.
Article in English | MEDLINE | ID: mdl-21215840

ABSTRACT

The average numbers of singlet peaks in one-dimensional (1D) and two-dimensional (2D) separations of randomly distributed peaks are predicted by statistical-overlap theory and compared against the effective saturation. The effective saturation is a recently introduced metric of peak crowding that is more practitioner-friendly than the usual metric, the saturation. The effective saturation absorbs the average minimum resolution of statistical-overlap theory, facilitating the comparison of 1D and 2D separations by traditional metrics of resolution and peak capacity. In this paper, singlet peaks are identified with maxima produced by a single mixture constituent. Their effective saturations are calculated from published equations for the average minimum resolution of 1D singlet peaks, and from equations derived here for the average minimum resolution of 2D singlet peaks. The fractions of peaks that are singlets in 1D and 2D separations are predicted by statistical-overlap theory as functions of saturation but are compared as functions of effective saturation. The two fractions differ by no more than 0.033 at any effective saturation between 0 and 6, when the distribution of peak heights is exponential and the edge effect is neglected. This result shows that 1D and 2D separations of randomly distributed peaks are about the same in their ability to separate singlet peaks as maxima, when assessed relative to effective saturation. Empirical equations in effective saturation are reported for the fractions of peaks that are singlets. It is argued that the effective saturation is a good metric for comparing separations having different average minimum resolutions.
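The singlet fractions discussed above come from closed-form SOT results, but the 1D point-process picture is easy to simulate. The sketch below is an illustrative Monte Carlo check, not the paper's calculation: components are placed uniformly at random on a unit-length axis, a component is a singlet when both nearest-neighbor gaps exceed the minimum spacing x0, and the saturation is α = m·x0. SOT predicts a singlet fraction of exp(-2α) when the edge effect is neglected; all parameter choices are arbitrary.

```python
import math
import random

def singlet_fraction_1d(m, alpha, trials=400, seed=1):
    """Monte Carlo fraction of m randomly placed components that are
    singlets in a 1-D separation of unit length. A component is a
    singlet when both gaps to its nearest neighbors exceed the minimum
    spacing x0 = alpha / m (so the saturation is alpha = m * x0).
    Point-process sketch: peak shapes are ignored."""
    rng = random.Random(seed)
    x0 = alpha / m
    singlets = 0
    for _ in range(trials):
        pos = sorted(rng.random() for _ in range(m))
        for i, p in enumerate(pos):
            left_ok = i == 0 or p - pos[i - 1] > x0
            right_ok = i == m - 1 or pos[i + 1] - p > x0
            if left_ok and right_ok:
                singlets += 1
    return singlets / (m * trials)

est = singlet_fraction_1d(m=200, alpha=0.5)
print(round(est, 3), round(math.exp(-2 * 0.5), 3))  # both near 0.37
```

The small excess of the simulated value over exp(-2α) is the edge effect the abstract mentions: the first and last components have only one neighbor to clear.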


Subject(s)
Chemical Fractionation/methods , Statistics as Topic/methods
18.
J Chromatogr A ; 1218(43): 7841-9, 2011 Oct 28.
Article in English | MEDLINE | ID: mdl-21939978

ABSTRACT

General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation.
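The link between the resolution distribution and the peak-height-ratio distribution can be illustrated with the simplest case mentioned in the abstract, the exponential peak-height distribution. For two independent unit-rate exponential heights, the smaller-to-larger ratio has the closed-form CDF F(r) = 2r/(1+r) on 0 ≤ r ≤ 1 (a short exercise with the ratio distribution of two exponentials). The Monte Carlo sketch below checks this and is illustrative only, not one of the paper's numerical procedures.

```python
import random

def ratio_cdf_mc(r, n=50_000, seed=2):
    """Monte Carlo CDF of the smaller-to-larger height ratio for two
    independent unit-rate exponentially distributed peak heights.
    Analytic result for comparison: F(r) = 2r / (1 + r)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.expovariate(1.0), rng.expovariate(1.0)
        if min(x, y) / max(x, y) <= r:
            hits += 1
    return hits / n

for r in (0.25, 0.5, 0.75):
    print(r, round(ratio_cdf_mc(r), 3), round(2 * r / (1 + r), 3))
```

From this ratio distribution, the published theory maps each ratio to the minimum resolution at which the smaller peak still produces a maximum, giving the resolution distribution.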


Subject(s)
Chromatography/methods , Normal Distribution , Computer Simulation , Monte Carlo Method
19.
J Chromatogr A ; 1218(52): 9297-306, 2011 Dec 30.
Article in English | MEDLINE | ID: mdl-22088670

ABSTRACT

The chromatographic dimensionality was recently proposed as a measure of retention-time spacing based on a power-law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation, and scale. Power-law models exhibit a threshold region: below a critical saturation value, no peak maxima are lost to peak fusion as saturation increases. At moderate saturation, the behavior is similar to that of the random (Poisson) peak model. At still higher saturation, the power-law model loses peaks nearly independently of its scale and dimension. The physicochemical meaning of the power-law scale parameter is discussed and shown to equal the Boltzmann-weighted free energy of transfer over the scale limits. A small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times, a property of power laws in which "wild" behavior is noted to occasionally occur. Both cases are shown to be useful, depending on the chromatographic saturation. A scale-invariant form of the SOT yields very simple relationships between the fraction of peak maxima and the saturation, peak width, and number of theoretical plates. These equations provide much insight into separations that follow power-law statistics.
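The threshold behavior noted above has a simple point-process caricature: intervals drawn from a Pareto-type power law have a hard minimum spacing, so no maxima are fused until the critical spacing exceeds that minimum. The sketch below is a loose illustration under arbitrary parameter choices (shape 2, minimum interval 0.5, unit-mean rescaling), not the paper's power-law SOT.

```python
import random

def surviving_fraction(alpha, shape, x_min=0.5, n=50_000, seed=3):
    """Illustrative point-process sketch of power-law spacing:
    inter-peak intervals are Pareto(shape, x_min), rescaled to unit
    mean, and a maximum is 'lost' by fusion when its interval is
    shorter than the critical spacing alpha. Because a rescaled Pareto
    interval is never shorter than x_min / mean, no peaks are lost
    until alpha crosses that threshold (the threshold region)."""
    rng = random.Random(seed)
    mean = shape * x_min / (shape - 1)  # Pareto mean, valid for shape > 1
    lost = 0
    for _ in range(n):
        u = 1.0 - rng.random()  # uniform in (0, 1], avoids division by zero
        interval = x_min / u ** (1.0 / shape) / mean  # unit-mean Pareto draw
        if interval < alpha:
            lost += 1
    return 1.0 - lost / n

# below the rescaled minimum interval (0.5 here), every maximum survives
print(surviving_fraction(alpha=0.2, shape=2.0))  # → 1.0
print(surviving_fraction(alpha=0.8, shape=2.0))  # fusion losses appear
```

With exponentially distributed intervals (the Poisson model) there is no such floor, so losses begin at any nonzero saturation; the contrast is the point of the threshold region.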


Subject(s)
Gas Chromatography/methods , Liquid Chromatography/methods , Fractals , Statistics as Topic/methods , Algorithms , Thermodynamics
20.
J Chromatogr A ; 1218(34): 5819-28, 2011 Aug 26.
Article in English | MEDLINE | ID: mdl-21777917

ABSTRACT

The average value of the multivariate selectivity (SEL) of randomly positioned peaks in a multi-component separation is shown to equal the average fraction of peaks that are singlets, as predicted by statistical-overlap theory (SOT). This equality is the basis for proposing a useful performance metric for comprehensive two-dimensional (2D) separations, specifically the average minimum resolution of nearest-neighbor peaks. Furthermore, this metric was computed both without ancillary spectroscopic information and with such assistance, specifically multi-wavelength UV-vis spectra acquired during the separation. Separations are simulated with randomly positioned peaks over wide ranges of total number of peaks, first- and second-dimension peak capacity, dimensionless first-dimension sampling time, and spectral diversity. The specific version of the general multivariate selectivity concept used here (identified as SEL) gives the relative precision of quantification with the PARAFAC (parallel factor analysis) method, a popular curve-resolution algorithm. The SEL values of all peaks were calculated, averaged, and compared to the predictions of SOT. In the absence of auxiliary spectral data, the SEL-based average minimum resolution required to separate two peaks in a 2D separation is 0.256, compared to a resolution of 0.5 when no chemometric assistance is available. This result holds over a wide range of conditions and is essentially independent of peak crowding. With the assistance of spectral data, the requisite minimum resolution improves substantially, that is, it decreases, especially when peak crowding is severe; it decreases even further, up to a limit, as the spectral diversity is increased. In contrast, the SEL-based average under-sampling correction factor is virtually independent of the presence of the additional spectral data, and is about the same as that calculated with SOT from the average number of maxima in closely analogous simulations. The use of selectivity greatly increases the fraction of peaks that are singlets, relative to the number of singlet maxima, especially when spectral assistance is added. The insensitivity of the under-sampling correction factor to either the use of selectivity or the added spectral data simplifies optimization of the corrected peak capacity in on-line comprehensive 2D separations.
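The effect of relaxing the required minimum resolution from 0.5 to the SEL-based 0.256 can be caricatured with a toy 2D simulation. The sketch below is illustrative only: it assumes circular peaks of equal, arbitrary width σ with Rs = d/(4σ) for nearest-neighbor distance d, and it is not the PARAFAC computation described in the abstract.

```python
import math
import random

def singlet_fraction_2d(m, rs_min, sigma=0.005, trials=100, seed=4):
    """Fraction of m peaks placed uniformly at random in a unit 2-D
    separation space that are singlets, i.e. whose nearest neighbor
    lies at resolution Rs = d / (4 * sigma) >= rs_min. Equal circular
    peak widths are assumed; sigma and m are illustrative choices."""
    rng = random.Random(seed)
    d_min = 4.0 * sigma * rs_min
    singlets = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(m)]
        for i, (x, y) in enumerate(pts):
            if all(math.hypot(x - px, y - py) >= d_min
                   for j, (px, py) in enumerate(pts) if j != i):
                singlets += 1
    return singlets / (m * trials)

f_strict = singlet_fraction_2d(100, 0.5)     # no chemometric assistance
f_parafac = singlet_fraction_2d(100, 0.256)  # SEL-based threshold
print(f_strict, f_parafac)  # the relaxed threshold can only help
```

With identical peak layouts (same seed), every peak that is a singlet at Rs = 0.5 remains one at Rs = 0.256, so the relaxed threshold never reduces, and in crowded separations substantially raises, the singlet fraction.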


Subject(s)
Liquid Chromatography/instrumentation , Theoretical Models , Algorithms , Spectrum Analysis