Results 1 - 20 of 24
1.
Proc Natl Acad Sci U S A ; 121(10): e2313719121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38416677

ABSTRACT

Single-cell data integration can provide a comprehensive molecular view of cells, and many algorithms have been developed to remove unwanted technical or biological variations and integrate heterogeneous single-cell datasets. Despite their wide usage, existing methods suffer from several fundamental limitations. In particular, we lack a rigorous statistical test for whether two high-dimensional single-cell datasets are alignable (and therefore should even be aligned). Moreover, popular methods can substantially distort the data during alignment, making the aligned data and downstream analysis difficult to interpret. To overcome these limitations, we present a spectral manifold alignment and inference (SMAI) framework, which enables principled and interpretable alignability testing and structure-preserving integration of single-cell data with the same type of features. SMAI provides a statistical test, justified by high-dimensional statistical theory, to robustly assess the alignability between datasets and avoid misleading inference. On a diverse range of real and simulated benchmark datasets, it outperforms commonly used alignment methods. Moreover, we show that SMAI improves various downstream analyses such as identification of differentially expressed genes and imputation of single-cell spatial transcriptomics, providing further biological insights. SMAI's interpretability also enables quantification and a deeper understanding of the sources of technical confounders in single-cell data.


Subjects
Algorithms, Gene Expression Profiling, Gene Expression, Single-Cell Analysis
2.
Anal Bioanal Chem ; 415(13): 2641-2651, 2023 May.
Article in English | MEDLINE | ID: mdl-37036485

ABSTRACT

Comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC × GC-MS) has great potential for analyses of complicated mixtures and sample matrices, owing to its separation power and attainable resolution. The mass-spectral component of the measurement is reproducible; the reproducibility of the two-dimensional chromatography, however, is affected by many factors, which complicates the evaluation of long-term experiments and cross-laboratory collaborations. This paper presents a new open-source data alignment tool that tackles the problem of retention time shifts, implementing five algorithms (BiPACE 2D, DISCO, MSort, PAM, and TNT-DA) along with Pearson's correlation and dot product as optional methods for mass spectra comparison. The implemented data alignment algorithms and their variations were tested on real samples to demonstrate the tool's functionality, and the suitability of each algorithm for significantly or non-significantly shifted data is discussed on the basis of the results obtained. To evaluate the goodness of alignment, Kolmogorov-Smirnov test values were calculated and comparison graphs generated. DA_2DChrom and its documentation are available online, fully open source, and the tool can be used without uploading data to external third-party servers.
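
As a concrete illustration of the two optional spectra-comparison measures named above, here is a minimal sketch (an assumed generic form, not the DA_2DChrom implementation) of Pearson's correlation and the normalized dot product between two mass spectra binned on a common m/z axis:

```python
import numpy as np

def pearson_similarity(spec_a: np.ndarray, spec_b: np.ndarray) -> float:
    """Pearson correlation between two intensity vectors on a shared m/z grid."""
    return float(np.corrcoef(spec_a, spec_b)[0, 1])

def dot_product_similarity(spec_a: np.ndarray, spec_b: np.ndarray) -> float:
    """Normalized dot product (cosine) of two spectra; 1.0 means identical shape."""
    return float(spec_a @ spec_b / (np.linalg.norm(spec_a) * np.linalg.norm(spec_b)))

# Example: two spectra binned onto the same m/z grid.
a = np.array([0.0, 10.0, 55.0, 100.0, 5.0])
b = np.array([1.0, 12.0, 50.0, 98.0, 4.0])
print(pearson_similarity(a, b), dot_product_similarity(a, b))
```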

3.
J Comput Inf Sci Eng ; 22(6)2022 Dec.
Article in English | MEDLINE | ID: mdl-37720111

ABSTRACT

Recently, the number and types of measurement devices that collect data used to monitor Laser-Based Powder Bed Fusion of Metals processes and to inspect Additive Manufacturing (AM) metal parts have increased rapidly. Each measurement device generates data in a unique coordinate system and in a unique format. Data alignment is the process of spatially aligning different datasets to a single coordinate system; it is part of a broader process called "Data Registration". This paper provides a data-registration procedure and includes an example of aligning data to a single reference coordinate system. Such a reference coordinate system is needed for downstream applications, including data analytics, artificial intelligence, and part qualification.
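
Alignment of this kind typically reduces to estimating a rigid transform into the reference coordinate system from matched fiducial points. A hedged sketch using the standard Kabsch solution (a generic technique, not the paper's specific registration procedure):

```python
import numpy as np

def rigid_align(src: np.ndarray, ref: np.ndarray):
    """Least-squares rigid transform so that ref ≈ src @ R.T + t; src, ref: (N, 3)."""
    src_c = src - src.mean(axis=0)
    ref_c = ref - ref.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ ref_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = ref.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Example: points offset by a pure translation align with R = identity.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R, t = rigid_align(src, src + np.array([5.0, -2.0, 1.0]))
```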

4.
Sensors (Basel) ; 20(8)2020 Apr 12.
Article in English | MEDLINE | ID: mdl-32290582

ABSTRACT

This paper focuses on data fusion, which is fundamental to one of the most important modules in any autonomous system: perception. Over the past decade, there has been a surge in the usage of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse the data to produce the best output for the task at hand, which in this case is autonomous navigation. Obtaining such accurate data requires optimal technology to read the sensor data, process them, eliminate or at least reduce noise, and then use them for the required tasks. We survey current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology; stereo/depth cameras; and monocular Red-Green-Blue (RGB) and Time-of-Flight (TOF) cameras, which use optical technology. We also review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot, and details the use of LiDAR and cameras for robot navigation.

5.
Sensors (Basel) ; 18(12)2018 Dec 10.
Article in English | MEDLINE | ID: mdl-30544732

ABSTRACT

The three-dimensional (3D) geometric evaluation of large thermal forging parts online is critical to quality control and energy conservation. However, this online 3D measurement task is extremely challenging for commercially available 3D sensors because of the enormous amount of heat radiation and complexity of the online environment. To this end, an automatic and accurate 3D shape measurement system integrated with a fringe projection-based 3D scanner and an industrial robot is presented. To resist thermal radiation, a double filter set and an intelligent temperature control loop are employed in the system. In addition, a time-division-multiplexing trigger is implemented in the system to accelerate pattern projection and capture, and an improved multi-frequency phase-shifting method is proposed to reduce the number of patterns required for 3D reconstruction. Thus, the 3D measurement efficiency is drastically improved and the exposure to the thermal environment is reduced. To perform data alignment in a complex online environment, a view integration method is used in the system to align non-overlapping 3D data from different views based on the repeatability of the robot motion. Meanwhile, a robust 3D registration algorithm is used to align 3D data accurately in the presence of irrelevant background data. These components and algorithms were evaluated by experiments. The system was deployed in a forging factory on a production line and performed a stable online 3D quality inspection for thermal axles.
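
The fringe-projection pipeline above rests on phase-shifting profilometry: with N phase-shifted fringe patterns, the wrapped phase at each pixel follows from a closed-form arctangent. A generic sketch of the textbook N-step algorithm (not this system's improved multi-frequency variant):

```python
import numpy as np

def wrapped_phase(images: np.ndarray) -> np.ndarray:
    """images: (N, H, W) intensity frames with phase shifts 2*pi*k/N, N >= 3."""
    N = len(images)
    k = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * k / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * k / N), axis=0)
    return -np.arctan2(num, den)   # wrapped to (-pi, pi]; unwrapping comes next
```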

6.
Stud Health Technol Inform ; 310: 1335-1336, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38270031

ABSTRACT

Clinical studies need multi-center, long-term patient data, which are difficult to align. We present a blockchain-based approach that uses cryptographic matching and attribute-based encryption for secure data alignment, aggregation, and access. It improves efficiency, reduces data-synchronization overhead, and facilitates cross-institutional patient data association and visualization.


Subjects
Blockchain, Humans, Health Facilities
7.
Neurobiol Lang (Camb) ; 5(1): 43-63, 2024.
Article in English | MEDLINE | ID: mdl-38645622

ABSTRACT

Artificial neural networks have emerged as computationally plausible models of human language processing. A major criticism of these models is that the amount of training data they receive far exceeds that of humans during language learning. Here, we use two complementary approaches to ask how the models' ability to capture human fMRI responses to sentences is affected by the amount of training data. First, we evaluate GPT-2 models trained on 1 million, 10 million, 100 million, or 1 billion words against an fMRI benchmark. We consider the 100-million-word model to be developmentally plausible in terms of the amount of training data, given that this amount is similar to what children are estimated to be exposed to during the first 10 years of life. Second, we test a GPT-2 model trained on a 9-billion-token dataset (to reach state-of-the-art next-word prediction performance) against the human benchmark at different stages during training. Across both approaches, we find that (i) the models trained on a developmentally plausible amount of data already achieve near-maximal performance in capturing fMRI responses to sentences. Further, (ii) lower perplexity (a measure of next-word prediction performance) is associated with stronger alignment with human data, suggesting that models that have received enough training to achieve sufficiently high next-word prediction performance also acquire representations of sentences that are predictive of human fMRI responses. In tandem, these findings establish that although some training is necessary for the models' predictive ability, a developmentally realistic amount of training (∼100 million words) may suffice.
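
For reference, perplexity (the next-word prediction measure discussed above) is the exponentiated mean negative log-likelihood the model assigns to the observed tokens. A minimal sketch; the probabilities are assumed inputs from some language model:

```python
import math

def perplexity(next_token_probs):
    """next_token_probs: probability the model gave each observed next token."""
    nll = [-math.log(p) for p in next_token_probs]
    return math.exp(sum(nll) / len(nll))

# Lower perplexity = better next-word prediction; here ≈ 4.0.
print(perplexity([0.5, 0.25, 0.125]))
```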

8.
J Neural Eng ; 21(1)2024 01 31.
Article in English | MEDLINE | ID: mdl-38232381

ABSTRACT

Objective. The non-stationarity of electroencephalogram (EEG) signals and the variability among different subjects present significant challenges in current Brain-Computer Interface (BCI) research, which requires a time-consuming, subject-specific calibration procedure to address. Transfer Learning (TL) offers a potential solution by leveraging data or models from one or more source domains to facilitate learning in the target domain. Approach. In this paper, a novel Multi-source domain Transfer Learning Fusion (MTLF) framework is proposed to address the calibration problem. First, the method transforms the source domain data using resting-state segment data, in order to decrease the differences between the source and target domains. Feature extraction is then performed using common spatial patterns. Finally, an improved TL classifier is employed to classify the target samples. Notably, this method does not require label information for the target-domain samples, while concurrently reducing the calibration workload. Main results. The proposed MTLF is assessed on Datasets 2a and 2b from the BCI Competition IV. Compared with other algorithms, our method performed best, achieving mean classification accuracies of 73.69% and 70.83% on Datasets 2a and 2b, respectively. Significance. Experimental results demonstrate that the MTLF framework effectively reduces the discrepancy between the source and target domains and achieves better classification performance on two motor imagery datasets.


Subjects
Brain-Computer Interfaces, Humans, Imagery, Psychotherapy, Electroencephalography/methods, Algorithms, Machine Learning, Imagination
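
The common-spatial-pattern step named in the abstract above can be sketched in its standard two-class form (the textbook algorithm, not the paper's full MTLF pipeline): CSP filters maximize variance for one class relative to the other.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """X1, X2: band-pass-filtered trials, shape (n_trials, n_channels, n_samples)."""
    def mean_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; eigenvalues ascending.
    vals, vecs = eigh(C1, C1 + C2)
    pick = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]  # both spectrum ends
    return vecs[:, pick].T                                    # (2*n_pairs, n_channels)

def csp_features(W, x):
    """Log-variance features of one trial x, shape (n_channels, n_samples)."""
    var = (W @ x).var(axis=1)
    return np.log(var / var.sum())
```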
9.
J Neural Eng ; 21(3)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38885683

ABSTRACT

Objective. In brain-computer interfaces (BCIs) that utilize motor imagery (MI), minimizing calibration time has become increasingly critical for real-world applications. Recently, transfer learning (TL) has been shown to effectively reduce the calibration time in MI-BCIs. However, variations in data distribution among subjects can significantly influence the performance of TL in MI-BCIs. Approach. We propose a cross-dataset adaptive domain selection transfer learning framework that integrates domain selection, data alignment, and an enhanced common spatial pattern (CSP) algorithm. Our approach uses a large dataset of 109 subjects as the source domain. We begin by excluding BCI-illiterate subjects from this dataset, then determine the source-domain subjects most closely aligned with the target subjects using maximum mean discrepancy (MMD). After Euclidean alignment processing, features are extracted by multiple composite CSP, and the final classification is carried out using a support vector machine. Main results. Our findings indicate that the proposed technique outperforms existing methods, achieving classification accuracies of 75.05% and 76.82% in two cross-dataset experiments, respectively. Significance. By reducing the need for extensive training data while maintaining high accuracy, our method optimizes the practical implementation of MI-BCIs.


Subjects
Brain-Computer Interfaces, Imagination, Transfer, Psychology, Humans, Imagination/physiology, Transfer, Psychology/physiology, Support Vector Machine, Electroencephalography/methods, Movement/physiology, Algorithms, Machine Learning, Databases, Factual, Male
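
Euclidean alignment (EA) and maximum mean discrepancy (MMD), the two building blocks named above, can be sketched in their standard formulations (generic forms, not the paper's exact code). EA whitens each subject's trials by the inverse square root of that subject's mean covariance, centering covariances at identity across subjects; MMD scores how similar two feature distributions are.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power
from scipy.spatial.distance import cdist

def euclidean_align(trials):
    """trials: (n_trials, n_channels, n_samples) of one subject's EEG."""
    covs = np.stack([x @ x.T for x in trials])
    R = covs.mean(axis=0)                            # reference (mean) covariance
    R_inv_sqrt = fractional_matrix_power(R, -0.5)
    return np.einsum("ij,njk->nik", R_inv_sqrt, trials)

def mmd_rbf(X, Y, gamma=1.0):
    """Biased MMD^2 with an RBF kernel between feature sets X (n, d) and Y (m, d);
    lower values indicate more similar domains (used here to pick source subjects)."""
    k = lambda A, B: np.exp(-gamma * cdist(A, B, "sqeuclidean"))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```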
10.
bioRxiv ; 2023 Sep 23.
Article in English | MEDLINE | ID: mdl-37215021

ABSTRACT

Data integration to align cells across batches has become a cornerstone of single-cell data analysis, critically affecting downstream results. Yet, how much biological signal is erased during integration? Currently, there are no guidelines for when the biological differences between samples are separable from batch effects, and thus data integration usually involves a lot of guesswork: cells across batches should be aligned to be "appropriately" mixed, while preserving "main cell type clusters". We show evidence that current paradigms for single-cell data integration are unnecessarily aggressive, removing biologically meaningful variation. To remedy this, we present a novel statistical model and computationally scalable algorithm, CellANOVA, to recover biological signal that is lost during single-cell data integration. CellANOVA utilizes a "pool-of-controls" design concept, applicable across diverse settings, to separate unwanted variation from the biological variation of interest. When applied with existing integration methods, CellANOVA allows the recovery of subtle biological signals and corrects, to a large extent, the data distortion introduced by integration. Further, CellANOVA explicitly estimates cell- and gene-specific batch effect terms, which can be used to identify the cell types and pathways exhibiting the largest batch variations, providing clarity as to which biological signals can be recovered. These concepts are illustrated on studies of diverse designs, where the biological signals recovered by CellANOVA are validated by orthogonal assays. In particular, we show that CellANOVA is effective in the challenging case of single-cell and single-nuclei data integration, where the recovered biological signals are replicated in an independent study.

11.
Math Biosci Eng ; 20(3): 4560-4573, 2023 01.
Article in English | MEDLINE | ID: mdl-36896512

ABSTRACT

The non-stationary nature of electroencephalography (EEG) signals and individual variability make it challenging to obtain EEG signals from users by utilizing brain-computer interface techniques. Most existing transfer learning methods are based on batch learning in offline mode, which cannot adapt well to the changes generated by EEG signals in the online situation. To address this problem, a multi-source online transfer EEG classification algorithm based on source domain selection is proposed in this paper. Using a small number of labeled samples from the target domain, the source domain selection method selects, from multiple source domains, the source-domain data most similar to the target data. After training a classifier for each source domain, the proposed method adjusts the weight coefficients of each classifier according to its prediction results to avoid the negative transfer problem. This algorithm was applied to two publicly available motor imagery EEG datasets, BCI Competition IV Dataset IIa and BNCI Horizon 2020 Dataset 2, and achieved average accuracies of 79.29% and 70.86%, respectively, which are superior to those of several multi-source online transfer algorithms, confirming the effectiveness of the proposed algorithm.


Subjects
Algorithms, Brain-Computer Interfaces, Electroencephalography, Learning
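
One hedged reading of the classifier-weighting scheme described above: each source-domain classifier's vote is weighted by how well it predicts the few labeled target samples, so poorly transferring sources are down-weighted (mitigating negative transfer). The update rule below is illustrative, not the paper's:

```python
import numpy as np

def weighted_vote(predictions, weights):
    """predictions: per-source labels in {-1, +1} for one target sample."""
    return 1 if np.dot(weights, predictions) >= 0 else -1

def update_weights(weights, correct, lr=0.1):
    """Boost sources whose prediction matched the labeled target sample."""
    w = weights * np.where(correct, 1.0 + lr, 1.0 - lr)
    return w / w.sum()

# Example: three source classifiers; source 2 was wrong on the last labeled sample.
w = update_weights(np.array([1/3, 1/3, 1/3]), np.array([True, False, True]))
print(weighted_vote(np.array([1, -1, 1]), w))   # -> 1
```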
12.
Genome Biol ; 24(1): 163, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37434182

ABSTRACT

Multimodal measurements from single-cell sequencing technologies facilitate a comprehensive understanding of specific cellular and molecular mechanisms. However, simultaneous profiling of multiple modalities of single cells is challenging, and data integration remains elusive due to missing modalities and cell-cell correspondences. To address this, we developed a computational approach, Cross-Modality Optimal Transport (CMOT), which aligns cells within available multi-modal data (source) onto a common latent space and infers missing modalities for cells from another modality (target) of mapped source cells. CMOT outperforms existing methods in various applications, from the developing brain and cancers to immunology, and provides biological interpretations that improve cell-type and cancer classifications.


Subjects
Single-Cell Analysis, Single-Cell Analysis/methods
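
Optimal-transport alignment of the kind CMOT builds on can be sketched with a plain entropy-regularized Sinkhorn solver (a generic OT building block under uniform marginals, not the CMOT implementation); the resulting coupling matrix soft-matches cells across modalities.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized OT plan between uniform marginals; cost is (n, m)."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # (n, m) soft matching of cells

# Barycentric projection then maps each row-cell onto the other modality:
# X_mapped = (plan / plan.sum(axis=1, keepdims=True)) @ X_other
```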
13.
Patterns (N Y) ; 4(11): 100847, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-38035195

ABSTRACT

Single-cell techniques like Patch-seq have enabled the acquisition of multimodal data from individual neuronal cells, offering systematic insights into neuronal functions. However, these data can be heterogeneous and noisy. To address this, machine learning methods have been used to align cells from different modalities onto a low-dimensional latent space, revealing multimodal cell clusters. Using such methods can be challenging, though, without computational expertise or suitable computing infrastructure for the more computationally expensive methods. We therefore developed a cloud-based web application, MANGEM (multimodal analysis of neuronal gene expression, electrophysiology, and morphology). MANGEM provides a step-by-step, accessible, and user-friendly interface to machine-learning alignment methods for neuronal multimodal data. It can run asynchronously for large-scale data alignment, provide users with various downstream analyses of aligned cells, and visualize the analytic results. We demonstrate its usage by aligning multimodal data of neuronal cells in the mouse visual cortex.

14.
J Public Health Policy ; 43(2): 185-202, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35614203

ABSTRACT

Widespread destruction from the Yemeni Civil War (2014-present) triggered the world's largest cholera outbreak. We compiled a comprehensive health dataset and created dynamic maps to demonstrate spatiotemporal changes in cholera infections and war conflicts. We aligned and merged daily, weekly, and monthly epidemiological bulletins of confirmed cholera infections and daily conflict events and fatality records to create a dataset of weekly time series for Yemen at the governorate level (subnational regions administered by governors) from 4 January 2016 through 29 December 2019. We demonstrated the use of dynamic mapping for tracing the onset and spread of infection and manmade factors that amplify the outbreak. We report curated data and visualization techniques to further uncover associations between infectious disease outbreaks and risk factors and to better coordinate humanitarian aid and relief efforts during complex emergencies.


Subjects
Cholera, Cholera/epidemiology, Disease Outbreaks, Humans, Risk Factors, Time Factors, Yemen/epidemiology
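
The temporal-alignment step described above (mapping daily conflict records and mixed-frequency cholera bulletins onto a common weekly, per-governorate index, then merging) might look like the following pandas sketch; column names and values are illustrative, not from the study's data:

```python
import pandas as pd

conflict = pd.DataFrame({
    "date": pd.to_datetime(["2017-01-02", "2017-01-03", "2017-01-10"]),
    "governorate": ["Sanaa", "Sanaa", "Sanaa"],
    "fatalities": [3, 1, 4],
})
cholera = pd.DataFrame({
    "date": pd.to_datetime(["2017-01-05", "2017-01-12"]),
    "governorate": ["Sanaa", "Sanaa"],
    "cases": [120, 210],
})

def weekly(df, value_col):
    """Sum a daily series onto week-ending dates, per governorate."""
    return (df.set_index("date")
              .groupby("governorate")[value_col]
              .resample("W").sum()
              .reset_index())

merged = weekly(conflict, "fatalities").merge(
    weekly(cholera, "cases"), on=["governorate", "date"], how="outer")
print(merged)
```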
15.
Micromachines (Basel) ; 13(6)2022 Jun 10.
Article in English | MEDLINE | ID: mdl-35744539

ABSTRACT

For the successful application of brain-computer interface (BCI) systems, accurate recognition of electroencephalography (EEG) signals is a core issue. To address the differences among individual EEG signals and the scarcity of EEG data for classification and recognition, an attention-mechanism-based multi-scale convolutional network was designed, and a transfer-learning data alignment algorithm was then introduced to explore the application of transfer learning to motor imagery EEG signals. Dataset 2a of BCI Competition IV was used to verify the designed dual-channel attention-module migration-alignment convolutional neural network (MS-AFM). Experimental results showed that the classification recognition rate improved with the addition of the alignment algorithm and adaptive adjustment in transfer learning; the average classification recognition rate across nine subjects was 86.03%.

16.
J Am Soc Mass Spectrom ; 32(4): 996-1007, 2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33666432

ABSTRACT

Detection of arrival time shifts between ion mobility spectrometry (IMS) separations can limit achievable resolving power (Rp), particularly when multiple separations are summed or averaged, as commonly practiced in IMS. Such variations can be apparent in higher Rp measurements and are particularly evident in long path length traveling wave structures for lossless ion manipulations (SLIM) IMS due to their typically much longer separation times. Here, we explore data processing approaches employing single value alignment (SVA) and nonlinear dynamic time warping (DTW) to correct for variations between IMS separations, such as due to pressure fluctuations, to enable more effective spectrum summation for improving Rp and detection of low-intensity species. For multipass SLIM IMS separations, where narrow mobility range measurements have arrival times that can extend to several seconds, the SVA approach effectively corrected for such variations and significantly improved Rp for summed separations. However, SVA was much less effective for broad mobility range separations, such as obtained with multilevel SLIM IMS. Changes in ions' arrival times were observed to be correlated with small pressure changes, with approximately 0.6% relative arrival time shifts being common, sufficient to result in a loss of Rp for summed separations. Comparison of the approaches showed that DTW alignment performed similarly to SVA when used over a narrow mobility range but was significantly better (providing narrower peaks and higher signal intensities) for wide mobility range data. We found that the DTW approach increased Rp by as much as 115% for measurements in which 50 IMS separations over 2 s were summed. We conclude that DTW is superior to SVA for ultra-high-resolution broad mobility range SLIM IMS separations and leads to a large improvement in effective Rp, correcting for ion arrival time shifts regardless of the cause, as well as improving the detectability of low-abundance species. Our tool is publicly available for use with universal ion mobility format (.UIMF) and text (.txt) files.
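
Dynamic time warping, the nonlinear alignment that outperformed single value alignment for broad mobility ranges here, can be sketched in its textbook form (a generic 1-D implementation, not the published tool). SVA, by contrast, applies one global shift to the whole separation.

```python
import numpy as np

def dtw_path(a, b):
    """Accumulated-cost matrix and optimal warping path between 1-D signals a, b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                      # backtrack from the end
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[1:, 1:], path[::-1]

# A single-value alignment would instead shift b by one offset, e.g.:
# shift = int(np.argmax(a)) - int(np.argmax(b))
```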

17.
Acta Crystallogr A Found Adv ; 76(Pt 4): 432-457, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32608360

ABSTRACT

The general problem of finding a global rotation that transforms a given set of spatial coordinates and/or orientation frames (the 'test' data) into the best possible alignment with a corresponding set (the 'reference' data) is reviewed. For 3D point data, this 'orthogonal Procrustes problem' is often phrased in terms of minimizing a root-mean-square deviation (RMSD) corresponding to a Euclidean distance measure relating the two sets of matched coordinates. This article focuses on quaternion eigensystem methods that have been exploited to solve this problem for at least five decades in several different bodies of scientific literature, where they were discovered independently. While numerical methods for the eigenvalue solutions dominate much of this literature, it has long been realized that the quaternion-based RMSD optimization problem can also be solved using exact algebraic expressions based on the form of the quartic equation solution published by Cardano in 1545; focusing on these exact solutions exposes the structure of the entire eigensystem for the traditional 3D spatial-alignment problem. The structure of the less-studied orientation-data context is then explored, investigating how quaternion methods can be extended to solve the corresponding 3D quaternion orientation-frame alignment (QFA) problem, noting the interesting equivalence of this problem to the rotation-averaging problem, which also has been the subject of independent literature threads. The article concludes with a brief discussion of the combined 3D translation-orientation data alignment problem. Appendices are devoted to a tutorial on quaternion frames, a related quaternion technique for extracting quaternions from rotation matrices and a review of quaternion rotation-averaging methods relevant to the orientation-frame alignment problem. The supporting information covers novel extensions of quaternion methods to the 4D Euclidean spatial-coordinate alignment and 4D orientation-frame alignment problems, some miscellaneous topics, and additional details of the quartic algebraic eigenvalue problem.
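
The quaternion eigensystem solution reviewed here reduces to finding the leading eigenvector of a 4x4 matrix built from the cross-covariance of the matched, centered point sets (Horn's form). A compact numerical sketch (the article also covers the exact algebraic eigenvalue solutions):

```python
import numpy as np

def optimal_quaternion(test, ref):
    """test, ref: centered (N, 3) matched coordinates; rows correspond.
    Returns the unit quaternion (w, x, y, z) rotating test into ref."""
    S = test.T @ ref                      # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    K = np.array([
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx      ],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz      ],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy      ],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy],
    ])
    vals, vecs = np.linalg.eigh(K)        # eigenvalues ascending
    return vecs[:, -1]                    # eigenvector of the largest eigenvalue
```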

18.
PeerJ ; 6: e5843, 2019.
Article in English | MEDLINE | ID: mdl-30842892

ABSTRACT

Ecology has reached the point where data science competitions, in which multiple groups solve the same problem using the same data by different methods, will be productive for advancing quantitative methods for tasks such as species identification from remote sensing images. We ran a competition to help improve three tasks that are central to converting images into information on individual trees: (1) crown segmentation, for identifying the location and size of individual trees; (2) alignment, to match ground-truthed trees with remote sensing; and (3) species classification of individual trees. Six teams (comprising 16 individual participants) submitted predictions for one or more tasks. The crown segmentation task proved to be the most challenging, with the highest-performing algorithm yielding only 34% overlap between remotely sensed crowns and the ground-truthed trees; however, most algorithms performed better on large trees. For the alignment task, an algorithm that minimized the difference, in both position and tree size, between ground-truthed and remotely sensed crowns yielded a perfect alignment. In hindsight, this task was oversimplified by including only targeted trees instead of all possible remotely sensed crowns. Several algorithms performed well for species classification, with the highest-performing algorithm correctly classifying 92% of individuals and performing well on both common and rare species. Comparisons of results across algorithms provided a number of insights for improving the overall accuracy in extracting ecological information from remote sensing. Our experience suggests that this kind of competition can benefit methods development in ecology and biology more broadly.
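
The winning alignment idea (jointly minimizing position and size differences between ground-truthed and remotely sensed crowns) maps naturally onto a linear assignment problem. A sketch; the combined cost and its weighting are assumptions, not the competitor's exact objective:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_crowns(ground_xy, ground_size, sensed_xy, sensed_size, size_weight=1.0):
    """Index pairs (ground_i, sensed_j) minimizing summed position + size mismatch.
    ground_xy, sensed_xy: (n, 2) and (m, 2) centers; sizes: (n,) and (m,)."""
    pos_cost = np.linalg.norm(ground_xy[:, None, :] - sensed_xy[None, :, :], axis=2)
    size_cost = np.abs(ground_size[:, None] - sensed_size[None, :])
    rows, cols = linear_sum_assignment(pos_cost + size_weight * size_cost)
    return list(zip(rows, cols))
```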

19.
PeerJ ; 7: e6101, 2019.
Article in English | MEDLINE | ID: mdl-30842894

ABSTRACT

To accelerate scientific progress on remote tree classification, as well as biodiversity and ecology sampling, the National Institute of Standards and Technology created a community-based competition in which scientists were invited to contribute informatics methods for classifying tree species and genus using crown-level images of trees. We classified tree species and genus at the pixel level using hyperspectral and LiDAR observations. We compared three algorithms that have been implemented extensively across a broad range of research applications: support vector machines, random forests, and the multilayer perceptron. At the pixel level, the multilayer perceptron (MLP) algorithm classified species and genus with high accuracy (92.7% and 95.9%, respectively) on the training data and performed better than the other two algorithms (85.8-93.5%). This indicates promise for the MLP algorithm for tree-species classification based on hyperspectral and LiDAR observations, and coincides with a growing body of research in which neural-network-based algorithms outperform other types of classification algorithm for machine vision. To aggregate patterns across images, we used an ensemble approach that averages the pixel-level outputs of the MLP algorithm to classify species at the crown level. The average accuracy of these classifications on the test set was 68.8% for the nine species.
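
The crown-level ensemble step can be sketched generically: average the pixel-level class probabilities output by the MLP over all pixels in a crown, then take the argmax (names and values below are illustrative):

```python
import numpy as np

def crown_label(pixel_probs, class_names):
    """pixel_probs: (n_pixels, n_classes) softmax outputs for one crown."""
    return class_names[int(pixel_probs.mean(axis=0).argmax())]

# Example: three pixels of one crown, three candidate species.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.6, 0.3, 0.1]])
print(crown_label(probs, ["species_a", "species_b", "species_c"]))  # species_a
```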

20.
J Agric Food Chem ; 67(18): 5289-5302, 2019 May 08.
Article in English | MEDLINE | ID: mdl-30994349

ABSTRACT

Comprehensive two-dimensional gas chromatography coupled with mass spectrometric detection (GC × GC-MS) offers an information-rich basis for effective chemical fingerprinting of food. However, GC × GC-MS yields 2D peak patterns (i.e., sample 2D fingerprints) whose consistency may be affected by variables related either to the analytical platform or to the experimental parameters adopted for the analysis. This study focuses on the complex volatile fraction of extra-virgin olive oil and addresses 2D peak-pattern variations, including MS signal fluctuations, as they may occur in long-term studies of pedo-climatic, harvest-year, or shelf-life changes. The 2D-pattern misalignments are forced by changing chromatographic settings and MS acquisition parameters. All procedural steps preceding pattern recognition by template matching are analyzed, and a rational workflow is defined to accurately realign patterns and analyte metadata. The signal-to-noise ratio (SNR) detection threshold, reference spectra extraction, and similarity match-factor threshold are critical to avoid false-negative matches; distance thresholds and polynomial transform parameters are key for effective template matching. In targeted analysis (supervised workflow) with optimized parameters, method accuracy reaches 92.5% (i.e., the percentage of true-positive matches), while for combined untargeted and targeted (UT) fingerprinting (unsupervised workflow), accuracy reaches 97.9%. Response normalization is also examined; multiple-internal-standard normalization performs well, effectively compensating for discrimination occurring during injection of highly volatile compounds. The resulting workflow is simple, effective, and time efficient.


Subjects
Gas Chromatography-Mass Spectrometry/methods, Olive Oil/chemistry, Volatile Organic Compounds/chemistry, Gas Chromatography-Mass Spectrometry/instrumentation, Time Factors