Results 1 - 3 of 3
1.
Proc Natl Acad Sci U S A ; 121(32): e2403449121, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39088394

ABSTRACT

Most problems within and beyond the scientific domain can be framed at one of the following three levels of complexity of function approximation. Type 1: Approximate an unknown function given input/output data. Type 2: Consider a collection of variables and functions, some of which are unknown, indexed by the nodes and hyperedges of a hypergraph (a generalized graph where edges can connect more than two vertices). Given partial observations of the variables of the hypergraph (satisfying the functional dependencies imposed by its structure), approximate all the unobserved variables and unknown functions. Type 3: Expanding on Type 2, if the hypergraph structure itself is unknown, use partial observations of the variables of the hypergraph to discover its structure and approximate its unknown functions. These hypergraphs offer a natural platform for organizing, communicating, and processing computational knowledge. While most scientific problems can be framed as the data-driven discovery of unknown functions in a computational hypergraph whose structure is known (Type 2), many require the data-driven discovery of the structure (connectivity) of the hypergraph itself (Type 3). We introduce an interpretable Gaussian Process (GP) framework for such (Type 3) problems that does not require randomization of the data, access to or control over its sampling, or sparsity of the unknown functions in a known or learned basis. Its polynomial complexity, which contrasts sharply with the super-exponential complexity of causal inference methods, is enabled by the nonlinear ANOVA capabilities of GPs used as a sensing mechanism.
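
As a concrete instance of the Type 1 baseline, here is a minimal Gaussian Process regression sketch in plain numpy. The RBF kernel, its lengthscale, the noise level, and the toy sine data are assumptions for illustration only, not the paper's configuration; the paper's Type 3 framework builds hypergraph discovery on top of GPs used in this role.

    # Minimal GP regression sketch (Type 1), numpy only. Kernel choice,
    # lengthscale, and toy data are illustrative assumptions.
    import numpy as np

    def rbf_kernel(a, b, lengthscale=0.5):
        # Squared-exponential kernel k(x, y) = exp(-(x - y)^2 / (2 l^2)).
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 2.0 * np.pi, 20)           # inputs
    y = np.sin(X) + 0.05 * rng.standard_normal(20)  # noisy outputs

    # GP posterior mean at test points: K_*x (K_xx + sigma^2 I)^{-1} y.
    Xs = np.linspace(0.0, 2.0 * np.pi, 100)
    Kxx = rbf_kernel(X, X) + 0.05**2 * np.eye(len(X))
    mean = rbf_kernel(Xs, X) @ np.linalg.solve(Kxx, y)
    print(np.max(np.abs(mean - np.sin(Xs))))        # approximation error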

2.
Polymers (Basel) ; 16(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000705

ABSTRACT

Up to the 1930s, the Italian pictorialism movement dominated photography, and many handcrafted procedures began to appear. Each operator had his own working method and his own secrets for creating special effects that departed from the standard processes. Here, a methodology combining X-ray fluorescence and infrared spectroscopy with unsupervised learning techniques was developed on an unconventional Italian photographic print collection (the Piero Vanni Collection, 1889-1939) to unveil the artistic techniques through the extraction of spectroscopic benchmarks. The methodology allowed the detection of hidden elements, such as iodine and manganese in silver halide printing, highlighted slight differences within the same printing technique, and unveiled the stylistic practice. Spectroscopic benchmarks were extracted to identify the elemental and molecular fingerprints of the layers, as the oil-based prints were obscured by the proteinaceous binder. The pigments were identified as silicates or iron oxides introduced into the solution, or they retraced the practice of reusing materials to produce completely different printing techniques. In general, four main groups were extracted, recreating the 'artistic palette' of the artist's unconventional photography. The four groups were the following: (1) the dichromate salts, characterized by Cr, Fe, K, potassium dichromate, and gum arabic bands; (2) the silver halide emulsions on the baryta layer, characterized by Ag, Ba, Sr, Mn, Fe, S, gelatin, and albumen; (3) the carbon prints, benchmarked by K, Cr, dichromate salts, and pigmented gelatin; and (4) the heterogeneous class of bromoil prints, characterized by Ba, Fe, Cr, Ca, K, Ag, Si, dichromate salts, and iron-based pigments. Some exceptions were found, such as the baryta layer appearing within the gum bichromate groups or the use of albumen with silver particles suspended in gelatin, underlining the unconventional photography of the end of the 19th century.
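
The pipeline described above pairs spectroscopy with unsupervised learning to recover the four groups. A minimal sketch of that style of grouping, assuming k-means over standardized, made-up XRF/IR feature columns (the authors' actual features and clustering algorithm are not specified here):

    # Unsupervised grouping sketch over hypothetical spectral features.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Rows = prints; columns = hypothetical XRF/IR intensities:
    # [Cr, Fe, K, Ag, Ba, gelatin band, gum arabic band]
    features = np.array([
        [0.9, 0.4, 0.8, 0.0, 0.1, 0.1, 0.9],  # dichromate-salt-like
        [0.8, 0.3, 0.7, 0.0, 0.0, 0.2, 0.8],
        [0.1, 0.3, 0.1, 0.9, 0.8, 0.9, 0.0],  # silver-halide-like
        [0.0, 0.4, 0.2, 0.8, 0.7, 0.8, 0.1],
        [0.7, 0.1, 0.7, 0.0, 0.0, 0.9, 0.1],  # carbon-print-like
        [0.6, 0.0, 0.8, 0.1, 0.1, 0.8, 0.0],
        [0.6, 0.7, 0.5, 0.3, 0.6, 0.4, 0.2],  # bromoil-like
        [0.7, 0.8, 0.4, 0.2, 0.5, 0.3, 0.1],
    ])

    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # one cluster label per print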

3.
Bioprocess Biosyst Eng ; 42(2): 245-256, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30377782

ABSTRACT

Root cause analysis (RCA) is one of the most prominent tools used to comprehensively evaluate a biopharmaceutical production process. Despite its widespread use in industry, the Food and Drug Administration has observed many unsuitable approaches to RCA in recent years. The reasons for these unsuitable approaches are the use of incorrect variables during the analysis and a lack of process understanding, both of which impede correct model interpretation. Two major approaches to performing RCA currently dominate the chemical and pharmaceutical industries: raw data analysis and the feature-based approach. Both techniques are able to identify the significant variables causing the variance of the response. Although they differ in how the data are unfolded, both concepts use the same tools, such as principal component analysis and partial least squares regression. In this article we demonstrate the strengths and weaknesses of both approaches. We show that fusing the two yields a comprehensive and effective workflow that not only increases process understanding but also saves analysis time and reduces the data-mining effort by easing the detection of the most important variables within a given dataset. We demonstrate this workflow with an example. Finally, the resulting process knowledge can be translated into new hypotheses, which can be tested experimentally and thereby effectively improve process robustness.
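
Both RCA approaches named above lean on the same multivariate tools, principal component analysis and partial least squares regression. A minimal sketch of that tooling in Python with scikit-learn, on synthetic batch data; the variable layout, the response, and the reading of large PLS coefficients as root-cause pointers are illustrative assumptions, not the article's exact workflow:

    # PCA + PLS sketch for multivariate RCA on synthetic batch data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    # Rows = batches, columns = hypothetical process variables.
    X = rng.standard_normal((30, 6))
    # Response (e.g. titer) driven mainly by variables 0 and 3.
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(30)

    # PCA: directions explaining the variance of the process data.
    pca = PCA(n_components=2).fit(X)
    print("explained variance:", pca.explained_variance_ratio_)

    # PLS: variables covarying most with the response; large absolute
    # coefficients flag candidate root-cause variables.
    pls = PLSRegression(n_components=2).fit(X, y)
    print("PLS coefficients:", pls.coef_.ravel())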


Subjects
Data Science/methods , Drug Industry/trends , Root Cause Analysis , Workflow , Animals , Bioreactors , Chlorocebus aethiops , Fermentation , Multivariate Analysis , Poliovirus , Principal Component Analysis , Regression Analysis , Software , Vero Cells