1.
NMR Biomed ; 37(1): e5028, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37669779

ABSTRACT

We propose a deep learning (DL) model and a hyperparameter optimization strategy to reconstruct T1 and T2 maps acquired with the magnetic resonance fingerprinting (MRF) methodology. We applied two different MRF sequence routines to acquire images of ex vivo rat brain phantoms using a 7-T preclinical scanner. Subsequently, the DL model was trained using experimental data only, completely excluding the use of any theoretical MRI signal simulator. The best combination of the DL parameters was found by an automatic hyperparameter optimization strategy, whose key aspect is to include all the parameters in the fit, allowing the simultaneous optimization of the neural network architecture, the structure of the DL model, and the supervised learning algorithm. By comparing the reconstruction performance of the DL technique with that of the traditional dictionary-based method on an independent dataset, the DL approach was shown to reduce the mean percentage relative error by a factor of 3 for T1 and by a factor of 2 for T2, and to reduce the computational time by a factor of at least 37. Furthermore, the proposed DL method maintains comparable reconstruction performance even with fewer MRF images and a reduced k-space sampling percentage compared with the dictionary-based method. Our results suggest that the proposed DL methodology may improve reconstruction accuracy as well as speed up MRF for preclinical and, prospectively, clinical investigations.


Subject(s)
Deep Learning, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Brain/diagnostic imaging, Prospective Studies, Magnetic Resonance Imaging/methods, Phantoms, Imaging, Magnetic Resonance Spectroscopy
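A minimal sketch of the kind of joint hyperparameter search described above, using scikit-learn's RandomizedSearchCV over an MLPRegressor so that the network architecture and the learning-algorithm settings are optimized together; the fingerprint data, target ranges, and search space below are synthetic illustrations, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a single random search that jointly
# optimizes the network architecture and the training hyperparameters of a model
# mapping MRF fingerprints to (T1, T2) targets. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))             # 500 voxels, 64-point fingerprints (synthetic)
y = rng.uniform(0.1, 2.5, size=(500, 2))   # targets: (T1, T2) in seconds (synthetic)

search_space = {
    "hidden_layer_sizes": [(64,), (128, 64), (256, 128, 64)],  # architecture choices
    "activation": ["relu", "tanh"],
    "alpha": np.logspace(-6, -2, 5),            # L2 regularisation strength
    "learning_rate_init": np.logspace(-4, -2, 5),
    "batch_size": [32, 64, 128],
}

search = RandomizedSearchCV(
    MLPRegressor(max_iter=2000, early_stopping=True),
    search_space, n_iter=20, cv=3,
    scoring="neg_mean_absolute_error", random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```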
2.
Sci Rep ; 13(1): 18258, 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37880355

ABSTRACT

Simulating quantum imaginary-time evolution (QITE) is a significant promise of quantum computation. However, the known algorithms are either probabilistic (repeat until success), with impractically small success probabilities, or coherent (quantum amplitude amplification), with circuit depths and ancillary-qubit counts unrealistically large in the mid-term. Our main contribution is a new generation of deterministic, high-precision QITE algorithms that are significantly more amenable to experimental implementation. A surprisingly simple idea is behind them: partitioning the evolution into a sequence of fragments that are run probabilistically. This considerably reduces the circuit depth wasted each time a run fails. Remarkably, the resulting overall runtime is asymptotically better than in coherent approaches, and the hardware requirements are even milder than in probabilistic ones. Our findings are especially relevant for the early fault-tolerance stages of quantum hardware.
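A toy numerical illustration (not the paper's algorithm) of why fragmenting the evolution helps: when the whole circuit is repeated until success, the expected depth grows with the overall failure probability, whereas running fragments probabilistically wastes at most one fragment per failure. The total depth and the per-fragment success probability below are arbitrary assumptions.

```python
# Toy illustration (not the paper's algorithm): expected circuit depth spent when an
# imaginary-time evolution of total depth D is run probabilistically as one block
# versus as n fragments of depth D/n, each repeated until success.
D = 1000.0   # total circuit depth of the full evolution (arbitrary units, assumed)
p = 0.8      # assumed success probability of a single fragment (illustrative)

def expected_depth_fragmented(n_fragments: int) -> float:
    """Each of n fragments (depth D/n) is repeated until success;
    the geometric mean number of tries per fragment is 1/p."""
    return n_fragments * (D / n_fragments) / p

def expected_depth_monolithic(n_fragments: int) -> float:
    """Rough analogue of a single repeat-until-success run: it only succeeds
    if all n 'fragments' succeed at once, so the success probability is p**n."""
    return D / (p ** n_fragments)

for n in (1, 2, 5, 10):
    print(n, expected_depth_monolithic(n), expected_depth_fragmented(n))
```

With these toy numbers the fragmented cost stays at D/p regardless of n, while the monolithic cost grows like D/p^n, which is the qualitative point made in the abstract.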

3.
Eur Radiol Exp ; 7(1): 3, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36690869

ABSTRACT

BACKGROUND: To develop a pipeline for the automatic extraction of quantitative metrics and radiomic features from lung computed tomography (CT) and to develop artificial intelligence (AI) models supporting the differential diagnosis between coronavirus disease 2019 (COVID-19) and other viral pneumonia (non-COVID-19). METHODS: Chest CT scans of 1,031 patients (811 for model building; 220 as an independent validation set, IVS) with a positive swab for severe acute respiratory syndrome coronavirus 2 (647 COVID-19) or other respiratory viruses (384 non-COVID-19) were segmented automatically. A Gaussian model, based on the HU histogram distribution describing the well-aerated and ill portions of the lungs, was optimised to calculate quantitative metrics (QM, n = 20) in both lungs (2L) and in four geometrical subdivisions (GS) (upper front, lower front, upper dorsal, lower dorsal; n = 80). Radiomic features (RF) of first (RF1, n = 18) and second (RF2, n = 120) order were extracted from 2L using the PyRadiomics tool. The extracted metrics were used to develop four multilayer-perceptron classifiers, built with different combinations of QM and RF: Model1 (RF1-2L); Model2 (QM-2L, QM-GS); Model3 (RF1-2L, RF2-2L); Model4 (RF1-2L, QM-2L, GS-2L, RF2-2L). RESULTS: The classifiers showed accuracies from 0.71 to 0.80 and areas under the receiver operating characteristic curve (AUC) from 0.77 to 0.87 in differentiating COVID-19 from non-COVID-19 pneumonia. The best results were obtained with Model3 (AUC 0.867 ± 0.008) and Model4 (AUC 0.870 ± 0.011). On the IVS, the AUC values were 0.834 ± 0.008 for Model3 and 0.828 ± 0.011 for Model4. CONCLUSIONS: Four AI-based models for classifying patients as COVID-19 or non-COVID-19 viral pneumonia showed good diagnostic performance that could support clinical decisions.


Subject(s)
COVID-19, Pneumonia, Viral, Humans, Artificial Intelligence, Retrospective Studies, SARS-CoV-2, Tomography, X-Ray Computed/methods
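The following sketch illustrates, on synthetic data, the two ingredients named in the abstract: a two-component Gaussian model of the HU histogram producing simple quantitative metrics, and a multilayer-perceptron classifier trained on them. The HU values, labels, and thresholds are invented for illustration and are not the study's pipeline or its PyRadiomics features.

```python
# Illustrative sketch (not the study's pipeline): a two-component Gaussian model of
# the lung HU histogram to derive simple quantitative metrics, followed by a
# multilayer-perceptron classifier. All data below are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def lung_metrics(hu_values: np.ndarray) -> np.ndarray:
    """Fit a 2-component Gaussian mixture to voxel HU values and return the
    (mean, std, weight) of each component, sorted by component mean."""
    gm = GaussianMixture(n_components=2, random_state=0).fit(hu_values.reshape(-1, 1))
    order = np.argsort(gm.means_.ravel())
    return np.concatenate([
        gm.means_.ravel()[order],
        np.sqrt(gm.covariances_.ravel()[order]),
        gm.weights_[order],
    ])

def fake_patient(ill_fraction: float) -> np.ndarray:
    """Synthetic lung: well-aerated voxels near -800 HU, ill portion near -300 HU."""
    n = 2000
    n_ill = int(n * ill_fraction)
    return np.concatenate([
        rng.normal(-800, 60, n - n_ill),   # well-aerated lung
        rng.normal(-300, 80, n_ill),       # consolidated / ground-glass lung
    ])

X = np.array([lung_metrics(fake_patient(f)) for f in rng.uniform(0.05, 0.6, 200)])
y = (X[:, -1] > 0.3).astype(int)           # toy label standing in for COVID / non-COVID

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```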
4.
Rev Sci Instrum ; 93(10): 104704, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36319343

ABSTRACT

We present a control and measurement setup for superconducting qubits based on the Xilinx 16-channel radio-frequency system-on-chip (RFSoC) device. The proposed setup consists of four parts: multiple RFSoC boards, a scheme to synchronize every digital-to-analog converter (DAC) and analog-to-digital converter (ADC) channel across the boards, a low-noise direct-current supply for tuning the qubit frequency, and cloud access for performing experiments remotely. We also designed the setup to be free of physical mixers. The RFSoC boards directly generate microwave pulses using sixteen DAC channels up to the third Nyquist zone, and these pulses are directly sampled by the eight ADC channels between the fifth and the ninth zones.
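A small worked example of the Nyquist-zone bookkeeping behind direct microwave generation and sampling: where a DAC image of a first-zone tone lands in the third zone, and how an undersampling ADC folds a higher-frequency tone back into baseband. The sample rates and tone frequencies are illustrative, not the paper's settings.

```python
# Worked example (illustrative sample rates, not the paper's settings): locate the
# DAC images of a synthesized tone and the alias produced by an undersampling ADC.
def nyquist_zone(f, fs):
    """1-indexed Nyquist zone of frequency f for sample rate fs."""
    return int(f // (fs / 2)) + 1

def alias(f, fs):
    """Frequency observed after sampling a tone at f with rate fs."""
    f_mod = f % fs
    return f_mod if f_mod <= fs / 2 else fs - f_mod

fs_dac = 6.0   # GHz, illustrative DAC sample rate
f_fund = 1.3   # GHz tone programmed in the first Nyquist zone
images = [n * fs_dac + s * f_fund
          for n in range(0, 3) for s in (+1, -1) if n * fs_dac + s * f_fund > 0]
third_zone = [f for f in images if nyquist_zone(f, fs_dac) == 3]
print("third-Nyquist-zone image (GHz):", third_zone)

fs_adc = 2.0       # GHz, illustrative ADC sample rate
f_readout = 7.1    # GHz readout tone, well above fs_adc / 2
print("ADC zone:", nyquist_zone(f_readout, fs_adc),
      "aliased to (GHz):", alias(f_readout, fs_adc))
```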

5.
Phys Rev Lett ; 123(13): 132001, 2019 Sep 27.
Article in English | MEDLINE | ID: mdl-31697558

ABSTRACT

Modern global analyses of the structure of the proton include collider measurements which probe energies well above the electroweak scale. While these provide powerful constraints on the parton distribution functions (PDFs), they are also sensitive to beyond the standard model (BSM) dynamics if these affect the fitted distributions. Here we present a first simultaneous determination of the PDFs and BSM effects from deep-inelastic structure function data by means of the NNPDF framework. We consider representative four-fermion operators from the SM effective field theory (SMEFT), quantify to what extent their effects modify the fitted PDFs, and assess how the resulting bounds on the SMEFT degrees of freedom are modified. Our results demonstrate how BSM effects that might otherwise be reabsorbed into the PDFs can be systematically disentangled.
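A toy least-squares version of the idea of a simultaneous fit (not the NNPDF framework): pseudo-data depend on a PDF-like normalisation and a SMEFT-like coefficient whose effect grows with energy, and fitting both at once exposes their correlation. All numbers below are fabricated for illustration.

```python
# Toy illustration of a simultaneous fit (not the NNPDF analysis): pseudo-data that
# depend on a PDF-like normalisation `a` and a SMEFT-like coefficient `c`. Fitting
# them together propagates their correlation; fixing `a` first would let BSM effects
# be partly reabsorbed into it.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
Q2 = np.linspace(10.0, 1000.0, 30)   # "energy" of each pseudo-data point

def structure_function(Q2, a, c):
    # a: PDF-like normalisation; c: BSM coefficient whose effect grows with Q2
    return a * (1.0 + 0.1 * np.log(Q2)) * (1.0 + c * Q2 / 1.0e4)

truth = (1.0, 0.5)
data = structure_function(Q2, *truth) * (1.0 + 0.02 * rng.normal(size=Q2.size))

popt, pcov = curve_fit(structure_function, Q2, data, p0=(1.0, 0.0), sigma=0.02 * data)
print("fitted (a, c):", popt)
print("a-c correlation:", pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1]))
```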

6.
Eur Phys J C Part Fields ; 78(5): 408, 2018.
Article in English | MEDLINE | ID: mdl-30996667

ABSTRACT

We present a determination of the strong coupling constant α_s(m_Z) based on the NNPDF3.1 determination of parton distributions, which for the first time includes constraints from jet production, top-quark pair differential distributions, and the Z p_T distributions using exact NNLO theory. Our result is based on a novel extension of the NNPDF methodology - the correlated replica method - which allows for a simultaneous determination of α_s and the PDFs with all correlations between them fully taken into account. We study in detail all relevant sources of experimental, methodological and theoretical uncertainty. At NNLO we find α_s(m_Z) = 0.1185 ± 0.0005 (exp) ± 0.0001 (meth), showing that methodological uncertainties are negligible. We conservatively estimate the theoretical uncertainty due to missing higher order QCD corrections (N3LO and beyond) from half the shift between the NLO and NNLO α_s values, finding Δα_s(th) = 0.0011.
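The correlated replica method itself is part of the NNPDF framework; the toy below only sketches the statistical idea: for each Monte Carlo replica a χ² profile in α_s is fitted with a parabola, and the spread of the per-replica minima provides the uncertainty with PDF correlations included. The profiles here are synthetic.

```python
# Toy version of the correlated-replica idea (not the NNPDF code): per replica,
# evaluate a chi^2 profile in alpha_s, fit a parabola, and read off its minimum;
# the spread of minima over replicas gives the correlated uncertainty.
import numpy as np

rng = np.random.default_rng(3)
alphas_grid = np.linspace(0.114, 0.122, 9)
true_alphas = 0.1185

minima = []
for _ in range(100):                               # 100 Monte Carlo replicas
    # Each replica has a slightly shifted chi^2 profile (toy model of fluctuations).
    shift = rng.normal(0.0, 0.0005)
    chi2 = 1.0e4 * (alphas_grid - (true_alphas + shift)) ** 2 \
           + rng.normal(0, 0.02, alphas_grid.size)
    a, b, c = np.polyfit(alphas_grid, chi2, 2)     # parabola: a x^2 + b x + c
    minima.append(-b / (2.0 * a))

minima = np.array(minima)
print(f"alpha_s(mZ) = {minima.mean():.4f} +- {minima.std():.4f}")
```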

7.
Eur Phys J C Part Fields ; 77(8): 516, 2017.
Article in English | MEDLINE | ID: mdl-28943800

ABSTRACT

We present NNFF1.0, a new determination of the fragmentation functions (FFs) of charged pions, charged kaons, and protons/antiprotons from an analysis of single-inclusive hadron production data in electron-positron annihilation. This determination, performed at leading, next-to-leading, and next-to-next-to-leading order in perturbative QCD, is based on the NNPDF methodology, a fitting framework designed to provide a statistically sound representation of FF uncertainties and to minimise any procedural bias. We discuss novel aspects of the methodology used in this analysis, namely an optimised parametrisation of FFs and a more efficient [Formula: see text] minimisation strategy, and validate the FF fitting procedure by means of closure tests. We then present the NNFF1.0 sets, and discuss their fit quality, their perturbative convergence, and their stability upon variations of the kinematic cuts and the fitted dataset. We find that the systematic inclusion of higher-order QCD corrections significantly improves the description of the data, especially in the small-z region. We compare the NNFF1.0 sets to other recent sets of FFs, finding in general a reasonable agreement, but also important differences. Together with existing sets of unpolarised and polarised parton distribution functions (PDFs), FFs and PDFs are now available from a common fitting framework for the first time.

8.
Eur Phys J C Part Fields ; 77(10): 663, 2017.
Article in English | MEDLINE | ID: mdl-31997920

ABSTRACT

We present a new set of parton distributions, NNPDF3.1, which updates NNPDF3.0, the first global set of PDFs determined using a methodology validated by a closure test. The update is motivated by recent progress in methodology and available data, and involves both. On the methodological side, we now parametrize and determine the charm PDF alongside the light-quark and gluon ones, thereby increasing from seven to eight the number of independent PDFs. On the data side, we now include the D0 electron and muon W asymmetries from the final Tevatron dataset, the complete LHCb measurements of W and Z production in the forward region at 7 and 8 TeV, and new ATLAS and CMS measurements of inclusive jet and electroweak boson production. We also include for the first time top-quark pair differential distributions and the transverse momentum of the Z bosons from ATLAS and CMS. We investigate the impact of parametrizing charm and provide evidence that the accuracy and stability of the PDFs are thereby improved. We study the impact of the new data by producing a variety of determinations based on reduced datasets. We find that both improvements have a significant impact on the PDFs, with some substantial reductions in uncertainties, but with the new PDFs generally in agreement with the previous set at the one-sigma level. The most significant changes are seen in the light-quark flavor separation, and in increased precision in the determination of the gluon. We explore the implications of NNPDF3.1 for LHC phenomenology at Run II, compare with recent LHC measurements at 13 TeV, provide updated predictions for Higgs production cross-sections and discuss the strangeness and charm content of the proton in light of our improved dataset and methodology. The NNPDF3.1 PDFs are delivered for the first time both as Hessian sets, and as optimized Monte Carlo sets with a compressed number of replicas.
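Released NNPDF sets are typically accessed through LHAPDF; the sketch below assumes that LHAPDF's Python bindings are installed and that a grid named NNPDF31_nnlo_as_0118 (the usual naming convention, to be verified against the LHAPDF catalogue) has been downloaded.

```python
# Sketch of querying a released set through LHAPDF's Python bindings.
# Assumes LHAPDF is installed and the grid "NNPDF31_nnlo_as_0118" is available;
# the set name follows the usual convention but should be verified.
import numpy as np
import lhapdf

pdfset = lhapdf.getPDFSet("NNPDF31_nnlo_as_0118")
members = pdfset.mkPDFs()                 # central member plus error members

x, Q = 0.01, 100.0                        # momentum fraction and scale in GeV
gluon = np.array([m.xfxQ(21, x, Q) for m in members])   # x*g(x, Q) per member

print(f"x g(x, Q) = {gluon[0]:.4f}")
print(f"member spread = {gluon[1:].std():.4f}")
```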

9.
Eur Phys J C Part Fields ; 76(11): 647, 2016.
Article in English | MEDLINE | ID: mdl-28316495

ABSTRACT

We present an unbiased determination of the charm content of the proton, in which the charm parton distribution function (PDF) is parametrized on the same footing as the light quarks and the gluon in a global PDF analysis. This determination relies on the NLO calculation of deep-inelastic structure functions in the FONLL scheme, generalized to account for massive charm-initiated contributions. When the EMC charm structure function dataset is included, it is well described by the fit, and PDF uncertainties in the fitted charm PDF are significantly reduced. We then find that the fitted charm PDF vanishes within uncertainties at a scale [Formula: see text] GeV for all [Formula: see text], independent of the value of [Formula: see text] used in the coefficient functions. We also find some evidence that the charm PDF at large [Formula: see text] and low scales does not vanish, but rather has an "intrinsic" component, very weakly scale dependent and almost independent of the value of [Formula: see text], carrying less than [Formula: see text] of the total momentum of the proton. The uncertainties in all other PDFs are only slightly increased by the inclusion of fitted charm, while the dependence of these PDFs on [Formula: see text] is reduced. The increased stability with respect to [Formula: see text] persists at high scales and is the main implication of our results for LHC phenomenology. Our results show that if the EMC data are correct, then the usual approach in which charm is perturbatively generated leads to biased results for the charm PDF, though at small x this bias could be reabsorbed if the uncertainty due to the charm mass and missing higher orders were included. We show that LHC data for processes, such as high [Formula: see text] and large rapidity charm pair production and [Formula: see text] production, have the potential to confirm or disprove the implications of the EMC data.

10.
Eur Phys J C Part Fields ; 76(4): 205, 2016.
Article in English | MEDLINE | ID: mdl-28260973

ABSTRACT

We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes.
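A linear-algebra toy of the underlying idea (not the SM-PDF code): collect the shifts of the observables of interest along every Hessian eigenvector direction and use a singular value decomposition to see how few directions actually matter. The shift matrix below is synthetic.

```python
# Toy sketch of the idea behind specialized minimal sets (not the SM-PDF code):
# given the variation of N observables along many Hessian eigenvector directions,
# an SVD reveals how few directions are needed to reproduce those variations.
import numpy as np

rng = np.random.default_rng(4)
n_eig, n_obs = 60, 8     # eigenvectors in the prior, observables of interest

# Matrix of prediction shifts: row = eigenvector, column = observable (synthetic,
# constructed so that only a handful of directions matter).
basis = rng.normal(size=(5, n_obs))
shifts = rng.normal(size=(n_eig, 5)) @ basis + 1e-3 * rng.normal(size=(n_eig, n_obs))

u, s, vt = np.linalg.svd(shifts, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
n_needed = int(np.searchsorted(explained, 0.999) + 1)
print("eigenvector directions needed for 99.9% of the variation:", n_needed)
```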

11.
Eur Phys J C Part Fields ; 75(10): 474, 2015.
Article in English | MEDLINE | ID: mdl-26457064

ABSTRACT

The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
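A toy sketch of replica compression (not the CMC-PDF algorithm): pick a subset of replicas whose mean and standard deviation over a grid of points best match those of the full combined ensemble, here with a crude random search; the ensemble is synthetic.

```python
# Toy sketch of replica compression (not the CMC-PDF algorithm): select a subset of
# replicas whose mean and standard deviation best match the full combined ensemble.
import numpy as np

rng = np.random.default_rng(5)
full = rng.normal(size=(900, 50))    # 900 combined replicas x 50 (x, Q) grid points

def distance(subset):
    """Mismatch in mean and std between the subset and the full ensemble."""
    return (np.abs(subset.mean(0) - full.mean(0)).sum()
            + np.abs(subset.std(0) - full.std(0)).sum())

best_idx, best_d = None, np.inf
for _ in range(2000):                # crude random search; the real method is smarter
    idx = rng.choice(full.shape[0], size=100, replace=False)
    d = distance(full[idx])
    if d < best_d:
        best_idx, best_d = idx, d

print("best mismatch over the grid:", best_d)
```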

12.
Eur Phys J C Part Fields ; 75(8): 369, 2015.
Article in English | MEDLINE | ID: mdl-26300690

ABSTRACT

We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) that has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather small number of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.
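A minimal sketch of the linear-basis step: deviations of the replicas from the central value are expressed in a basis formed by a subset of the replicas themselves. The basis here is chosen at random purely for illustration, whereas the paper selects it with a genetic algorithm; the replicas are synthetic.

```python
# Minimal sketch of the linear-basis step behind mc2hessian (basis chosen at random
# here; the paper selects it with a genetic algorithm). Replicas are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n_rep, n_pts = 100, 40
central = rng.normal(size=n_pts)
replicas = central + rng.normal(scale=0.1, size=(n_rep, n_pts))

n_basis = 20
basis_idx = rng.choice(n_rep, size=n_basis, replace=False)   # random stand-in for the GA
B = (replicas[basis_idx] - central).T                        # n_pts x n_basis

# Express every replica's deviation from the central value in the chosen basis.
coeffs, *_ = np.linalg.lstsq(B, (replicas - central).T, rcond=None)
reconstructed = central + (B @ coeffs).T

print("mean reconstruction residual:", np.abs(reconstructed - replicas).mean())
```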
