Results 1 - 20 of 121
1.
MAGMA; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38393541

ABSTRACT

OBJECTIVE: Diffusional kurtosis imaging (DKI) extends diffusion tensor imaging (DTI) by characterizing non-Gaussian diffusion effects, but it requires longer acquisition times. To ensure the robustness of DKI parameters, the data acquisition ordering should be optimized to allow for scan interruptions or shortening. Three methodologies were used to examine how reduced diffusion MRI scans impact DKI histogram metrics: 1) the electrostatic repulsion model (OptEEM); 2) spherical codes (OptSC); 3) random ordering (RandomTRUNC). MATERIALS AND METHODS: Pre-acquired multi-shell diffusion data from 14 healthy female volunteers (29 ± 5 years) were used to generate reordered data. For each strategy, subsets containing different fractions of the full dataset were generated. The effects of subsampling were assessed on histogram-based DKI metrics from tract-based spatial statistics (TBSS) skeletonized maps. To evaluate each subsampling method on simulated data at different SNRs and the influence of subsampling on in vivo data, we used a 3-way and a 2-way repeated-measures ANOVA, respectively. RESULTS: Simulations showed that subsampling had different effects depending on the DKI parameter, with fractional anisotropy the most stable (up to 5% error) and radial kurtosis the least stable (up to 26% error). RandomTRUNC performed the worst, while the other two strategies showed comparable results. Furthermore, the impact of subsampling varied across histogram characteristics, with the peak value the least affected (OptEEM: up to 5% error; OptSC: up to 7% error) and the peak height the most affected (OptEEM: up to 8% error; OptSC: up to 11% error). CONCLUSION: The impact of truncation depends on the specific histogram-based DKI metric. Using a strategy that optimizes the acquisition order is advisable to improve the robustness of DKI to exam interruptions.
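Both OptEEM and OptSC reorder the diffusion gradient table so that any truncated prefix of the acquisition still covers the sphere of gradient directions roughly uniformly. As a rough illustration of that idea only (not the paper's OptEEM or OptSC implementations), the sketch below greedily reorders a set of directions so that each newly scheduled direction is as far as possible, in angle, from those already scheduled; the direction set and all parameters are invented.

```python
import numpy as np

def reorder_directions(dirs):
    """Greedily reorder unit gradient directions so that every truncated prefix of
    the acquisition covers the sphere roughly uniformly (antipodal directions are
    treated as equivalent, as is usual in diffusion MRI)."""
    dirs = np.asarray(dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    remaining = list(range(len(dirs)))
    order = [remaining.pop(0)]                       # arbitrary first direction
    while remaining:
        scheduled = dirs[order]                      # (k, 3) already-acquired directions
        # angular distance of every candidate to its nearest scheduled direction
        cos = np.abs(dirs[remaining] @ scheduled.T)  # |cos| handles antipodal symmetry
        nearest = np.arccos(np.clip(cos, 0.0, 1.0)).min(axis=1)
        best = int(np.argmax(nearest))               # farthest from the scheduled set wins
        order.append(remaining.pop(best))
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(size=(60, 3))                   # toy shell of 60 random directions
    print("acquisition order:", reorder_directions(raw)[:10], "...")
```

With such an ordering, stopping the scan after any number of volumes still leaves a reasonably uniform angular coverage, which is the property the histogram-based DKI metrics benefit from.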

2.
Microsc Microanal; 30(1): 96-102, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38321738

ABSTRACT

Traditional image acquisition for cryo focused ion-beam scanning electron microscopy (FIB-SEM) tomography often captures thousands of images over many hours, producing immense data sets. When imaging beam-sensitive materials, these images are often compromised by additional constraints related to beam damage and devitrification of the material during imaging, which renders data acquisition both costly and unreliable. Subsampling and inpainting are proposed as solutions to both of these problems, allowing fast and low-dose imaging in the FIB-SEM without an appreciable loss in image quality. In this work, experimental data are presented that validate subsampling and inpainting as useful tools for convenient and reliable data acquisition in a FIB-SEM; new methods of handling three-dimensional data are employed for dictionary learning and inpainting algorithms, using newly developed microscope control software and a data recovery algorithm.

3.
Sensors (Basel); 24(9), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732937

ABSTRACT

In this article, we address the problem of synchronizing multiple analog-to-digital converter (ADC) and digital-to-analog converter (DAC) chains in a multi-channel system, which is constrained by the sampling frequency and inconsistencies among the components during system integration. To evaluate and compensate for the synchronization differences, we propose a pulse compression shape-based algorithm to measure the entire delay parameter of the ADC/DAC chain, which achieves sub-sampling resolution by mapping the shape of the discrete pulse compression peak to the signal propagation delay. Moreover, owing to the matched filtering in the pulse compression process, the algorithm exhibits good noise performance and is suitable for wireless scenarios. Experiments verified that the algorithm can achieve precise measurements with sub-sampling resolution in scenarios where the signal-to-noise ratio (SNR) is greater than -10 dB.
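The core idea, matched filtering against the transmitted pulse and reading a fractional delay off the shape of the compression peak, can be illustrated in a few lines of numpy. The sketch below is a simplified stand-in, assuming a complex baseband chirp and plain parabolic interpolation of the correlation peak rather than the authors' shape-mapping algorithm; all signal parameters are invented.

```python
import numpy as np

fs = 100e6                                     # assumed sample rate, 100 MS/s
T, B = 10e-6, 40e6                             # 10 µs chirp sweeping 40 MHz
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # complex baseband chirp

# Build a received copy delayed by a non-integer number of samples (via a
# frequency-domain phase ramp), plus additive noise.
true_delay = 17.37                             # delay in samples
N = len(chirp) + 64
X = np.fft.fft(chirp, n=N)
f = np.fft.fftfreq(N, d=1 / fs)
rx = np.fft.ifft(X * np.exp(-2j * np.pi * f * true_delay / fs))
rng = np.random.default_rng(1)
rx += 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Pulse compression = matched filtering (numpy conjugates the second argument).
corr = np.abs(np.correlate(rx, chirp, mode="full"))
k = int(np.argmax(corr))
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)    # parabolic interpolation around the peak
est = k - (len(chirp) - 1) + frac              # convert lag index to delay in samples
print(f"estimated delay: {est:.2f} samples (true: {true_delay})")
```

The fractional part recovered from the peak shape is what pushes the resolution below one sampling period, which is the property the proposed ADC/DAC chain measurement relies on.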

4.
Mil Psychol; 36(1): 96-113, 2024 01 02.
Article in English | MEDLINE | ID: mdl-38193872

ABSTRACT

Measurement invariance of psychological test batteries is an essential quality criterion when the test batteries are administered in different cultural and language contexts. The purpose of this study was to examine to what extent measurement model fit and measurement invariance across the two largest language groups in Switzerland (i.e., German and French speakers) can be assumed for selected general mental ability and personality tests used in the Swiss Armed Forces' cadre selection process. For the model fit and invariance testing, we used Bayesian structural equation modeling (BSEM). Because the sizes of the language group samples were unbalanced, we reran the invariance testing with the subsampling procedure as a robustness check. The results showed that at least partial approximate scalar invariance can be assumed for the constructs. However, comparisons in the full sample and subsamples also showed that certain test items function differently across the language groups. The results are discussed with regard to the following three issues: First, we critically discuss the applied criterion and alternative effect size measures for assessing the practical importance of non-invariances. Second, we highlight potential remedies and further testing options that can be applied once certain items have been found to function differently. Third, we discuss alternative modeling and invariance testing approaches to BSEM and outline future research avenues.


Subjects
Personality Disorders, Personality, Humans, Bayes Theorem, Latent Class Analysis, Switzerland
5.
Magn Reson Med; 90(1): 166-176, 2023 07.
Article in English | MEDLINE | ID: mdl-36961093

ABSTRACT

PURPOSE: To characterize the mechanism of formation and the removal of aliasing artifacts and edge ghosts in spatiotemporally encoded (SPEN) MRI within a k-space theoretical framework. METHODS: SPEN's quadratic phase modulation can be described in k-space by a convolution matrix whose coefficients derive from Fourier relations. This k-space model allows us to pose SPEN's reconstruction as a deconvolution process, from which aliasing and edge-ghost artifacts can be quantified by estimating the difference between a fully sampled reconstruction and reconstructions obtained from undersampled SPEN data. RESULTS: Aliasing artifacts in SPEN MRI reconstructions can be traced to image contributions corresponding to high-frequency k-space signals. The k-space picture provides the spatial displacements, phase offsets, and linear amplitude modulations associated with these artifacts, as well as routes to removing them from the reconstruction results. These new ways of estimating the artifact priors were applied to reduce SPEN reconstruction artifacts in simulated, phantom, and human brain MRI data. CONCLUSION: A k-space description of SPEN's reconstruction helps to better understand the signal characteristics of this MRI technique and to improve the quality of its resulting images.
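The starting relation of such a k-space model, that a quadratic phase applied in image space acts as a convolution on the k-space data, can be checked numerically. The following is a minimal 1-D numpy sketch under invented parameters (toy object, arbitrary chirp coefficient); it illustrates the Fourier relation only, not the paper's convolution-matrix construction or reconstruction.

```python
import numpy as np

N = 256
x = np.zeros(N)
x[96:160] = 1.0                                   # toy 1-D object (a box)
r = np.arange(N) - N // 2
alpha = 2e-3                                      # assumed quadratic-phase coefficient
chirp = np.exp(1j * alpha * r**2)                 # SPEN-like quadratic phase in image space

# k-space signal of the chirp-modulated object ...
s = np.fft.fft(x * chirp)

# ... equals the object's k-space data circularly convolved with the chirp's k-space
# kernel, since DFT(a * b) = [DFT(a) circularly convolved with DFT(b)] / N.
X, C = np.fft.fft(x), np.fft.fft(chirp)
circ_conv = np.fft.ifft(np.fft.fft(X) * np.fft.fft(C)) / N

print("max |difference|:", np.max(np.abs(s - circ_conv)))   # ~1e-10, i.e. they agree
```

The coefficients of that k-space kernel (the DFT of the quadratic phase) are what populate the convolution matrix, and undoing this convolution is the deconvolution step the abstract refers to.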


Subjects
Algorithms, Brain, Humans, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Imaging Phantoms, Artifacts, Computer-Assisted Image Processing/methods, DNA-Binding Proteins, RNA-Binding Proteins
6.
J Microsc; 290(1): 53-66, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36800515

ABSTRACT

Scanning transmission electron microscopy images can be complex to interpret on the atomic scale as the contrast is sensitive to multiple factors such as sample thickness, composition, defects and aberrations. Simulations are commonly used to validate or interpret real experimental images, but they come at a cost of either long computation times or specialist hardware such as graphics processing units. Recent works in compressive sensing for experimental STEM images have shown that it is possible to significantly reduce the amount of acquired signal and still recover the full image without significant loss of image quality, and therefore it is proposed here that similar methods can be applied to STEM simulations. In this paper, we demonstrate a method that can significantly increase the efficiency of STEM simulations through a targeted sampling strategy, along with a new approach to independently subsample each frozen phonon layer. We show the effectiveness of this method by simulating a SrTiO3 grain boundary and monolayer 2H-MoS2 containing a sulphur vacancy using the abTEM software. We also show how this method is not limited to only traditional multislice methods, but also increases the speed of the PRISM simulation method. Furthermore, we discuss the possibility for STEM simulations to seed the acquisition of real data, to potentially lead the way to self-driving (correcting) STEM.

7.
Stat Med; 42(8): 1207-1232, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36690474

ABSTRACT

We consider the design and analysis of two-phase studies aiming to assess the relation between a fixed (eg, genetic) marker and an event time under current status observation. We consider a common setting in which a phase I sample is comprised of a large cohort of individuals with outcome (ie, current status) data and a vector of inexpensive covariates. Stored biospecimens for individuals in the phase I sample can be assayed to record the marker of interest for individuals selected in a phase II sub-sample. The design challenge is then to select the phase II sub-sample in order to maximize the precision of the estimated marker effect on the time of interest under a proportional hazards model. This problem has not been examined before for current status data, and the role of the assessment time is highlighted. Inference based on likelihood and on inverse probability weighted estimating functions is considered, with designs centered on score-based residuals, extreme current status observations, or stratified sampling schemes. Data from a registry of patients with psoriatic arthritis are used in an illustration in which we study the risk of diabetes as a comorbidity.


Subjects
Psoriatic Arthritis, Research Design, Humans, Computer Simulation, Proportional Hazards Models, Probability
8.
J Surg Res; 283: 733-742, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36463812

ABSTRACT

INTRODUCTION: Magnetic resonance angiography (MRA) with the differential subsampling with cartesian ordering (DISCO) imaging technique is rarely used in anterolateral thigh (ALT) flap surgery. In our series, the MRA DISCO imaging technique was used as a tool to customize ALT flaps. The aim of this study was to report the accuracy of cutaneous perforators identified by MRA DISCO imaging. METHODS: Nineteen patients underwent MRA DISCO imaging for perforator mapping before ALT flap transfer. A total of 38 ALT regions were studied on the MRA DISCO images. Flap thinning was performed under the guidance of MRA DISCO imaging. RESULTS: The lateral circumflex femoral artery (LCFA) most commonly stems from the deep femoral artery (84.2%), followed by the common femoral artery (15.8%). The average number of perforator vessels per LCFA was 10.2 ± 1.7. A distinct oblique branch was observed in 16 of the 38 ALT regions (42.1%). Among the 19 ALT flaps harvested, 5 were septocutaneous perforator flaps and 14 were musculocutaneous perforator flaps. Ten were harvested based on the descending branch, and 3 used the oblique branch as the flap vascular pedicle. In addition, the course and types of the perforator vessels displayed on the DISCO images were consistent with the intraoperative findings in 18 of the 19 flaps, an accuracy of 94.7%. CONCLUSIONS: The state of the cutaneous perforators of the LCFA can be identified on MRA DISCO images. The 3D-CE-MRA DISCO imaging technique is a practical method that can improve the design and customization of ALT flaps for individualized reconstruction.


Subjects
Perforator Flap, Thigh, Humans, Thigh/surgery, Magnetic Resonance Angiography, Lower Extremity, Skin
9.
Article in English | MEDLINE | ID: mdl-37090139

ABSTRACT

A novel variable selection method for low-dimensional generalized linear models is introduced. The new approach, called AIC OPTimization via STABility Selection (OPT-STABS), repeatedly subsamples the data, minimizes Akaike's Information Criterion (AIC) over a sequence of nested models for each subsample, and includes in the final model those predictors selected in the minimum-AIC model in a large fraction of the subsamples. New methods are also introduced to establish an optimal variable selection cutoff over repeated subsamples. An extensive simulation study examining a variety of proposed variable selection methods shows that, although no single method uniformly outperforms the others in all the scenarios considered, OPT-STABS is consistently among the best-performing methods in most settings and performs competitively in the rest. This is in contrast to other candidate methods, which either have poor performance across the board or exhibit good performance in some settings but very poor performance in others. In addition, the asymptotic properties of the OPT-STABS estimator are derived, and its root-n consistency and asymptotic normality are proved. The methods are applied to two datasets involving logistic and Poisson regressions.
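A stripped-down version of the subsample/minimum-AIC loop is easy to prototype. The sketch below is only a toy rendition of the idea for a logistic GLM, and it assumes choices the paper does not prescribe: nested models are formed by ranking predictors on their absolute correlation with the outcome inside each subsample, the subsample fraction is 50%, and the inclusion cutoff is fixed at 0.7 rather than optimized.

```python
import numpy as np
import statsmodels.api as sm

def opt_stabs_like(X, y, n_sub=100, frac=0.5, threshold=0.7, seed=0):
    """Toy subsample/min-AIC stability selection for a logistic GLM."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        Xs, ys = X[idx], y[idx]
        # rank predictors by |correlation with y| to define the nested sequence
        order = np.argsort(-np.abs([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(p)]))
        best_aic, best_set = np.inf, ()
        for k in range(1, p + 1):                   # sequence of nested models
            cols = order[:k]
            fit = sm.Logit(ys, sm.add_constant(Xs[:, cols])).fit(disp=0)
            if fit.aic < best_aic:
                best_aic, best_set = fit.aic, tuple(cols)
        counts[list(best_set)] += 1                 # record the min-AIC model's predictors
    freq = counts / n_sub
    return np.where(freq >= threshold)[0], freq

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 400, 8
    X = rng.normal(size=(n, p))
    logits = 1.2 * X[:, 0] - 0.8 * X[:, 2]          # only predictors 0 and 2 matter
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    selected, freq = opt_stabs_like(X, y)
    print("selected predictors:", selected)
    print("inclusion frequencies:", np.round(freq, 2))
```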

10.
J Environ Manage; 331: 117260, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36681029

ABSTRACT

The scope of this study is the setup of an integrated, cost-effective sampling and laboratory analysis procedure that delineates sampling, sub-sampling, and analytical uncertainties in the case of fine-grained extractive waste deposits. The procedure is designed to support decision makers in the upcycling and land reclamation of fine-grained waste deposits. It consists of a balanced replicated sampling design (BRSD) coupled with three-split-level ANOVA (3L-ANOVA) data processing. The paper provides the mathematical background of the 3L-ANOVA and an Excel algorithm for its implementation. The paper also presents an example application of the developed methods to an old Romanian iron ore tailings (IOT) pond. The findings of the paper are as follows: a) based on OM, SEM-EDS, XRFS, and XRD observations, classical TOS is ineffective for fine-grained waste deposits; b) BRSD in conjunction with 3L-ANOVA is the only approach fit for reliable characterization of fine-grained stockpiles; c) sampling uncertainty is the critical factor in the uncertainty budget of the analyte concentration; d) the Lilliefors approach is adequate for testing the hypothesis of whether the measurand is normally distributed; e) the outcomes of the BRSD and 3L-ANOVA investigations carried out on the Teliuc tailings, estimated at circa 5.5 × 10⁶ m³, consist mainly of mineral quantification at lot level, i.e., quartz ~54% (±7%), hematite ~15% (±3%), calcite ~11% (±3%), MgO 3% (±1%), Al2O3 9% (±2%). The concentrations of some CRMs such as Ti, V, Ba, Y, and W were found at ACE limits, and their associated relative expanded uncertainties exceed 50%; thus, the expanded uncertainties clearly depict the reliability of the acquired data for decision makers regarding waste valorization; f) the IOT at Teliuc can be upcycled as minerals for the cement and ceramic industries as well as for geopolymer manufacture, and can also be downcycled as filler in road construction and mine closure. Finally, the Teliuc yard can be rehabilitated with zero waste left behind. The exactness of the data provided by this procedure can be increased to any desirable level by increasing the number of collected items, but the cost of sampling and analyses increases proportionally. In such circumstances, the proposed approach can be tailored at the stakeholder's request so as to safely underpin the decision to turn fine-grained by-products into valuable secondary resources, facilitating greater circularity in the mining industry.
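The variance decomposition behind such a procedure can be sketched with the standard balanced nested-ANOVA formulas: field samples are split into subsamples, each subsample is analyzed in replicate, and the mean squares at the three levels are converted into sampling, sub-sampling, and analytical variance components. The numpy sketch below applies those textbook formulas to simulated data; it illustrates the general approach only, not the paper's 3L-ANOVA Excel algorithm, and all numbers are invented.

```python
import numpy as np

def nested_anova_3level(x):
    """Balanced nested ANOVA for x of shape (S samples, K subsamples, R replicate
    analyses). Returns variance-component estimates for the sampling, sub-sampling
    and analytical stages (negative estimates are truncated at zero)."""
    S, K, R = x.shape
    grand = x.mean()
    m_s = x.mean(axis=(1, 2))                        # sample means
    m_sk = x.mean(axis=2)                            # subsample means
    ms_samp = K * R * np.sum((m_s - grand) ** 2) / (S - 1)
    ms_sub = R * np.sum((m_sk - m_s[:, None]) ** 2) / (S * (K - 1))
    ms_ana = np.sum((x - m_sk[:, :, None]) ** 2) / (S * K * (R - 1))
    var_ana = ms_ana
    var_sub = max((ms_sub - ms_ana) / R, 0.0)
    var_samp = max((ms_samp - ms_sub) / (K * R), 0.0)
    return var_samp, var_sub, var_ana

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, K, R = 10, 2, 2                               # 10 field samples, 2 splits, 2 analyses
    true = dict(sampling=4.0, subsampling=1.0, analytical=0.25)
    x = (50                                          # lot mean, e.g. % quartz
         + rng.normal(0, np.sqrt(true["sampling"]), (S, 1, 1))
         + rng.normal(0, np.sqrt(true["subsampling"]), (S, K, 1))
         + rng.normal(0, np.sqrt(true["analytical"]), (S, K, R)))
    est = nested_anova_3level(x)
    print(dict(zip(["sampling", "subsampling", "analytical"], np.round(est, 2))))
```

In this decomposition the sampling component usually dominates the uncertainty budget, which is finding c) of the abstract.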


Subjects
Iron Compounds, Ponds, Romania, Reproducibility of Results, Minerals/analysis
11.
Mol Phylogenet Evol; 168: 107378, 2022 03.
Article in English | MEDLINE | ID: mdl-34968680

ABSTRACT

Excepting a handful of nodes, phylogenetic relationships between chelicerate orders remain poorly resolved, due to both the incidence of long-branch attraction artifacts and the limited sampling of key lineages. It has recently been shown that increasing the representation of basal nodes plays an outsized role in resolving the higher-level placement of long-branch chelicerate orders. Two lineages have been consistently undersampled in chelicerate phylogeny. First, sampling of the miniaturized order Palpigradi has been restricted to a fragmentary transcriptome of a single species. Second, sampling of Opilioacariformes, a rarely encountered and key group of Parasitiformes, has been restricted to a single exemplar. These two lineages exhibit dissimilar properties with respect to branch length; Opilioacariformes shows a relatively low evolutionary rate compared to other Parasitiformes, whereas Palpigradi possibly acts as another long-branch order (an effect that may be conflated with the degree of missing data). To assess these properties and their effects on tree stability, we constructed a phylogenomic dataset of Chelicerata wherein both lineages were sampled with three terminals, increasing the representation of these taxa per locus. We examined the effect of subsampling phylogenomic matrices using (1) taxon occupancy, (2) evolutionary rate, and (3) a principal components-based approach. We further explored the impact of taxon deletion experiments that mitigate the effect of long branches. Here, we show that Palpigradi constitutes a fourth long-branch chelicerate order (together with Acariformes, Parasitiformes, and Pseudoscorpiones), which further destabilizes the chelicerate backbone topology. By contrast, the slow-evolving Opilioacariformes were consistently recovered within Parasitiformes, with certain subsampling practices recovering their placement as the sister group to the remaining Parasitiformes. Whereas the inclusion of Opilioacariformes always resulted in the supported non-monophyly of Acari, deletion of Opilioacariformes from the datasets consistently recovered the monophyly of Acari, except in matrices constructed on the basis of evolutionary rate. Our results strongly suggest that Acari is an artifact of long-branch attraction.


Subjects
Mites and Ticks, Arachnids, Animals, Biological Evolution, Phylogeny
12.
Ecol Appl; 32(1): e02462, 2022 01.
Article in English | MEDLINE | ID: mdl-34614257

ABSTRACT

Conservation introductions to islands and fenced enclosures are increasing as in situ mitigations fail to keep pace with population declines. Few studies consider the potential loss of genetic diversity and increased inbreeding if released individuals breed disproportionately. As funding is limited and post-release monitoring expensive for conservation programs, understanding how sampling effort influences estimates of reproductive variance is useful. To investigate this relationship, we used a well-studied population of Tasmanian devils (Sarcophilus harrisii) introduced to Maria Island, Tasmania, Australia. Pedigree reconstruction based on molecular data revealed high variance in number of offspring per breeder and high proportions of unsuccessful individuals. Computational subsampling of 20%, 40%, 60%, and 80% of observed offspring resulted in inaccurate estimates of reproductive variance compared to the pedigree reconstructed with all sampled individuals. With decreased sampling effort, the proportion of inferred unsuccessful individuals was overestimated and the variance in number of offspring per breeder was underestimated. To accurately estimate reproductive variance, we recommend sampling as many individuals as logistically possible during the early stages of population establishment. Further, we recommend careful selection of colonizing individuals as they may be disproportionately represented in subsequent generations. Within the conservation management context, our results highlight important considerations for sample collection and post-release monitoring during population establishment.
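The core observation, that subsampling offspring inflates the apparent proportion of unsuccessful breeders and deflates the variance in offspring number per breeder, can be reproduced with a toy simulation. The numpy sketch below is purely illustrative: it is not the paper's pedigree-reconstruction analysis, and the offspring distribution and breeder numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_breeders = 60
# Skewed reproductive success: many unsuccessful breeders, a few very successful ones.
offspring_per_breeder = rng.negative_binomial(n=0.6, p=0.2, size=n_breeders)
parents = np.repeat(np.arange(n_breeders), offspring_per_breeder)  # one entry per offspring

print(f"{'sampled':>8} {'var(offspring)':>15} {'% unsuccessful':>15}")
for frac in (1.0, 0.8, 0.6, 0.4, 0.2):
    k = int(frac * parents.size)
    sampled = rng.choice(parents, size=k, replace=False)   # offspring actually genotyped
    counts = np.bincount(sampled, minlength=n_breeders)    # inferred offspring per breeder
    print(f"{frac:>8.0%} {counts.var(ddof=1):>15.2f} {np.mean(counts == 0):>15.0%}")
```

As the sampled fraction drops, the inferred variance shrinks and the apparent share of unsuccessful breeders grows, matching the direction of bias reported in the abstract.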


Subjects
Marsupials, Animals, Australia, Breeding, Humans, Marsupials/genetics, Reproduction, Tasmania
13.
Sensors (Basel); 23(1), 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36616660

ABSTRACT

A frequency downscaling technique for enhancing the accuracy of analog lock-in amplifier (LIA) architectures in giant magneto-impedance (GMI) sensor applications is presented in this paper. As a proof of concept, the proposed method is applied to two different LIA topologies using, respectively, analog and switching-based multiplication for phase-sensitive detection. Specifically, the operating frequency of both the input and the reference signals of the phase-sensitive detector (PSD) block of the LIA is reduced through a subsampling process using sample-and-hold (SH) circuits. A frequency downscaling from 200 kHz, which is the optimal operating frequency of the employed GMI sensor, to 1 kHz was performed. In this way, the proposed technique exploits the inherent advantages of analog signal multiplication at low frequencies, while the principle of operation of the PSD remains unaltered. The circuits were assembled using discrete components, and the frequency downscaling proposal was experimentally validated by comparing the measurement accuracy with that of the equivalent conventional circuits. The experimental results revealed that the error in signal magnitude measurements was reduced by a factor of 8 in the case of the analog multipliers and by a factor of 21 when a PSD based on switched multipliers was used. The error in phase detection using a two-phase LIA was also reduced by more than 25%.
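The downscaling principle, subsampling the 200 kHz signal with a sample-and-hold clock offset by 1 kHz so that it aliases down to a 1 kHz intermediate frequency with amplitude and phase preserved, can be demonstrated behaviourally in a few lines. The numpy sketch below is an illustration only, with invented amplitude, phase, noise level, and clock rate (a 199 kHz SH clock is assumed purely for the example).

```python
import numpy as np

f_sig = 200e3                      # GMI excitation frequency (200 kHz)
f_sh = 199e3                       # sample-and-hold clock: 200 kHz aliases to 1 kHz
f_if = f_sig - f_sh                # 1 kHz intermediate frequency seen by the PSD
A, phi = 0.8, np.deg2rad(30)       # assumed amplitude and phase to recover

n = np.arange(int(0.1 * f_sh))     # 100 ms of SH samples (100 full 1 kHz cycles)
t = n / f_sh
rng = np.random.default_rng(0)
v = A * np.cos(2 * np.pi * f_sig * t + phi) + 0.01 * rng.normal(size=t.size)

# Phase-sensitive detection now runs at 1 kHz instead of 200 kHz.
i_ref = np.cos(2 * np.pi * f_if * t)
q_ref = np.sin(2 * np.pi * f_if * t)
I = 2 * np.mean(v * i_ref)         # averaging over whole cycles acts as the low-pass filter
Q = 2 * np.mean(v * q_ref)
print(f"amplitude ≈ {np.hypot(I, Q):.3f} (true {A}), "
      f"phase ≈ {np.degrees(np.arctan2(-Q, I)):.1f}° (true 30°)")
```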

14.
Sensors (Basel); 22(21), 2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36366059

ABSTRACT

Bayer color filter array (CFA) images are captured by a single-chip image sensor covered with a Bayer CFA pattern, which is widely used in modern digital cameras. In the past two decades, many compression methods have been proposed to compress Bayer CFA images. These compression methods can be roughly divided into the compression-first (CF-based) scheme and the demosaicing-first (DF-based) scheme. However, no review article on the two compression schemes and their compression performance has been reported in the literature. In this article, the related CF-based and DF-based compression works are reviewed first. Then, testing Bayer CFA images created from the Kodak, IMAX, screen content image, video, and classical image datasets are compressed on the Joint Photographic Experts Group-2000 (JPEG-2000) platform and the newly released Versatile Video Coding (VVC) platform VTM-16.2. In terms of commonly used objective quality metrics, perceptual quality metrics, the perceptual effect, and the quality-bitrate tradeoff, the compression performance of the CF-based compression methods, in particular the reversible color transform-based methods, and of the DF-based compression methods is compared and discussed.

15.
Entropy (Basel); 25(1), 2022 Dec 31.
Article in English | MEDLINE | ID: mdl-36673225

ABSTRACT

Optimal subsampling is a statistical methodology for generalized linear models (GLMs) that enables fast inference about parameter estimates in massive-data regression. The existing literature considers only bounded covariates. In this paper, the asymptotic normality of the subsampling M-estimator based on the Fisher information matrix is obtained. We then study the asymptotic properties of subsampling estimators for unbounded GLMs with nonnatural links, including both conditional and unconditional asymptotic properties.
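The generic optimal-subsampling recipe this line of work builds on has three steps: fit a pilot model on a small uniform subsample, assign each observation a subsampling probability driven by the pilot fit, then refit on the informative subsample with inverse-probability weights. The sketch below illustrates those steps for a logistic GLM using the common |y − p̂|·‖x‖-style probabilities; it is a generic illustration on invented data, not the estimator or the unbounded-covariate setting analyzed in this paper.

```python
import numpy as np

def weighted_logit(X, y, w, iters=30):
    """Weighted logistic-regression MLE via Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n, d = 200_000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, d))])
beta_true = np.array([0.5, 1.0, -1.0, 0.5, 0.0, -0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Step 1: pilot estimate from a small uniform subsample.
pilot_idx = rng.choice(n, 1_000, replace=False)
beta_pilot = weighted_logit(X[pilot_idx], y[pilot_idx], np.ones(1_000))

# Step 2: subsampling probabilities proportional to |y_i - p_i| * ||x_i||.
p_hat = 1 / (1 + np.exp(-X @ beta_pilot))
score = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
pi = score / score.sum()

# Step 3: draw the informative subsample and refit with inverse-probability weights.
sub = rng.choice(n, 5_000, replace=True, p=pi)
beta_sub = weighted_logit(X[sub], y[sub], 1 / (n * pi[sub]))
print("true:     ", beta_true)
print("subsample:", np.round(beta_sub, 3))
```

The weighted refit on a few thousand points approximates the full-data estimate at a small fraction of the computational cost, which is the practical motivation for the asymptotic theory the abstract describes.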

16.
Mol Biol Evol; 37(10): 3061-3075, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32492139

ABSTRACT

Next-generation sequencing of pathogen quasispecies within a host yields data sets of tens to hundreds of unique sequences. However, the full data set often contains thousands of sequences, because many of those unique sequences have multiple identical copies. Data sets of this size represent a computational challenge for currently available Bayesian phylogenetic and phylodynamic methods. Through simulations, we explore how large data sets with duplicate sequences affect the speed and accuracy of phylogenetic and phylodynamic analysis within BEAST 2. We show that using unique sequences only leads to biases, and using a random subset of sequences yields imprecise parameter estimates. To overcome these shortcomings, we introduce PIQMEE, a BEAST 2 add-on that produces reliable parameter estimates from full data sets with increased computational efficiency as compared with the currently available methods within BEAST 2. The principle behind PIQMEE is to resolve the tree structure of the unique sequences only, while simultaneously estimating the branching times of the duplicate sequences. Distinguishing between unique and duplicate sequences allows our method to perform well even for very large data sets. Although the classic method converges poorly for data sets of 6,000 sequences when allowed to run for 7 days, our method converges in slightly more than 1 day. In fact, PIQMEE can handle data sets of around 21,000 sequences with 20 unique sequences in 14 days. Finally, we apply the method to a real, within-host HIV sequencing data set with several thousand sequences per patient.


Subjects
Bayes Theorem, Genetic Techniques, Genetic Models, Phylogeny, Software, Datasets as Topic
17.
Eur J Neurosci; 53(2): 485-498, 2021 01.
Article in English | MEDLINE | ID: mdl-32794296

ABSTRACT

The analysis of real-world networks of neurons is biased by the current ability to measure just a subsample of the entire network. It is thus relevant to understand whether the information gained in the subsamples can be extended to the global network to improve functional interpretations. Here we showed how the average clustering coefficient (CC), average path length (PL), and small-world propensity (SWP) scale when spatial sampling is applied to small-world networks. This extraction mimics the measurement of physical neighbors by means of electrical and optical techniques, both used to study neuronal networks. We applied this method to in silico and in vivo data and found that the analyzed properties scale with the size of the sampled network and the global network topology. By means of mathematical manipulations, the topology dependence was reduced during scaling. We highlighted the behaviors of the descriptors that, qualitatively, are shared by all the analyzed networks and that allowed an approximate prediction of those descriptors in the global graph using the subgraph information. In contrast, below a spatial threshold, any extrapolation failed; the subgraphs no longer contained enough information to make predictions. In conclusion, the size of the chosen subgraphs is critical for extending the findings to the global network.
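How such descriptors drift as the sampled fraction shrinks can be explored with a few lines of networkx. The sketch below uses a Watts-Strogatz small-world graph as a stand-in for the full network and samples blocks of consecutive ring nodes to mimic a spatial neighborhood; small-world propensity is omitted, and all sizes and parameters are invented rather than taken from the paper.

```python
import networkx as nx

# Small-world network standing in for the full ("global") neuronal network.
G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1, seed=0)

print(f"{'nodes':>6} {'avg clustering':>15} {'avg path length':>16}")
for frac in (1.0, 0.8, 0.6, 0.4, 0.2, 0.1):
    m = int(frac * G.number_of_nodes())
    # Watts-Strogatz nodes sit on a ring, so a block of consecutive indices roughly
    # mimics sampling a spatial neighbourhood of the network.
    sub = G.subgraph(range(m))
    cc = nx.average_clustering(sub)
    # use the largest connected component so the path length is defined
    giant = sub.subgraph(max(nx.connected_components(sub), key=len))
    pl = nx.average_shortest_path_length(giant)
    print(f"{m:>6} {cc:>15.3f} {pl:>16.2f}")
```

Tracking how the subgraph values depart from the full-graph values as the block shrinks gives a feel for the spatial threshold below which extrapolation to the global network breaks down.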


Subjects
Neurons, Cluster Analysis, Computer Simulation
18.
Stat Med; 40(2): 369-381, 2021 01 30.
Article in English | MEDLINE | ID: mdl-33089538

ABSTRACT

Statistical models are often fitted to obtain a concise description of the association of an outcome variable with some covariates. Even if background knowledge is available to guide preselection of covariates, stepwise variable selection is commonly applied to remove irrelevant ones. This practice may introduce additional variability, and selection is rarely certain. However, these issues are often ignored and model stability is not questioned. Several resampling-based measures were proposed to describe model stability, including variable inclusion frequencies (VIFs), model selection frequencies, relative conditional bias (RCB), and root mean squared difference ratio (RMSDR). The latter two were recently proposed to assess bias and variance inflation induced by variable selection. Here, we study the consistency and accuracy of resampling estimates of these measures and the optimal choice of the resampling technique. In particular, we compare subsampling and bootstrapping for assessing the stability of linear, logistic, and Cox models obtained by backward elimination in a simulation study. Moreover, we exemplify the estimation and interpretation of all suggested measures in a study on cardiovascular risk. The VIF and the model selection frequency are only consistently estimated in the subsampling approach. By contrast, the bootstrap is advantageous in terms of bias and precision for estimating the RCB as well as the RMSDR. However, unbiased estimation of the latter quantity requires independence of the covariates, which is rarely encountered in practice. Our study stresses the importance of addressing model stability after variable selection and shows how to cope with it.


Subjects
Statistical Models, Computer Simulation, Humans, Proportional Hazards Models
19.
Stat Med; 40(2): 441-450, 2021 01 30.
Article in English | MEDLINE | ID: mdl-33145780

ABSTRACT

For massive survival data, we propose a subsampling algorithm to efficiently approximate the estimates of regression parameters in the additive hazards model. We establish consistency and asymptotic normality of the subsample-based estimator given the full data. The optimal subsampling probabilities are obtained by minimizing the asymptotic variance of the resulting estimator. The subsample-based procedure can largely reduce the computational cost compared with the full-data method. In numerical simulations, our method has low bias and satisfactory coverage probabilities. We provide an illustrative example of survival analysis for patients with lymphoma from the Surveillance, Epidemiology, and End Results Program.


Subjects
Algorithms, Bias, Humans, Probability, Proportional Hazards Models, Survival Analysis
20.
Sensors (Basel); 21(20), 2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34696037

ABSTRACT

Sampling-based PLLs have become a new research trend because they make it possible to remove the frequency divider (FDIV) from the feedback path, where the FDIV increases the contribution of in-band noise by a factor of the dividing ratio squared (N²). Of the two possible sampling methods, sub-sampling and reference-sampling, the latter provides a relatively wide locking range, as the slower input reference signal is sampled with the faster VCO output signal. However, removing the FDIV makes it infeasible for the PLL to implement fractional-N operation by varying the divider ratio through random sequence generators such as a Delta-Sigma Modulator (DSM). To address the above design challenges, we propose a reference-sampling-based calibration-free fractional-N PLL (RSFPLL) with a phase-interpolator-linked sampling clock generator (PSCG). The proposed RSFPLL achieves fractional-N operation through phase-interpolator (PI)-based multi-phase generation instead of a typical frequency divider or digital-to-time converter (DTC). In addition, to alleviate the power burden arising from VCO-rate sampling, a flexible mask window generation method is used that passes only a few sampling clocks near the point of interest. The prototype PLL system is designed in a 65 nm CMOS process with a chip size of 0.42 mm². It achieves 322 fs rms jitter, a -240.7 dB figure-of-merit (FoM), and -44.06 dBc fractional spurs with 8.17 mW power consumption.
