Results 1 - 20 of 158
1.
Network ; : 1-57, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38913877

ABSTRACT

The purpose of this paper is to test the performance of the recently proposed weighted superposition attraction-repulsion algorithms (WSA and WSAR) on unconstrained continuous optimization test problems and constrained optimization problems. WSAR is a successor of the weighted superposition attraction algorithm (WSA). WSAR is built upon the superposition principle from physics and mimics the attractive and repulsive movements of solution agents (vectors). Unlike WSA, WSAR also considers repulsive movements, with updated solution-move equations. WSAR requires very few algorithm-specific parameters to be set and has good convergence and searching capability. Through extensive computational tests on many benchmark problems, including the CEC'2015 and CEC'2020 suites, the performance of WSAR is compared against WSA and other metaheuristic algorithms. It is statistically shown that WSAR is able to produce good, competitive results in comparison to its predecessor WSA and other metaheuristic algorithms.
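The abstract above does not give the WSA/WSAR move equations, so the following is only a generic sketch of the underlying idea: agents move toward (attraction) or away from (repulsion) a rank-weighted superposition of the population. All function names, parameters (step, tau) and the sphere test function are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def sphere(x):
    """Toy objective: f(x) = sum(x_i^2), minimum at the origin."""
    return np.sum(x ** 2, axis=-1)

def superposition_step(pop, fit, lb, ub, step=0.35, tau=0.8, rng=None):
    """One attraction-repulsion move toward a rank-weighted superposition of the
    population (an illustrative sketch only, not the published WSA/WSAR update)."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(fit)                       # best agent gets rank 1
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(fit) + 1)
    w = ranks ** (-tau)
    w /= w.sum()
    target = w @ pop                              # weighted superposition point
    attract = sphere(target) < fit                # move toward target only if it is better
    direction = target - pop
    noise = step * rng.uniform(-1, 1, pop.shape) * (ub - lb)
    new = np.where(attract[:, None], pop + step * direction, pop - step * direction + noise)
    return np.clip(new, lb, ub)

rng = np.random.default_rng(0)
lb, ub, dim, n_agents = -5.0, 5.0, 10, 30
pop = rng.uniform(lb, ub, (n_agents, dim))
for _ in range(200):
    pop = superposition_step(pop, sphere(pop), lb, ub, rng=rng)
print("best value found:", sphere(pop).min())
```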

2.
J Comput Chem ; 44(30): 2358-2368, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37635671

ABSTRACT

With the rise of quantum mechanical/molecular mechanical (QM/MM) methods, the interest in the calculation of molecular assemblies has increased considerably. The structures and dynamics of such assemblies are usually governed to a large extent by intermolecular interactions. As a result, the corresponding potential energy surfaces are topologically rich and possess many shallow minima. Therefore, local structure optimizations of QM/MM molecular assemblies can be challenging, in particular if optimization constraints are imposed. To overcome this problem, structure optimization in normal coordinate space is advocated. To do so, the external degrees of freedom of a molecule are separated from the internal ones by a projector matrix in the space of the Cartesian coordinates. Here we extend this approach to Cartesian constraints. To this end, we devise an algorithm that adds the Cartesian constraints directly to the projector matrix and in this way eliminates them from the reduced coordinate space in which the molecule is optimized. To analyze the performance and stability of the constrained optimization algorithm in normal coordinate space, we present constrained minimizations of small molecular systems and amino acids in the gas phase as well as in water, employing QM/MM constrained optimizations. All calculations are performed in the framework of auxiliary density functional theory as implemented in the program deMon2k.
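As a rough illustration of optimizing in a reduced coordinate space, the sketch below builds a projector that removes rigid translations plus user-chosen frozen Cartesian components and then takes projected steepest-descent steps on a toy pairwise potential. It is a simplification under stated assumptions (no rotations, no normal-coordinate transformation, no QM/MM energy); the potential, step size and frozen indices are made up for the example.

```python
import numpy as np

def pair_energy_grad(x):
    """Toy Lennard-Jones-like energy and gradient for N atoms; x has shape (3N,).
    Illustrative stand-in for a QM/MM energy function."""
    pos = x.reshape(-1, 3)
    e, g = 0.0, np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            r2 = d @ d
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
            dEdpos_i = -24.0 * (2.0 * inv6 ** 2 - inv6) / r2 * d
            g[i] += dEdpos_i
            g[j] -= dEdpos_i
    return e, g.ravel()

def constraint_projector(n_atoms, frozen):
    """Projector removing overall translations plus the Cartesian components
    listed in `frozen` (indices into the 3N coordinate vector)."""
    dim = 3 * n_atoms
    cols = []
    for k in range(3):                            # three rigid translations
        t = np.zeros(dim); t[k::3] = 1.0
        cols.append(t)
    for idx in frozen:                            # constrained Cartesian components
        u = np.zeros(dim); u[idx] = 1.0
        cols.append(u)
    V, _ = np.linalg.qr(np.array(cols).T)         # orthonormal basis of the removed space
    return np.eye(dim) - V @ V.T

x = np.array([0.0, 0.0, 0.0, 1.2, 0.0, 0.0, 0.6, 1.1, 0.0])  # three atoms
P = constraint_projector(3, frozen=[3])           # pin the x-coordinate of atom 1
for _ in range(500):                              # projected steepest descent
    e, g = pair_energy_grad(x)
    x = x - 0.01 * (P @ g)
print("final energy:", pair_energy_grad(x)[0], "  pinned coordinate:", x[3])
```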

3.
Biometrics ; 79(3): 1646-1656, 2023 09.
Article in English | MEDLINE | ID: mdl-36124563

ABSTRACT

The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. As a non-parametric model, the additive hazards model offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper, we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We show that the maximum likelihood estimator may be obtained by separately maximizing the log-likelihood contribution of each event time point, and that the maximization problem is equivalent to fitting a series of Poisson regression models with an identity link under non-negativity constraints. We derive an analytic solution to the maximum likelihood estimator. We contrast the maximum likelihood estimator with the ordinary least-squares estimator in a simulation study and show that the maximum likelihood estimator has smaller mean squared error than the ordinary least-squares estimator. An illustration with data on patients with carcinoma of the oropharynx is provided.


Subjects
Proportional Hazards Models, Humans, Likelihood Functions, Least-Squares Analysis, Computer Simulation
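The equivalence mentioned in the abstract above, fitting identity-link Poisson models under non-negativity constraints, can be sketched as follows for a single model with simulated data; this illustrates the constrained-likelihood idea only and does not reproduce the paper's per-event-time decomposition or its analytic solution. The data-generating values and the SciPy solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])  # intercept + bounded covariate
beta_true = np.array([0.5, 1.0])
y = rng.poisson(X @ beta_true)                            # identity-link Poisson outcome

def negloglik(beta):
    mu = X @ beta
    mu = np.clip(mu, 1e-10, None)       # guard the log during intermediate iterates
    return -(y * np.log(mu) - mu).sum()

# non-negativity of the fitted rate at every observed covariate value: X @ beta >= 0
nonneg = LinearConstraint(X, lb=np.zeros(n))
fit = minimize(negloglik, x0=np.array([1.0, 0.0]),
               method="trust-constr", constraints=[nonneg])
print("constrained identity-link Poisson MLE:", fit.x)
```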
4.
Biometrics ; 79(3): 2260-2271, 2023 09.
Article in English | MEDLINE | ID: mdl-36063542

ABSTRACT

A dynamic treatment regime (DTR) is a sequence of decision rules that provide guidance on how to treat individuals based on their static and time-varying status. Existing observational data are often used to generate hypotheses about effective DTRs. A common challenge with observational data, however, is the need for analysts to consider "restrictions" on the treatment sequences. Such restrictions may be necessary for settings where (1) one or more treatment sequences that were offered to individuals when the data were collected are no longer considered viable in practice, (2) specific treatment sequences are no longer available, or (3) the scientific focus of the analysis concerns a specific type of treatment sequence (e.g., "stepped-up" treatments). To address this challenge, we propose a restricted tree-based reinforcement learning (RT-RL) method that searches for an interpretable DTR with the maximum expected outcome, given a (set of) user-specified restriction(s), which specifies the treatment options (at each stage) that ought not to be considered as part of the estimated tree-based DTR. In simulations, we evaluate the performance of RT-RL versus the standard approach of ignoring the partial data for individuals not following the (set of) restriction(s). The method is illustrated using an observational data set to estimate a two-stage stepped-up DTR for guiding the level of care placement for adolescents with substance use disorder.


Subjects
Clinical Decision-Making, Machine Learning, Therapeutics, Humans
5.
BMC Med Res Methodol ; 23(1): 4, 2023 01 07.
Article in English | MEDLINE | ID: mdl-36611135

ABSTRACT

Clinical information collected in electronic health records (EHRs) is becoming an essential source for emulating randomized experiments. Since patients do not interact with the healthcare system at random, the longitudinal information in large observational databases must account for irregular visits. Moreover, we also need to account for subject-specific unmeasured confounders, which may act as a common cause of the treatment assignment mechanism (e.g., glucose-lowering medications) while also influencing the outcome (e.g., hemoglobin A1c). We used calibration of the longitudinal weights to improve the finite-sample properties and to account for subject-specific unmeasured confounders. A Monte Carlo simulation study was conducted to evaluate the performance of calibrated inverse probability estimators under time-dependent treatment assignment and irregular visits with subject-specific unmeasured confounders. The simulation study showed that the longitudinal weights with calibrated restrictions reduced the finite-sample bias compared to the stabilized weights. The application of the calibrated weights is demonstrated using the exposure of glucose-lowering medications and the longitudinal outcome of hemoglobin A1c. Our results support the effectiveness of glucose-lowering medications in reducing hemoglobin A1c among type II diabetes patients with an elevated glycemic index ([Formula: see text]) using stabilized and calibrated weights.


Subjects
Type 2 Diabetes Mellitus, Statistical Models, Humans, Type 2 Diabetes Mellitus/drug therapy, Glycated Hemoglobin, Probability, Computer Simulation, Glucose/therapeutic use, Structural Models
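Below is a minimal, hypothetical illustration of weight calibration in the spirit described in the abstract above: plain inverse-probability weights are exponentially tilted so that weighted covariate totals reproduce fixed benchmark totals. It is a single-time-point sketch with simulated data and made-up covariates, not the authors' longitudinal estimator.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)
n = 1000
x = np.column_stack([np.ones(n), rng.normal(size=n)])     # calibration covariates
ps = 1 / (1 + np.exp(-(0.3 + 0.8 * x[:, 1])))              # true propensity of treatment
a = rng.binomial(1, ps)                                    # observed treatment indicator
base_w = np.where(a == 1, 1 / ps, 1 / (1 - ps))            # plain inverse-probability weights

target = x.sum(axis=0)                                     # benchmark covariate totals

def calibration_equations(lam):
    # exponentially tilted weights must reproduce the benchmark totals exactly
    w = base_w * np.exp(x @ lam)
    return x.T @ w - target

lam = root(calibration_equations, x0=np.zeros(2)).x
w_cal = base_w * np.exp(x @ lam)
print("totals with base weights:      ", x.T @ base_w)
print("totals with calibrated weights:", x.T @ w_cal)     # matches `target`
```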
6.
Health Econ ; 32(6): 1244-1255, 2023 06.
Article in English | MEDLINE | ID: mdl-36922365

ABSTRACT

This study demonstrates how the linear constrained optimization approach can be used to design a health benefits package (HBP) which maximises the net disability-adjusted life years (DALYs) averted given the health system constraints faced by a country, and how the approach can help assess the marginal value of relaxing health system constraints. In the analysis performed for Uganda, 45 interventions were included in the HBP in the base scenario, resulting in a total of 26.7 million net DALYs averted. When task shifting of pharmacists' and nutrition officers' tasks to nurses is allowed, 73 interventions were included in the HBP, resulting in a total of 32 million net DALYs averted (a 20% increase). Further, investing only $58 towards hiring additional nutrition officers' time could avert one net DALY; this increased to $60 and $64 for pharmacists and nurses respectively, and to $100,000 for expanding the consumable budget, since human resources present the main constraint on the system.


Subjects
Budgets, Humans, Cost-Benefit Analysis, Uganda, Workforce
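A linear constrained optimization of the kind described above can be sketched as a small selection problem: choose interventions to maximize net DALYs averted subject to a budget and workforce-time caps. All numbers below are invented for illustration, and the formulation (binary include/exclude decisions solved with SciPy's MILP interface, available in SciPy >= 1.9) is an assumption rather than the study's actual model.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: net DALYs averted, budget use and staff time per intervention.
dalys = np.array([120.0, 90.0, 60.0, 200.0, 40.0])   # net DALYs averted (thousands)
cost  = np.array([ 10.0,  4.0,  3.0,  18.0,  1.0])   # budget use (US$ millions)
nurse = np.array([  2.0,  1.0,  0.5,   3.0,  0.2])   # nurse time (million hours)
pharm = np.array([  0.5,  0.8,  0.1,   1.2,  0.1])   # pharmacist time (million hours)
budget, nurse_cap, pharm_cap = 25.0, 4.0, 1.5

constraints = LinearConstraint(np.vstack([cost, nurse, pharm]),
                               ub=[budget, nurse_cap, pharm_cap])
res = milp(c=-dalys,                          # milp minimizes, so negate the objective
           constraints=constraints,
           integrality=np.ones(len(dalys)),   # binary include/exclude decisions
           bounds=Bounds(0, 1))
selected = res.x.round().astype(bool)
print("interventions in the package:", np.flatnonzero(selected))
print("net DALYs averted:", dalys[selected].sum())
```

Relaxing a binding constraint (e.g., raising nurse_cap) and re-solving gives the kind of marginal-value comparison discussed in the abstract.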
7.
Clin Trials ; 20(3): 242-251, 2023 06.
Article in English | MEDLINE | ID: mdl-36825509

ABSTRACT

BACKGROUND/AIMS: The stepped-wedge design has been extensively studied in the setting of the cluster randomized trial, but less so for the individually randomized trial. This article derives the optimal allocation of individuals to treatment sequences. The focus is on designs where all individuals start in the control condition and at the beginning of each time period some of them cross over to the intervention, so that at the end of the trial all of them receive the intervention. METHODS: The statistical model that takes into account the nesting of repeated measurements within subjects is presented. It is also shown how possible attrition is taken into account. The effect of the intervention is assumed to be sustained, so that it does not change after the treatment switch. An exponential decay correlation structure is assumed, implying that the correlation between any two time points decreases with the time lag. Matrix algebra is used to derive the relation between the allocation of units to treatment sequences and the variance of the treatment effect estimator. The optimal allocation is the one that results in the smallest variance. RESULTS: Results are presented for three to six treatment sequences. It is shown that the optimal allocation depends highly on the correlation parameter ρ and the attrition rate r between any two adjacent time points. The uniform allocation, where each treatment sequence has the same number of individuals, is often not the most efficient. For 0.1≤ρ≤0.9 and r=0, 0.05, 0.2, its efficiency relative to the optimal allocation is at least 0.8. It is furthermore shown how a constrained optimal allocation can be derived when the optimal allocation is not feasible from a practical point of view. CONCLUSION: This article provides the methodology for designing individually randomized stepped-wedge designs, taking into account the possibility of attrition. As such, it helps researchers to plan their trials in an efficient way. To use the methodology, prior estimates of the degree of attrition and the intraclass correlation coefficient are needed. It is advocated that researchers clearly report the estimates of these quantities to help facilitate the planning of future trials.


Subjects
Statistical Models, Research Design, Humans, Sample Size, Time Factors, Cluster Analysis
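The variance calculation that drives such an optimal-allocation search can be sketched as follows: for a candidate allocation, build each sequence's design matrix (period effects plus a sustained treatment indicator), apply the exponential-decay correlation, and take the GLS variance of the treatment effect; a brute-force search then compares allocations. Attrition and the paper's exact parameterization are omitted, and the number of periods, number of subjects and value of ρ are illustrative assumptions.

```python
import numpy as np
from itertools import product

T = 5                              # measurement periods
switch_times = [1, 2, 3, 4]        # treatment sequence s switches at this period
rho = 0.5                          # exponential-decay correlation parameter

lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Vinv = np.linalg.inv(rho ** lags)  # inverse within-subject correlation matrix

def var_treatment_effect(alloc):
    """GLS variance of the treatment effect for an allocation over sequences."""
    info = np.zeros((T + 1, T + 1))
    for n_s, t_s in zip(alloc, switch_times):
        x = (np.arange(T) >= t_s).astype(float)   # sustained treatment indicator
        D = np.column_stack([np.eye(T), x])       # period effects + treatment effect
        info += n_s * D.T @ Vinv @ D
    return np.linalg.inv(info)[-1, -1]

# brute-force search: allocate 60 subjects to 4 sequences in steps of 5 (>= 5 each)
best = min((var_treatment_effect(a), a)
           for a in product(range(5, 50, 5), repeat=4) if sum(a) == 60)
print("uniform allocation variance:", var_treatment_effect([15, 15, 15, 15]))
print("best allocation found:", best[1], "with variance:", best[0])
```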
8.
Sensors (Basel) ; 23(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631802

ABSTRACT

In this paper, a procedure for experimental optimization under safety constraints, to be denoted as constraint-aware Bayesian optimization, is presented. The basic ingredients are a performance objective function and a constraint function, both of which are modeled as Gaussian processes. We incorporate a prior model (transfer learning) used for the mean of the Gaussian processes, a semi-parametric kernel, and acquisition function optimization under chance-constrained requirements. In this way, experimental fine-tuning of a performance objective under experiment-model mismatch can be safely carried out. The methodology is illustrated in a case study on a line-follower application in a CoppeliaSim environment.
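A bare-bones version of constraint-aware Bayesian optimization can be sketched by modeling the objective and the constraint with separate Gaussian processes and weighting expected improvement by the predicted probability of feasibility. The toy objective/constraint functions, kernel choice, grid search, and the use of scikit-learn below are assumptions; the paper's transfer-learning prior mean and semi-parametric kernel are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):        # expensive performance metric (toy stand-in)
    return np.sin(3 * x) + 0.5 * x

def constraint(x):       # safety constraint, feasible when g(x) <= 0
    return x ** 2 - 1.5

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (5, 1))                    # initial experiments
yf, yg = objective(X).ravel(), constraint(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)

for _ in range(15):
    gp_f = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-6, normalize_y=True).fit(X, yf)
    gp_g = GaussianProcessRegressor(Matern(nu=2.5), alpha=1e-6, normalize_y=True).fit(X, yg)
    mu_f, sd_f = gp_f.predict(grid, return_std=True)
    mu_g, sd_g = gp_g.predict(grid, return_std=True)
    best = yf[yg <= 0].min() if (yg <= 0).any() else yf.min()
    z = (best - mu_f) / np.maximum(sd_f, 1e-9)
    ei = (best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)   # expected improvement
    p_feas = norm.cdf(-mu_g / np.maximum(sd_g, 1e-9))       # chance the constraint holds
    x_next = grid[np.argmax(ei * p_feas)]                   # constraint-aware acquisition
    X = np.vstack([X, x_next])
    yf = np.append(yf, objective(x_next))
    yg = np.append(yg, constraint(x_next))

feasible = yg <= 0
print("best safe setting found:", X[feasible][np.argmin(yf[feasible])])
```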

9.
Biom J ; 65(2): e2200129, 2023 02.
Article in English | MEDLINE | ID: mdl-36104213

ABSTRACT

We propose a likelihood ratio test to assess whether sampling has been completed in closed population size estimation studies. More precisely, we assess whether the expected number of subjects that have never been sampled is below a user-specified threshold. The likelihood ratio test statistic has a nonstandard distribution under the null hypothesis. Critical values can be easily approximated and tabulated, and they do not depend on model specification. We illustrate the approach in a simulation study and three real data examples, one of which involves ascertainment bias of amyotrophic lateral sclerosis in Gulf War veterans.


Subjects
Amyotrophic Lateral Sclerosis, Research Design, Humans, Likelihood Functions, Population Density, Computer Simulation, Amyotrophic Lateral Sclerosis/epidemiology
10.
Magn Reson Med ; 88(2): 945-961, 2022 08.
Article in English | MEDLINE | ID: mdl-35381107

ABSTRACT

PURPOSE: The orientation distribution function (ODF), which is obtained from the radial integral of the probability density function weighted by r^n (where r is the radial length), has been used to estimate fiber orientations of white matter tissues. Currently, there is no general expression of the ODF that is suitable for any value of n in HARDI methods. THEORY AND METHODS: A novel methodology is proposed to calculate the ODF for any n > -1 through the Taylor series expansion and a generalized expression for -1 < n.

Subjects
White Matter, Algorithms, Brain/diagnostic imaging, Diffusion Magnetic Resonance Imaging/methods, Computer-Assisted Image Processing/methods, Imaging Phantoms, White Matter/diagnostic imaging
11.
Stat Med ; 2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36585040

ABSTRACT

Time-varying covariates can be important predictors when model-based predictions are considered. A Cox model that includes time-varying covariates is usually referred to as an extended Cox model. When only right censoring is present in the observed survival times, the conventional partial likelihood method is still applicable for estimating the regression coefficients of an extended Cox model. However, if there are interval-censored survival times, the partial likelihood method is not directly available unless an imputation, such as mid-point imputation, is used to replace the left- and interval-censored data. Such imputation methods, however, are well known for causing biases. This paper considers fitting extended Cox models using the maximum penalised likelihood method, allowing the observed survival times to be partly interval-censored, where a penalty function is used to regularise the baseline hazard estimate. We present simulation studies to demonstrate the performance of our proposed method, and illustrate it with applications to two real datasets from medical research.

12.
Stat Med ; 41(17): 3260-3280, 2022 07 30.
Article in English | MEDLINE | ID: mdl-35474515

ABSTRACT

Time-to-event data in medical studies may involve some patients who are cured and will never experience the event of interest. In practice, those cured patients are right censored. However, when data contain a cured fraction, standard survival methods such as Cox proportional hazards models can produce biased results and therefore misleading interpretations. In addition, for some outcomes the exact time of an event is not known; instead, an interval of time in which the event occurred is recorded. This article proposes a new computational approach that can deal with both the cured fraction issue and the interval censoring challenge. To do so, we extend the traditional mixture cure Cox model to accommodate partly interval-censored observed event times. The traditional method for estimating the model parameters is based on the expectation-maximization (EM) algorithm, where the log-likelihood is maximized through an indirect complete-data log-likelihood function. We propose in this article an alternative algorithm that directly optimizes the log-likelihood function. Extensive Monte Carlo simulations are conducted to demonstrate the performance of the new method over the EM algorithm. The main advantage of the new algorithm is the generation of asymptotic variance matrices for all the estimated parameters. The new method is applied to a thin melanoma dataset to predict melanoma recurrence. Various inferences, including survival and hazard function plots with point-wise confidence intervals, are presented. An R package is available on GitHub and will be submitted to CRAN.


Subjects
Melanoma, Algorithms, Computer Simulation, Humans, Likelihood Functions, Melanoma/drug therapy, Statistical Models, Monte Carlo Method, Proportional Hazards Models, Survival Analysis
13.
Value Health ; 25(5): 810-823, 2022 05.
Article in English | MEDLINE | ID: mdl-35221205

ABSTRACT

OBJECTIVES: To illustrate three economic evaluation methods whose value measures may be useful to decision makers considering vaccination programs. METHODS: Keyword searches identified example publications of cost-effectiveness analysis (CEA), fiscal health modeling (FHM), and constrained optimization (CO) for the economic evaluation of a vaccination program in countries where at least two of the methods had been used. We examined the extent to which different value measures may be useful for decision makers considering adoption of a new vaccination program. With these findings, we created a guide for selecting modeling approaches, illustrating the decision-maker contexts and policy objectives for which each method may be useful. RESULTS: We identified eight countries with published evaluations of vaccination programs using more than one method for four infections: influenza, human papillomavirus, rotavirus, and malaria. CEA studies targeted health system decision makers using a threshold to determine the efficiency of a new vaccination program. FHM studies targeted public sector spending decision makers, estimating lifetime changes in government tax revenue net of transfer payments. CO studies targeted decision makers selecting from a mix of options for preventing an infectious disease within budget and feasibility constraints. Cost and utility inputs, epidemiologic models, comparators, and constraints varied by modeling method. CONCLUSIONS: Although CEA's incremental cost-effectiveness ratios are critical for understanding vaccination program efficiency for all decision makers determining access and reimbursement, FHMs provide measures of the program's impact on public spending for government officials, and COs provide measures of the optimal mix of all prevention interventions for public health officials.


Subjects
Immunization Programs, Vaccination, Budgets, Cost-Benefit Analysis, Humans
14.
Proc Natl Acad Sci U S A ; 116(12): 5341-5343, 2019 03 19.
Article in English | MEDLINE | ID: mdl-30833385

ABSTRACT

The control of time-dependent, energy beam manufacturing processes has been achieved in the past through trial-and-error approaches. We identify key research gaps and generic challenges related to inverse problems for these processes that require a multidisciplinary problem-solving approach to tackle them. The generic problems that we identify have a wide range of applications in the algorithmic control of modern manufacturing processes.

15.
Sensors (Basel) ; 22(17)2022 Sep 03.
Article in English | MEDLINE | ID: mdl-36081128

ABSTRACT

This study discusses a nonlinear electrical impedance tomography (EIT) technique under different analysis conditions to propose its optimal implementation parameters. The forward problem for calculating electric potential is defined by the complete electrode model. The inverse problem for reconstructing the target electrical conductivity profile is presented based on a partial-differential-equation-constrained optimization approach. The electrical conductivity profile is iteratively updated by solving the Karush-Kuhn-Tucker optimality conditions and using the conjugate gradient method with an inexact line search. Various analysis conditions such as regularization scheme, number of electrodes, current input patterns, and electrode arrangement were set differently, and the corresponding results were compared. It was found from this study that the proposed EIT method yielded appropriate inversion results with various parameter settings, and the optimal implementation parameters of the EIT method are presented. This study is expected to expand the utility and applicability of EIT for the non-destructive evaluation of structures.

16.
J Environ Manage ; 310: 114753, 2022 May 15.
Article in English | MEDLINE | ID: mdl-35228165

ABSTRACT

The design of groundwater exploitation schedules with constraints on pumping-induced land subsidence is a computationally intensive task. Physical process-based groundwater flow and land subsidence simulations are high-dimensional, nonlinear, dynamic and computationally demanding, as they require solving large systems of partial differential equations (PDEs). This work is the first application of a parallelized surrogate-based global optimization algorithm to mitigate land subsidence issues by controlling the pumping schedule of multiple groundwater wellfields over space and time. The application was demonstrated in a 6500 km2 region in China, involving a large-scale coupled groundwater flow-land subsidence model that is expensive in terms of computational resources, including runtime and CPU memory, for a single evaluation. In addition, the optimization problem contains 50 decision variables and up to 13 constraints, which adds to the computational effort; an efficient optimization is therefore required. The results show that parallel DYSOC (dynamic search with surrogate-based constrained optimization) can achieve approximately 100% parallel efficiency when scaling up computing resources. Compared with two other widely used optimization algorithms, DYSOC is 2-6 times faster, achieving computational cost savings of at least 50%. The findings demonstrate that the integration of surrogate constraints and a dynamic search process can aid in the exploration and exploitation of the search space and accelerate the search for optimal solutions to complicated problems.


Subjects
Groundwater, Algorithms, China
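A schematic surrogate-based constrained optimization loop in the spirit of the description above: fit cheap radial-basis-function surrogates to an expensive objective and constraint, optimize the surrogate problem to propose the next candidate, evaluate the expensive functions once, and refit. The analytic stand-in functions, bounds and solver settings are assumptions, and the sketch is serial rather than the parallel DYSOC algorithm.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_objective(x):      # stand-in for the groundwater-flow simulation
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] + 0.5) ** 2

def expensive_constraint(x):     # stand-in for simulated subsidence, require <= 0
    return x[..., 0] + x[..., 1] - 0.8

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, (12, 2))                    # initial space-filling design
f, g = expensive_objective(X), expensive_constraint(X)

for _ in range(15):
    surr_f = RBFInterpolator(X, f)                 # cheap surrogates of both responses
    surr_g = RBFInterpolator(X, g)
    feas = g <= 0
    x0 = X[feas][np.argmin(f[feas])] if feas.any() else X[np.argmin(g)]
    cand = minimize(lambda x: surr_f(x[None])[0], x0, method="SLSQP",
                    bounds=[(-2, 2), (-2, 2)],
                    constraints=[{"type": "ineq",
                                  "fun": lambda x: -surr_g(x[None])[0]}]).x
    cand = cand + 1e-3 * rng.normal(size=2)        # jitter keeps sample points distinct
    X = np.vstack([X, cand])                       # one new expensive evaluation
    f = np.append(f, expensive_objective(cand))
    g = np.append(g, expensive_constraint(cand))

feas = g <= 0
print("best feasible design found:", X[feas][np.argmin(f[feas])])
```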
17.
Appl Intell (Dordr) ; 52(11): 12630-12667, 2022.
Article in English | MEDLINE | ID: mdl-36161208

ABSTRACT

A novel optimization algorithm called the hybrid salp swarm algorithm with teaching-learning based optimization (HSSATLBO) is proposed in this paper to solve reliability redundancy allocation problems (RRAP) with nonlinear resource constraints. The salp swarm algorithm (SSA) is one of the newest meta-heuristic algorithms and mimics the swarming behaviour of salps. It is an efficient swarm optimization technique that has been used to solve various kinds of complex optimization problems. However, SSA suffers from a slow convergence rate due to its poor exploitation ability. To address this inadequacy and achieve a better balance between exploration and exploitation, the proposed hybrid method HSSATLBO was developed, in which the search procedures of SSA are redesigned based on the TLBO algorithm. The good global search ability of SSA and the fast convergence of TLBO help to maximize the system reliability through the choices of redundancy and component reliability. The performance of the proposed HSSATLBO algorithm is demonstrated on seven well-known benchmark problems related to reliability optimization, including a series system, a complex (bridge) system, a series-parallel system, an overspeed protection system, a convex system, a mixed series-parallel system, and a large-scale system with dimensions 36, 38, 40, 42 and 50. The outcomes of the proposed HSSATLBO are compared with several recently developed competitive meta-heuristic algorithms and also with three improved variants of SSA. Additionally, the HSSATLBO results are statistically investigated with the Wilcoxon signed-rank test and a multiple comparison test to show the significance of the results. The experimental results suggest that HSSATLBO significantly outperforms the other algorithms and is a promising tool for solving RRAP.

18.
J Comput Chem ; 42(7): 492-504, 2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33347643

ABSTRACT

A local optimization algorithm for solving the Kohn-Sham equations is presented. It is based on a direct minimization of the energy functional under the equality constraints representing the Grassmann manifold. The algorithm does not require an eigendecomposition, which may be advantageous in large-scale computations. It is optimized to reduce the number of Kohn-Sham matrix evaluations to one per iteration, making it competitive with the standard self-consistent field (SCF) approach accelerated by direct inversion of the iterative subspace (DIIS). Numerical experiments include a comparison of the algorithm with DIIS. A high reliability of the algorithm is observed in configurations where SCF iterations fail to converge or find a wrong solution corresponding to a stationary point different from the global minimum. The local optimization algorithm itself does not guarantee that the found minimum is global. However, a randomization of the initial approximation shows convergence to the right minimum in the vast majority of cases.
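Direct minimization under orthonormality constraints can be illustrated with a toy problem: minimize tr(X^T A X) over matrices with orthonormal columns using tangent-space-projected gradient steps and a QR retraction, a common way of staying on the Stiefel/Grassmann manifold. The quadratic energy, step size and dimensions below are stand-ins, not the paper's Kohn-Sham algorithm; the result should approach the sum of the k smallest eigenvalues of A.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 3                                      # basis size, number of "occupied orbitals"
A = rng.normal(size=(n, n)); A = (A + A.T) / 2    # symmetric stand-in for the Hamiltonian

def energy(X):
    return np.trace(X.T @ A @ X)                  # toy energy functional E(X) = tr(X^T A X)

X = np.linalg.qr(rng.normal(size=(n, k)))[0]      # orthonormal initial guess
step = 0.05
for _ in range(500):
    G = 2 * A @ X                                 # Euclidean gradient of E
    G_tan = G - X @ (X.T @ G)                     # project onto the tangent space of X^T X = I
    X, _ = np.linalg.qr(X - step * G_tan)         # gradient step followed by QR retraction

print("direct minimization energy:   ", energy(X))
print("sum of k smallest eigenvalues:", np.linalg.eigvalsh(A)[:k].sum())  # ascending order
```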

19.
Magn Reson Med ; 86(3): 1573-1585, 2021 09.
Article in English | MEDLINE | ID: mdl-33733495

ABSTRACT

PURPOSE: To develop a general framework for parallel imaging (PI) with the use of Maxwell regularization for the estimation of the sensitivity maps (SMs) and constrained optimization for the parameter-free image reconstruction. THEORY AND METHODS: Certain characteristics of both the SMs and the images are routinely used to regularize the otherwise ill-posed optimization-based joint reconstruction from highly accelerated PI data. In this paper, we rely on a fundamental property of SMs, namely that they are solutions of Maxwell's equations: we construct the subspace of all possible SM distributions supported in a given field-of-view, and we promote SM solutions that belong to this subspace. In addition, we propose a constrained optimization scheme for the image reconstruction, as a second step, once an accurate estimation of the SMs is available. The resulting method, dubbed Maxwell parallel imaging (MPI), works for both 2D and 3D, with Cartesian and radial trajectories, and minimal calibration signals. RESULTS: The effectiveness of MPI is illustrated for various undersampling schemes, including radial, variable-density Poisson-disc, and Cartesian, and is compared against state-of-the-art PI methods. Finally, we include some numerical experiments that demonstrate the memory footprint reduction of the constructed Maxwell basis with the help of tensor decomposition, thus allowing the use of MPI for full 3D image reconstructions. CONCLUSION: The MPI framework provides a physics-inspired optimization method for the accurate and efficient image reconstruction from arbitrary accelerated scans.


Subjects
Algorithms, Magnetic Resonance Imaging, Brain/diagnostic imaging, Computer-Assisted Image Processing, Three-Dimensional Imaging, Imaging Phantoms
20.
Magn Reson Med ; 85(1): 531-543, 2021 01.
Article in English | MEDLINE | ID: mdl-32857424

ABSTRACT

PURPOSE: To describe and implement a strategy for dynamic slice-by-slice and multiband B0 shimming using spherical harmonic shims in the human brain at 7T. THEORY: For thin axial slices, spherical harmonic shims can be divided into pairs of shims (z-degenerate and non-z-degenerate) that are spatially degenerate, such that only half of the shims (non-z-degenerate) are required for single-slice optimizations. However, when combined, the pairs of shims can be used to simultaneously generate the same in-plane symmetries but with different amplitudes as a function of their z location. This enables multiband shimming equivalent to that achievable by single slice-by-slice optimization. METHODS: All data were acquired at 7T using a spherical harmonic shim insert enabling shimming up through 4th order with two additional 5th order shims (1st-4th+). Dynamic shim updating was achieved using a 10A shim power supply with 2 ms ramps and constrained optimizations to minimize eddy currents. RESULTS: In groups of eight subjects, we demonstrated that: 1) dynamic updating using 1st-4th+ order shims reduced the SD of the B0 field over the whole brain from 32.4 ± 2.6 and 24.9 ± 2 Hz with 1st-2nd and 1st-4th+ static global shimming, respectively, to 15.1 ± 1.7 Hz; 2) near-equivalent performance was achieved when dynamically updating only the non-z-degenerate shims (14.3 ± 1.5 Hz), or when using a multiband shim factor of 2 (MBs = 2) and all shims (14.4 ± 2.0 Hz). CONCLUSION: High-order spherical harmonics provide substantial improvements over static global shimming and enable dynamic multiband shimming with near-equivalent performance to that of dynamic slice-by-slice shimming. This reduces distortion in echo planar imaging.


Subjects
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping, Echo-Planar Imaging, Humans
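At its core, static shimming of a single slice is a bounded least-squares problem: find shim currents, within hardware amplitude limits, that best cancel the measured field map. The sketch below shows that step only, with simulated fields and made-up limits; the dynamic updating, multiband grouping and eddy-current constraints described in the abstract above are not modeled.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_vox, n_shims = 2000, 12                 # voxels in a slice, spherical-harmonic channels
A = rng.normal(size=(n_vox, n_shims))     # field produced per unit current of each shim (Hz/A)
true_currents = rng.uniform(-5.0, 5.0, n_shims)
b0 = A @ true_currents + rng.normal(scale=5.0, size=n_vox)   # measured off-resonance field (Hz)

# choose currents x so that A @ x ≈ -b0, subject to a ±10 A hardware limit
res = lsq_linear(A, -b0, bounds=(-10.0, 10.0))
shimmed = b0 + A @ res.x

print("field SD before shimming: %.1f Hz" % b0.std())
print("field SD after shimming:  %.1f Hz" % shimmed.std())
```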