Results 1 - 20 of 63
1.
Entropy (Basel) ; 26(4)2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38667873

ABSTRACT

In the acquisition process of 3D cultural relics, it is common to encounter noise. To facilitate the generation of high-quality 3D models, we propose an approach based on graph signal processing that combines color and geometric features to denoise the point cloud. We divide the 3D point cloud into patches based on self-similarity theory and create an appropriate underlying graph with a Markov property. The features of the vertices in the graph are represented using 3D coordinates, normal vectors, and color. We formulate point cloud denoising as a maximum a posteriori (MAP) estimation problem and use a graph Laplacian regularization (GLR) prior to identify the most probable noise-free point cloud. In the denoising process, we moderately simplify the 3D point cloud to reduce the running time of the denoising algorithm. The experimental results demonstrate that our proposed approach outperforms five competing methods in both subjective and objective assessments. It requires fewer iterations and exhibits strong robustness, effectively removing noise from the surface of cultural relic point clouds while preserving fine-scale 3D features such as texture and ornamentation. This results in more realistic 3D representations of cultural relics.
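
A minimal sketch of the MAP-with-GLR-prior idea, assuming a Gaussian noise model and a k-nearest-neighbour graph on raw coordinates (the patch construction, color/normal features, and Markov design are omitted; `glr_denoise` and its parameters are illustrative, not the authors' code):

```python
import numpy as np

def glr_denoise(points, lam=0.5, k=6):
    """MAP denoising with a graph Laplacian regularization (GLR) prior:
    x* = argmin_x ||y - x||^2 + lam * x^T L x, i.e. solve (I + lam*L) x = y."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / d2[d2 > 0].mean())      # Gaussian edge weights
    np.fill_diagonal(W, 0.0)
    drop = np.argsort(W, axis=1)[:, :-k]     # all but the k strongest per row
    keep = np.ones_like(W, dtype=bool)
    np.put_along_axis(keep, drop, False, axis=1)
    W = np.where(keep | keep.T, W, 0.0)      # symmetrised k-NN support
    L = np.diag(W.sum(1)) - W                # combinatorial graph Laplacian
    return np.linalg.solve(np.eye(n) + lam * L, points)

denoised = glr_denoise(np.random.rand(200, 3))
```

Because the Gaussian likelihood makes the MAP objective quadratic, the estimate comes from a single linear solve rather than an iterative search.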

2.
Entropy (Basel) ; 25(4)2023 Apr 16.
Article in English | MEDLINE | ID: mdl-37190456

ABSTRACT

The error probability of block codes sent under a non-uniform input distribution over the memoryless binary symmetric channel (BSC) and decoded via the maximum a posteriori (MAP) decoding rule is investigated. It is proved that the ratio of the probability of MAP decoder ties to the probability of error when no MAP decoding ties occur grows at most linearly in blocklength, thus showing that decoder ties do not affect the code's error exponent. This result generalizes a similar recent result shown for the case of block codes transmitted over the BSC under a uniform input distribution.
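
For concreteness, a brute-force sketch of the MAP rule studied here (the toy codebook and prior are assumptions; the paper's analysis works with code ensembles, not enumeration):

```python
import numpy as np

def map_decode(y, codebook, prior, p):
    """MAP decoding over a memoryless BSC with crossover probability p:
    maximize log P(c) + d*log(p) + (n-d)*log(1-p), d = Hamming distance."""
    n = codebook.shape[1]
    d = (codebook != y).sum(axis=1)
    score = np.log(prior) + d * np.log(p) + (n - d) * np.log(1 - p)
    return codebook[np.argmax(score)]   # np.argmax breaks ties by first index

codebook = np.array([[0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]])
prior = np.array([0.5, 0.3, 0.2])       # non-uniform input distribution
print(map_decode(np.array([1, 1, 0, 1]), codebook, prior, p=0.1))
```

Decoder ties are exactly the events where two codewords attain the same maximal score; the abstract's result says such events are rare enough not to change the error exponent.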

3.
Sensors (Basel) ; 22(3)2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35161766

ABSTRACT

Blind modulation classification (MC) is an integral part of designing an adaptive or intelligent transceiver for future wireless communications. Blind MC has several applications in the adaptive and automated systems of sixth-generation (6G) communications to improve spectral efficiency and power efficiency and reduce latency. It will become an integral part of intelligent software-defined radios (SDR) for future communication. In this paper, we survey various MC techniques for orthogonal frequency division multiplexing (OFDM) signals in a systematic way. We focus on the most widely used statistical and machine learning (ML) models and emphasize their advantages and limitations. Statistical blind MC includes likelihood-based (LB), maximum a posteriori (MAP), and feature-based (FB) methods. ML-based automated MC includes k-nearest neighbors (KNN), support vector machine (SVM), decision tree (DT), convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM) based MC methods. This survey will help the reader understand the main characteristics of each technique and its advantages and disadvantages. We have also simulated some primary methods, i.e., statistical and ML-based algorithms, under various constraints, which allows a fair comparison among the different methodologies. The overall system performance in terms of bit error rate (BER) in the presence of MC is also provided. We also survey some practical experimental work carried out with National Instruments hardware in an indoor propagation environment. In the end, open problems and possible directions for blind MC research are briefly discussed.
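
As an illustration of the likelihood-based (LB) family mentioned above, a toy classifier that scores candidate constellations by the log-likelihood of received symbols under AWGN (the constellations, noise variance, and symbol count are assumptions; OFDM-specific processing is omitted):

```python
import numpy as np

def lb_classify(r, constellations, sigma2):
    """Score each candidate constellation by the log-likelihood of the
    received symbols r, assuming equiprobable symbols and complex AWGN."""
    scores = {}
    for name, pts in constellations.items():
        d2 = np.abs(r[:, None] - pts[None, :]) ** 2
        like = np.exp(-d2 / sigma2).mean(axis=1) / (np.pi * sigma2)
        scores[name] = np.log(like + 1e-300).sum()
    return max(scores, key=scores.get)

bpsk = np.array([1.0 + 0j, -1.0 + 0j])
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
rng = np.random.default_rng(0)
sym = rng.choice(qpsk, 500)
noise = (rng.standard_normal(500) + 1j * rng.standard_normal(500)) / np.sqrt(2)
r = sym + 0.3 * noise
print(lb_classify(r, {"BPSK": bpsk, "QPSK": qpsk}, sigma2=0.09))  # -> QPSK
```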


Subjects
Algorithms, Neural Networks (Computer), Likelihood Functions, Machine Learning, Support Vector Machine
4.
Entropy (Basel) ; 25(1)2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36673191

ABSTRACT

To derive a latent trait (for instance, ability) in a computer adaptive testing (CAT) framework, the results obtained from a model must relate directly to the examinees' responses to the set of items presented. The set of items is calibrated beforehand to decide which item to present to the examinee in the next evaluation question. Some useful models are more naturally based on conditional probability in order to involve previously obtained hits/misses. In this paper, we integrate an experimental part, obtaining the information related to the examinees' academic performance, with a theoretical contribution based on maximum entropy. Academic performance index functions are built to support the experimental part and then to explain under what conditions one can use constrained prior distributions. Additionally, we highlight that heuristic prior distributions might not work properly in all likely cases, and we indicate when to use personalized prior distributions instead. Finally, the performance index functions, arising from current experimental studies and historical records, are integrated into a theoretical framework based on entropy maximization and its relationship with a CAT process.
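
For orientation, the standard maximum-entropy result this kind of construction rests on: maximizing entropy subject to moment constraints yields an exponential-family prior,

```latex
\max_{p}\; -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i = 1,\qquad \sum_i p_i\, f_k(x_i) = \mu_k
\qquad\Longrightarrow\qquad
p_i \propto \exp\!\Big(\sum_k \lambda_k f_k(x_i)\Big),
```

where the multipliers λ_k are fixed by the constraint values μ_k. Reading the f_k as the academic performance index functions is our interpretation of the abstract, not a formula quoted from the paper.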

5.
Sensors (Basel) ; 21(12)2021 Jun 11.
Article in English | MEDLINE | ID: mdl-34208036

ABSTRACT

Object tracking is one of the most challenging problems in the field of computer vision. In challenging scenarios such as illumination variation, occlusion, motion blur, and fast motion, existing algorithms can exhibit degraded performance. To make better use of the various features of the image, we propose an object tracking method based on the self-adaptive feature selection (SAFS) algorithm, which can select the most distinguishable feature sub-template to guide the tracking task. The similarity of each feature sub-template can be calculated from the histogram of the features. The distinguishability of each feature sub-template can then be measured from their similarity matrix based on maximum a posteriori (MAP) estimation. This process transforms the selection of the feature sub-template into a classification task between feature vectors, with a modified Jeffreys' entropy adopted as the discriminant metric for classification, which completes the update of the sub-template. Experiments with eight video sequences from the Visual Tracker Benchmark dataset evaluate the comprehensive performance of SAFS and compare it with five baselines. Experimental results demonstrate that SAFS can overcome the difficulties caused by scene changes and achieve robust object tracking.
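
A small sketch of Jeffreys' entropy (the symmetrized Kullback-Leibler divergence) as a discriminant between feature histograms; the toy histograms and selection rule below are illustrative, not the SAFS update itself:

```python
import numpy as np

def jeffreys(p, q, eps=1e-12):
    """Jeffreys' divergence between two normalized histograms:
    J(P, Q) = sum_i (p_i - q_i) * log(p_i / q_i)."""
    p = np.clip(np.asarray(p, float), eps, None); p = p / p.sum()
    q = np.clip(np.asarray(q, float), eps, None); q = q / q.sum()
    return float(np.sum((p - q) * np.log(p / q)))

# Toy rule: prefer the feature whose template/candidate histograms differ most.
hists = {"color":    ([8, 1, 1], [7, 2, 1]),
         "gradient": ([4, 3, 3], [1, 1, 8])}
print(max(hists, key=lambda f: jeffreys(*hists[f])))  # -> "gradient"
```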

6.
Sensors (Basel) ; 21(2)2021 Jan 11.
Article in English | MEDLINE | ID: mdl-33440714

ABSTRACT

Detection of multiple lane markings on road surfaces is an important aspect of autonomous vehicles. Although a number of approaches have been proposed to detect lanes, detecting multiple lane markings consistently, particularly across a large number of frames and under varying lighting conditions, is still a challenging problem. In this paper, we propose a novel approach for detecting multiple lanes across a large number of frames and under various lighting conditions. Instead of resorting to the conventional approach of processing each frame to detect lanes, we treat the overall problem as a multitarget tracking problem across space and time, using the integrated probabilistic data association filter (IPDAF) as our basis filter. We use the intensity of the pixels as an augmented feature to correctly group multiple lane markings using the Hough transform. By representing these extracted lane markings as splines, we then identify a set of control points, which become a set of targets to be tracked over a period of time, and thus across a large number of frames. We evaluate our approach on two different fronts, covering both model- and machine-learning-based approaches, using two different datasets, namely the Caltech and TuSimple lane detection datasets, respectively. When tested against the model-based approach, the proposed approach offers improvements of as much as 5%, 12%, and 3% in the true positive, false positive, and false positives per frame rates, respectively, compared to the best alternative approach. When compared against a state-of-the-art machine learning technique, particularly a supervised learning method, the proposed approach offers improvements of 57%, 31%, 4%, and 9× in the false positive, false negative, accuracy, and frame rates. Furthermore, the proposed approach retains explainability; in other words, the causes of its actions can easily be understood or explained.
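
A sketch of the Hough-transform front end described above, using OpenCV's probabilistic Hough transform (the file name, thresholds, and intensity gating are placeholders; the spline fitting and the IPDAF itself are not shown):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)          # placeholder input
_, bright = cv2.threshold(frame, 180, 255, cv2.THRESH_BINARY)  # intensity gate
edges = cv2.Canny(bright, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=10)
# Each detected segment is a lane-marking candidate; control points of splines
# fitted through grouped segments become the targets handed to the tracker.
```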

7.
Sensors (Basel) ; 21(16)2021 Aug 08.
Article in English | MEDLINE | ID: mdl-34450793

ABSTRACT

This paper addresses the main crucial aspects of physical (PHY) layer channel coding in uplink NB-IoT systems. In uplink NB-IoT systems, various channel coding algorithms are deployed because the adopted Long-Term Evolution (LTE) channel coding presents great challenges: high decoding complexity, high power consumption, and error-floor phenomena, along with performance degradation at short block lengths. Such a design considerably increases overall system complexity and is difficult to implement. The existing LTE turbo codes are therefore not recommended for NB-IoT systems, and new channel coding algorithms need to be employed for low-power wide-area (LPWA) specifications. First, LTE-based turbo decoding and frequency-domain turbo equalization algorithms are proposed: a simplified maximum a posteriori probability (MAP) decoder is modified, and minimum mean square error (MMSE) turbo equalization is applied to different Narrowband Physical Uplink Shared Channel (NPUSCH) subcarriers for interference cancellation. These proposed methods aim to minimize the complexity of realizing the traditional MAP turbo decoder and MMSE estimators within the new NB-IoT PHY layer features. We compare system performance in terms of block error rate (BLER) and computational complexity.
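
A standard ingredient of such simplified (max-log) MAP decoders is the Jacobian logarithm and the approximation that drops its correction term; a generic sketch, not the paper's decoder:

```python
import numpy as np

def max_star(a, b):
    """Exact Jacobian logarithm used in log-MAP decoding:
    max*(a, b) = log(e^a + e^b) = max(a, b) + log(1 + e^(-|a - b|))."""
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

def max_star_maxlog(a, b):
    """Max-log-MAP simplification: keep only the max term."""
    return np.maximum(a, b)

print(max_star(1.2, 0.9), max_star_maxlog(1.2, 0.9))  # 1.754... vs 1.2
```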

8.
Entropy (Basel) ; 23(1)2021 Jan 03.
Article in English | MEDLINE | ID: mdl-33401583

ABSTRACT

Uncertainty is at the heart of decision-making processes in most real-world applications. Uncertainty can be broadly categorized into two types: aleatory and epistemic. Aleatory uncertainty describes the variability in the physical system, where sensors provide information (hard) of a probabilistic type. Epistemic uncertainty appears when the information is incomplete or vague, such as judgments or human expert appreciations in linguistic form. Linguistic information (soft) typically introduces a possibilistic type of uncertainty. This paper is concerned with the problem of classification when the available information concerning the observed features may be of a probabilistic nature for some features and of a possibilistic nature for others. In this configuration, most existing studies transform one of the two information types into the other form and then apply either classical Bayesian-based or possibilistic-based decision-making criteria. In this paper, a new hybrid decision-making scheme is proposed for classification when both hard and soft information sources are present. A new Possibilistic Maximum Likelihood (PML) criterion is introduced to improve classification rates compared to a classical approach using only information from hard sources. The proposed PML jointly exploits both probabilistic and possibilistic sources within the same probabilistic decision-making framework, without requiring conversion of the possibilistic sources into probabilistic ones, or vice versa.

9.
Mol Biol Evol ; 36(9): 2069-2085, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31127303

ABSTRACT

The reconstruction of ancestral scenarios is widely used to study the evolution of characters along phylogenetic trees. One commonly uses the marginal posterior probabilities of the character states or the joint reconstruction of the most likely scenario. However, marginal reconstructions provide users with state probabilities, which are difficult to interpret and visualize, whereas joint reconstructions select a unique state for every tree node and thus do not reflect the uncertainty of the inferences. We propose a simple and fast approach that lies between these two extremes. We use decision-theory concepts (namely, the Brier score) to associate each node in the tree with a set of likely states. A unique state is predicted in tree regions with low uncertainty, whereas several states are predicted in uncertain regions, typically around the tree root. To visualize the results, we cluster neighboring nodes associated with the same states and use graph visualization tools. The method is implemented in the PastML program and web server. Results on simulated data demonstrate the accuracy and robustness of the approach. PastML was applied to the phylogeography of Dengue serotype 2 (DENV2) and to the evolution of drug resistance in a large HIV data set. These analyses took a few minutes and provided convincing results. PastML retrieved the main transmission routes of human DENV2 and showed the uncertainty of the human-sylvatic DENV2 geographic origin. For HIV, the results show that resistance mutations mostly emerge independently under treatment pressure, but resistance clusters are found, corresponding to transmissions among untreated patients.
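
The Brier score at the heart of the decision-theoretic step can be stated compactly; a minimal sketch (the set-selection rule built on top of it is in the paper, not reproduced here):

```python
import numpy as np

def brier(prob, true_state):
    """Brier score of predicted state probabilities against the true state:
    BS = sum_k (p_k - 1{k == true_state})^2; lower is better."""
    onehot = np.zeros_like(prob)
    onehot[true_state] = 1.0
    return float(np.sum((prob - onehot) ** 2))

print(brier(np.array([0.7, 0.2, 0.1]), 0))  # confident and correct -> 0.14
```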


Subjects
Computational Biology/methods, Phylogeny, Software, Decision Theory, Dengue Virus/genetics, HIV/genetics
10.
Risk Anal ; 40(9): 1706-1722, 2020 09.
Article in English | MEDLINE | ID: mdl-32602232

ABSTRACT

Model averaging for dichotomous dose-response estimation is preferred to estimating the benchmark dose (BMD) from a single model, but challenges remain in implementing these methods for general analyses before model averaging is feasible to use in many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in many applications, such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose-response data and measuring the coverage of confidence limits for the BMD. We compare the coverage of this method to that of other approaches using the same set of models. Through the simulation study, the method is shown to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data, and comparable or superior to the other approaches.
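
A toy sketch of the MAP step, assuming a hypothetical two-parameter logistic dose-response model and weak Gaussian priors (neither the constituent models nor the priors of the paper):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, dose, n, y):
    """Negative log-posterior: binomial likelihood for a logistic
    dose-response curve plus independent N(0, 10^2) priors on theta."""
    a, b = theta
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(dose + 1e-9))))
    loglik = np.sum(y * np.log(p + 1e-12) + (n - y) * np.log(1 - p + 1e-12))
    logprior = -0.5 * np.sum(np.asarray(theta) ** 2 / 100.0)
    return -(loglik + logprior)

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n = np.full(5, 50); y = np.array([2, 4, 8, 20, 35])   # toy incidence data
fit = minimize(neg_log_posterior, x0=np.array([-2.0, 1.0]),
               args=(dose, n, y), method="Nelder-Mead")
print(fit.x)  # MAP estimate; a Laplace step around it approximates the posterior
```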


Subjects
Bayes Theorem, Risk Assessment, Uncertainty, Dose-Response Relationship (Drug), Isocyanates/administration & dosage, Nitrosamines/administration & dosage
11.
Sensors (Basel) ; 20(13)2020 Jun 30.
Article in English | MEDLINE | ID: mdl-32630011

ABSTRACT

Tracking individual animals in a group setting is an exigent task for computer vision and animal science researchers. When the objective is months of uninterrupted tracking and the targeted animals lack discernible differences in their physical characteristics, the task introduces significant challenges. To address these challenges, a probabilistic tracking-by-detection method is proposed. The tracking method uses, as input, visible keypoints of individual animals provided by a fully convolutional detector. Individual animals are also equipped with ear tags that are used by a classification network to assign unique identification to instances. The fixed cardinality of the targets is leveraged to create a continuous set of tracks, and the forward-backward algorithm is used to assign ear-tag identification probabilities to each detected instance. Tracking achieves real-time performance on consumer-grade hardware, in part because it does not rely on complex, costly, graph-based optimizations. A publicly available, human-annotated dataset is introduced to evaluate tracking performance. This dataset contains 15 half-hour-long videos of pigs with various ages/sizes, facility environments, and activity levels. Results demonstrate that the proposed method achieves an average precision and recall greater than 95% across the entire dataset. Analysis of the error events reveals the environmental conditions and social interactions that are most likely to cause errors in real-world deployments.
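
The forward-backward step can be sketched as a standard HMM smoother over identities; the toy transition model and scores below are assumptions, not the paper's trained detector/classifier outputs:

```python
import numpy as np

def forward_backward(obs_like, trans, init):
    """obs_like[t, i] = p(observation at frame t | identity i); returns the
    per-frame posterior identity probabilities (scaled alpha-beta recursion)."""
    T, N = obs_like.shape
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    alpha[0] = init * obs_like[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = obs_like[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (obs_like[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
obs = rng.dirichlet(np.ones(3), size=6)            # noisy ear-tag scores
trans = np.full((3, 3), 0.05) + 0.85 * np.eye(3)   # identities are sticky
print(forward_backward(obs, trans, np.full(3, 1 / 3)))
```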


Subjects
Algorithms, Animal Identification Systems, Animal Housing, Livestock, Animals, Datasets as Topic, Swine
12.
Sensors (Basel) ; 19(1)2018 Dec 25.
Article in English | MEDLINE | ID: mdl-30585222

ABSTRACT

Localization is a critical issue for Underwater Acoustic Sensor Networks (UASNs). Existing localization algorithms mainly focus on localizing unknown nodes (location-unaware) by measuring their distances to beacon nodes (location-aware), while ignoring additional challenges posed by harsh underwater environments. In particular, underwater nodes move constantly with ocean currents, and measurement noise varies with distance. In this paper, we consider a special drifting-restricted UASN and propose a novel beacon-free algorithm, called MAP-PSO. It consists of two steps: MAP estimation and PSO localization. In MAP estimation, we analyze nodes' mobility patterns, which provide the prior knowledge for localization, and characterize distance measurements under the assumption of additive and multiplicative noises, which serve as the likelihood information for localization. The prior and likelihood information are then fused to derive the localization objective function. In PSO localization, a swarm of particles is used to search for the best location solution from local and global views simultaneously. Moreover, we eliminate localization ambiguity using a novel reference selection mechanism and improve the convergence speed using a bound constraint mechanism. In the simulations, we evaluate the performance of the proposed algorithm under different settings and determine the optimal values for tunable parameters. The results show that our algorithm outperforms the benchmark method with high localization accuracy and low energy consumption.
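
A generic sketch of the PSO-over-posterior idea with a bound constraint (the mobility prior, noise models, and anchors below are stand-ins, not the MAP-PSO internals):

```python
import numpy as np

def pso_map(neg_log_post, lo, hi, n=40, iters=100, w=0.7, c1=1.5, c2=1.5,
            rng=np.random.default_rng(0)):
    """Particle swarm minimizing a negative log-posterior inside [lo, hi]."""
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pf = x.copy(), np.array([neg_log_post(p) for p in x])
    g = pbest[pf.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # bound constraint
        f = np.array([neg_log_post(p) for p in x])
        better = f < pf
        pbest[better], pf[better] = x[better], f[better]
        g = pbest[pf.argmin()]
    return g

# Toy posterior: Gaussian drift prior + two range likelihoods (assumed models).
anchors = np.array([[0.0, 0.0], [100.0, 0.0]]); ranges = np.array([60.0, 70.0])
def nlp(p):
    prior = np.sum((p - np.array([45.0, 40.0])) ** 2) / (2 * 20.0 ** 2)
    resid = np.linalg.norm(p - anchors, axis=1) - ranges
    return prior + np.sum(resid ** 2) / (2 * 3.0 ** 2)
print(pso_map(nlp, np.array([-50.0, -50.0]), np.array([150.0, 150.0])))
```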

13.
J Xray Sci Technol ; 26(5): 853-864, 2018.
Article in English | MEDLINE | ID: mdl-30124464

ABSTRACT

Development of spectral X-ray computed tomography (CT) equipped with photon-counting detectors has recently been attracting great research interest. This work aims to improve the quality of spectral X-ray CT images. A maximum a posteriori (MAP) expectation-maximization (EM) algorithm is applied to reconstruct image-based weighting spectral X-ray CT images. A spectral X-ray CT system based on a cadmium zinc telluride photon-counting detector and a fat cylinder phantom were simulated. Compared with the commonly used filtered back projection (FBP) method, the proposed method reduced noise in the final weighting images at 2, 4, 6, and 9 energy bins by up to 85.2%, 87.5%, 86.7%, and 85%, respectively. The contrast-to-noise ratio (CNR) improvement ranged from 6.53 to 7.77. Compared with the prior image constrained compressed sensing (PICCS) method, the proposed method reduced noise in the final weighting images by 36.5%, 44.6%, 27.3%, and 18% at 2, 4, 6, and 9 energy bins, respectively, and improved the CNR by 1.17 to 1.81. The simulation study showed that, compared with the FBP and PICCS algorithms, image-based weighting imaging using the MAP-EM statistical algorithm yielded significant improvement in the CNR and reduced noise in the final weighting image.
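
A compact sketch of a MAP-EM update in its common one-step-late form (a generic emission-style iteration under an assumed quadratic prior, not the paper's spectral pipeline):

```python
import numpy as np

def map_em_step(x, A, y, beta=0.0, grad_prior=None):
    """One-step-late MAP-EM update for count data y ~ Poisson(Ax):
    x_new = x / (A^T 1 + beta * dU/dx) * A^T (y / (A x))."""
    denom = A.T @ np.ones(len(y))
    if grad_prior is not None:
        denom = denom + beta * grad_prior(x)
    return x / np.maximum(denom, 1e-12) * (A.T @ (y / np.maximum(A @ x, 1e-12)))

rng = np.random.default_rng(1)
A = rng.random((6, 4))                       # toy system matrix: 6 rays, 4 pixels
y = rng.poisson(A @ np.array([1.0, 4.0, 4.0, 1.0])).astype(float)
grad = lambda x: 2.0 * (x - x.mean())        # illustrative smoothness-prior gradient
x = np.ones(4)
for _ in range(50):
    x = map_em_step(x, A, y, beta=0.05, grad_prior=grad)
print(x)
```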


Subjects
Algorithms, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Imaging Phantoms, Photons
14.
Biomed Eng Online ; 16(1): 25, 2017 Feb 07.
Article in English | MEDLINE | ID: mdl-28173816

ABSTRACT

BACKGROUND: In this manuscript, a noise filtering technique for magnetic resonance image stacks is presented. Magnetic resonance images are usually affected by artifacts and noise due to several causes. Several denoising approaches have been proposed in the literature, with different trade-offs between computational complexity, regularization, and noise reduction. Most of them are supervised, i.e., they require the setup of several parameters. A completely unsupervised approach could have a positive impact on the community. RESULTS: The method exploits Markov random fields in order to implement a 3D maximum a posteriori estimator of the image. Due to the local nature of the considered model, the algorithm is able to adapt the smoothing intensity to the local characteristics of the images by analyzing the 3D neighborhood of each voxel. The effect is a combination of detail preservation and noise reduction. The algorithm has been compared to other widely adopted denoising methodologies in MRI. Both simulated and real datasets have been considered for validation. Real datasets were acquired at 1.5 and 3 T. The methodology provides interesting results both in terms of noise reduction and edge preservation, without any supervision. CONCLUSIONS: A novel method for regularizing 3D MR image stacks is presented. The approach exploits Markov random fields to locally adapt the filter intensity. Compared to other widely adopted noise filters, the method provides interesting results without requiring the tuning of any parameter by the user.
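
A 2D single-slice sketch of MRF-based MAP denoising via iterated conditional modes (the paper's estimator is 3D and adapts its smoothing locally; the quantization levels and weights here are assumptions):

```python
import numpy as np

def icm_denoise(y, beta=2.0, sigma=0.1, iters=5, levels=np.linspace(0, 1, 16)):
    """Greedy MAP estimate: Gaussian likelihood + pairwise MRF smoothness
    prior over the 4-neighbourhood, minimized one pixel at a time."""
    x = y.copy()
    for _ in range(iters):
        for i in range(1, x.shape[0] - 1):
            for j in range(1, x.shape[1] - 1):
                nbs = np.array([x[i-1, j], x[i+1, j], x[i, j-1], x[i, j+1]])
                cost = (y[i, j] - levels) ** 2 / (2 * sigma ** 2) \
                     + beta * ((levels[:, None] - nbs) ** 2).sum(axis=1)
                x[i, j] = levels[np.argmin(cost)]
    return x

rng = np.random.default_rng(0)
img = np.kron(rng.random((8, 8)), np.ones((8, 8)))      # blocky ground truth
out = icm_denoise(np.clip(img + 0.1 * rng.standard_normal(img.shape), 0, 1))
```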


Subjects
Algorithms, Artifacts, Brain/anatomy & histology, Image Enhancement/methods, Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Bayes Theorem, Statistical Data Interpretation, Humans, Magnetic Resonance Imaging/instrumentation, Markov Chains, Automated Pattern Recognition/methods, Imaging Phantoms, Reproducibility of Results, Sensitivity and Specificity, Unsupervised Machine Learning
15.
Sensors (Basel) ; 17(11)2017 Nov 21.
Article in English | MEDLINE | ID: mdl-29160797

ABSTRACT

This work addresses the problem of tracking a signal-emitting mobile target in wireless sensor networks (WSNs) with navigated mobile sensors. The sensors are properly equipped to acquire received signal strength (RSS) and angle of arrival (AoA) measurements from the received signal, while the target transmit power is assumed unknown. We start by showing how to linearize the highly non-linear measurement model. Then, by employing a Bayesian approach, we combine the linearized observation model with prior knowledge extracted from the state transition model. Based on the maximum a posteriori (MAP) principle and the Kalman filtering (KF) framework, we propose new MAP and KF algorithms, respectively. We also propose a simple and efficient mobile sensor navigation procedure, which allows us to further enhance the estimation accuracy of our algorithms with a reduced number of sensors. Model flaws, which result in imperfect knowledge of the path loss exponent (PLE) and the true mobile sensors' locations, are taken into consideration. We have carried out an extensive simulation study, and our results confirm the superiority of the proposed algorithms, as well as the effectiveness of the proposed navigation routine.
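
Once the measurement model is linearized, a textbook Kalman predict/update cycle applies; a minimal sketch (the state layout, F, H, and noise covariances are assumptions):

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One Kalman filter cycle: predict with (F, Q), update with (H, R)."""
    x = F @ x                        # predicted state
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # corrected state
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The MAP variant in the abstract replaces this recursive update with a direct maximization of the linearized posterior at each step.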

16.
Biomed Eng Online ; 15(1): 94, 2016 Aug 11.
Article in English | MEDLINE | ID: mdl-27516085

ABSTRACT

BACKGROUND: We describe the first automatic algorithm designed to estimate the pulse pressure variation ([Formula: see text]) from arterial blood pressure (ABP) signals under spontaneous breathing conditions. While there are currently a few publicly available algorithms to estimate [Formula: see text] automatically, accurately, and reliably in mechanically ventilated subjects, at the moment there is no automatic algorithm for estimating [Formula: see text] in spontaneously breathing subjects. The algorithm utilizes our recently developed sequential Monte Carlo method (SMCM), which is called a maximum a posteriori adaptive marginalized particle filter (MAM-PF). We report the performance assessment results of the proposed algorithm on real ABP signals from spontaneously breathing subjects. RESULTS: Our assessment results indicate good agreement between the automatically estimated [Formula: see text] and the gold standard [Formula: see text] obtained with manual annotations. All of the automatically estimated [Formula: see text] index measurements ([Formula: see text]) were in agreement with the manual gold standard measurements ([Formula: see text]) within ±4% accuracy. CONCLUSION: The proposed automatic algorithm is able to give reliable estimations of [Formula: see text] given ABP signals alone during spontaneous breathing.
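
The marginalized particle filter is specific to the paper, but the sequential Monte Carlo skeleton it builds on can be sketched generically (the state and observation models below are toy assumptions):

```python
import numpy as np

def sir_filter(obs, propagate, loglik, x0, n=500, rng=np.random.default_rng(0)):
    """Sequential importance resampling: propagate particles, weight them by
    the observation likelihood, report a MAP-style point estimate, resample."""
    x = np.tile(x0, (n, 1)) + 0.1 * rng.standard_normal((n, len(x0)))
    est = []
    for z in obs:
        x = propagate(x, rng)
        logw = loglik(z, x)
        w = np.exp(logw - logw.max()); w /= w.sum()
        est.append(x[np.argmax(w)].copy())      # highest-weight particle
        x = x[rng.choice(n, size=n, p=w)]       # multinomial resampling
    return np.array(est)

prop = lambda x, rng: x + 0.05 * rng.standard_normal(x.shape)   # random walk
ll = lambda z, x: -0.5 * ((z - x[:, 0]) / 0.2) ** 2             # Gaussian obs
track = sir_filter(np.sin(np.linspace(0, 3, 60)), prop, ll, x0=np.array([0.0]))
```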


Subjects
Algorithms, Blood Pressure Determination, Respiration, Computer-Assisted Signal Processing, Humans, Monte Carlo Method, Artificial Respiration, Statistics as Topic
17.
J Digit Imaging ; 29(3): 394-402, 2016 06.
Article in English | MEDLINE | ID: mdl-26714680

ABSTRACT

Emission tomographic image reconstruction is an ill-posed problem, due to limited and noisy data and various image-degrading effects affecting the data, and it leads to noisy reconstructions. Explicit regularization through iterative reconstruction methods is considered a better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general, and emission computed tomography in particular, with respect to the quality of the resultant images.


Subjects
Algorithms, Computer-Assisted Image Processing/methods, Positron-Emission Tomography, Single Photon Emission Computed Tomography, Humans, Imaging Phantoms
18.
J Struct Biol ; 190(2): 200-14, 2015 May.
Article in English | MEDLINE | ID: mdl-25839831

ABSTRACT

Single-particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E-M) algorithm are popular because of their ability to produce high-resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude, and the proposed algorithm produces reconstructions of similar quality to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that the speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300.
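
A toy illustration of the subspace idea (plain PCA, not the authors' factorization): once images and reference projections are expressed in a shared low-dimensional basis, each comparison costs the subspace dimension rather than the pixel count:

```python
import numpy as np

rng = np.random.default_rng(0)
imgs = rng.standard_normal((500, 64 * 64))    # stand-ins for particle images
refs = rng.standard_normal((200, 64 * 64))    # stand-ins for projections
mean = imgs.mean(axis=0)
_, _, Vt = np.linalg.svd(imgs - mean, full_matrices=False)
basis = Vt[:50]                               # 50-dimensional principal subspace
a = (imgs - mean) @ basis.T                   # 4096-D -> 50-D coefficients
b = (refs - mean) @ basis.T
d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # all-pairs distances in 50-D
```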


Subjects
Algorithms, Cryoelectron Microscopy/methods, Computer-Assisted Image Processing/methods, Macromolecular Substances/chemistry, Molecular Models, Theoretical Models, Likelihood Functions
19.
J Struct Biol ; 191(3): 318-31, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26193484

ABSTRACT

In single-particle reconstruction, the initial 3D structure often suffers from limited-angular-sampling artifacts. Selecting 2D class averages of particle images generally improves the accuracy and efficiency of reference-free 3D angle estimation, but causes insufficient angular sampling to fill in the information of the target object in 3D frequency space. Similarly, the initial 3D structure from random-conical tilt reconstruction has the well-known "missing cone" artifact. Here, we attempted to solve the limited angular sampling problem by sequentially applying a maximum a posteriori estimate with an expectation-maximization algorithm (sMAP-EM). Using both simulated and experimental cryo-electron microscope images, sMAP-EM was compared to the direct Fourier method on the basis of reconstruction error and resolution. To establish selection criteria for the final regularization weight of the sMAP-EM, the effects of noise level and sampling sparseness on the reconstructions were examined with evenly distributed sampling simulations. The frequency information filled in the missing cone of the conical tilt sampling simulations was assessed with newly developed quantitative measurements. All the results of the visual and numerical evaluations showed that sMAP-EM performed better than the direct Fourier method, regardless of the sampling method, noise level, and sampling sparseness. Furthermore, the frequency-domain analysis demonstrated that sMAP-EM can fill in meaningful information in the unmeasured angular space without detailed a priori knowledge of the objects. This research demonstrates that sMAP-EM has high potential to facilitate the determination of 3D protein structures at near-atomic resolution.


Subjects
Cryoelectron Microscopy/methods, Three-Dimensional Imaging/methods, Proteins/chemistry, Algorithms, Artifacts, Bayes Theorem, Computer-Assisted Image Processing/methods
20.
J Pharmacokinet Pharmacodyn ; 42(6): 735-50, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26452548

ABSTRACT

Inter-occasion variability (IOV) is important to consider in the development of a design where individual pharmacokinetic or pharmacodynamic parameters are of interest. IOV may adversely affect the precision of maximum a posteriori (MAP) estimated individual parameters, yet the influence of including IOV in optimal design for the estimation of individual parameters has not been investigated. In this work, two methods of including IOV in the maximum a posteriori Fisher information matrix (FIMMAP) are evaluated: (i) MAPocc, where the IOV is included as a fixed-effect deviation per occasion and individual, and (ii) POPocc, where the IOV is included as an occasion random effect. Sparse sampling schedules were designed for two test models and compared to scenarios where IOV is ignored, either by omitting known IOV (Omit) or by mimicking a situation where unknown IOV has inflated the inter-individual variability (Inflate). Accounting for IOV in the FIMMAP markedly affected the designs compared to ignoring IOV and, as evaluated by stochastic simulation and estimation, resulted in superior precision of the individual parameters. In addition, MAPocc and POPocc accurately predicted precision and shrinkage. For the investigated designs, the MAPocc method was on average slightly superior to POPocc and was less computationally intensive.
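
For orientation, the standard relation behind a MAP Fisher information matrix (our notation, not the paper's): with a multivariate normal prior N(mu, Omega) on the individual parameters, the prior contributes its precision to the data information,

```latex
\mathrm{FIM}_{\mathrm{MAP}}(\theta) \;=\; \mathrm{FIM}_{\mathrm{data}}(\theta) \;+\; \Omega^{-1},
```

so a design trades sampling richness against what the population prior already pins down; the two IOV treatments above differ in how occasion-level variability enters this information.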


Subjects
Anti-Bacterial Agents/pharmacokinetics, Colistin/analogs & derivatives, Biological Models, Statistical Models, Prodrugs/pharmacokinetics, Research Design/statistics & numerical data, Animals, Anti-Bacterial Agents/administration & dosage, Bayes Theorem, Biotransformation, Colistin/administration & dosage, Colistin/pharmacokinetics, Statistical Data Interpretation, Drug Administration Schedule, Humans, Prodrugs/administration & dosage, Reproducibility of Results, Tissue Distribution