Similar Literature
20 related records found.
1.
Values of pharmacokinetic parameters may seem to vary randomly between dosing occasions. An accurate explanation of the pharmacokinetic behaviour of a particular drug within a population therefore requires two major sources of variability to be accounted for, namely interoccasion variability and intersubject variability. A hierarchical model that recognizes these two sources of variation has been developed. Standard Bayesian techniques were applied to this statistical model, and a mathematical algorithm based on a Gibbs sampling strategy was derived. The accuracy of this algorithm's determination of the interoccasion and intersubject variation in pharmacokinetic parameters was evaluated from various population analyses of several sets of simulated data. A comparison of results from these analyses with those obtained from parallel maximum likelihood analyses (NONMEM) showed that, for simple problems, the outputs from the two algorithms agreed well, whereas for more complex situations the NONMEM approach may be less accurate. Statistical analyses of a multioccasion data set of pharmacokinetic measurements on the drug metoprolol (the measurements being of concentrations of drug in blood plasma from human subjects) revealed substantial interoccasion variability for all structural model parameters. For some parameters, interoccasion variability appears to be the primary source of pharmacokinetic variation.

2.
Approximate Bayesian inference on the basis of summary statistics is well suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, methods that use rejection suffer from the curse of dimensionality as the number of summary statistics increases. Here we propose a machine-learning approach to the estimation of the posterior density that introduces two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared with state-of-the-art approximate Bayesian methods and achieves a considerable reduction of the computational burden in two examples of inference, in statistical genetics and in a queueing model.
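The core regression-adjustment idea behind such methods can be sketched in a few lines. The sketch below uses the simpler linear, homoscedastic adjustment rather than the nonlinear heteroscedastic fit and importance-sampling refinement described above; the function names and the toy normal-mean example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def abc_regression_adjustment(theta_sim, s_sim, s_obs, quantile=0.1):
    """Basic ABC with linear regression adjustment.

    theta_sim : (N,) parameter draws from the prior
    s_sim     : (N, d) summary statistics of the simulated data sets
    s_obs     : (d,) summary statistics of the observed data
    Returns adjusted draws approximating the posterior of theta.
    """
    # Scaled Euclidean distance between simulated and observed summaries
    scale = s_sim.std(axis=0) + 1e-12
    dist = np.linalg.norm((s_sim - s_obs) / scale, axis=1)

    # Rejection step: keep the closest fraction of simulations
    keep = dist <= np.quantile(dist, quantile)
    th, S = theta_sim[keep], (s_sim[keep] - s_obs) / scale

    # Local linear regression of theta on the centred summaries; subtracting
    # the fitted trend corrects for the residual discrepancy s_sim - s_obs
    X = np.column_stack([np.ones(S.shape[0]), S])
    beta, *_ = np.linalg.lstsq(X, th, rcond=None)
    return th - S @ beta[1:]

# Toy usage: infer a normal mean with the sample mean and SD as summaries
rng = np.random.default_rng(0)
theta = rng.normal(0, 5, 50_000)                        # prior draws
data = rng.normal(theta[:, None], 1.0, size=(50_000, 20))
s_sim = np.column_stack([data.mean(axis=1), data.std(axis=1)])
s_obs = np.array([1.3, 1.0])                            # "observed" summaries
post = abc_regression_adjustment(theta, s_sim, s_obs)
print(post.mean(), post.std())
```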

3.
Among the innovations and improvements made over the past two decades in the techniques and tools used for statistical process control (SPC), adaptive control charts have been shown to substantially improve statistical and/or economic performance. Variable sampling interval (VSI) control charts are one of the most widely applied types of adaptive control chart and have been shown to be faster than traditional Shewhart control charts in identifying small changes in the quality characteristics of interest. Although the design procedure of VSI control charts assumes that the data or measurements are independent normal observations, in many real processes the validity of these assumptions is questionable. This article develops an economic-statistical design of a VSI X-bar control chart under non-normality and correlation. Since the proposed design involves a complex nonlinear cost model that cannot be solved using a classical optimization method, a genetic algorithm (GA) is employed to solve it. Moreover, to improve performance, response surface methodology (RSM) is employed to calibrate the GA parameters. The solution procedure, efficiency, and a sensitivity analysis of the proposed design are demonstrated through a numerical illustration at the end.
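To make the VSI mechanism concrete, here is a minimal sketch of the sampling-interval rule for an X-bar chart, assuming independent normal observations (the standard setting, not the non-normal, correlated case treated above); the limits k and w and the interval lengths are illustrative values, not those produced by an economic-statistical design.

```python
import numpy as np

def vsi_next_interval(xbar, mu0, sigma, n, k=3.0, w=1.0, h_long=2.0, h_short=0.25):
    """Variable-sampling-interval rule for an X-bar chart (illustrative constants).

    Returns (signal, next_interval_hours): signal if the standardized subgroup
    mean falls outside the control limits +/- k; otherwise the next sample is
    taken after a short interval when the mean lies in the warning zone
    (|z| > w) and after a long interval when it lies in the central zone.
    """
    z = (xbar - mu0) / (sigma / np.sqrt(n))   # standardized subgroup mean
    if abs(z) > k:
        return True, 0.0                      # out-of-control signal
    return False, h_short if abs(z) > w else h_long

# Example: subgroup of size 5 with mean 10.4 from a process with mu0=10, sigma=1
print(vsi_next_interval(10.4, mu0=10.0, sigma=1.0, n=5))
```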

4.
This paper demonstrates that well-known parameter estimation methods for Gaussian fields place different emphasis on the high and low frequency components of the data. As a consequence, the relative importance of the frequencies under the objective of the analysis should be taken into account when selecting an estimation method, in addition to other considerations such as statistical and computational efficiency. The paper also shows that when noise is added to the Gaussian field, maximum pseudolikelihood automatically sets the smoothing parameter of the model equal to one. A simulation study then indicates that generalised cross-validation is more robust than maximum likelihood under model misspecification in smoothing and image restoration problems. This has implications for Bayesian procedures since these use the same weightings of the frequencies as the likelihood.

5.
Economic statistical designs aim at minimizing the cost of process monitoring when a specific scenario, that is, a single set of estimated process and cost parameters, is given. In practice, however, the process may be affected by more than one scenario, which can lead to severe cost penalties if the wrong design is used. Here, we investigate the robust economic statistical design (RESD) of the T2 chart in an attempt to reduce these cost penalties when there are multiple scenarios. Our method employs genetic algorithm (GA) optimization to minimize the total expected monitoring cost across all distinct scenarios. We illustrate the effectiveness of the method using two numerical examples. Simulation studies indicate that robust economic statistical designs should be encouraged in practice.

6.
Composite likelihood inference has gained much popularity thanks to its computational manageability and its theoretical properties. Unfortunately, performing composite likelihood ratio tests is inconvenient because of their awkward asymptotic distribution. There are many proposals for adjusting composite likelihood ratio tests in order to recover an asymptotic chi-square distribution, but they all depend on the sensitivity and variability matrices; the same is true for Wald-type and score-type counterparts. In realistic applications, the sensitivity and variability matrices usually need to be estimated, yet there are no comparisons of the performance of composite likelihood-based statistics in this situation. The accuracy of inference based on these statistics is compared under two methods typically employed for estimating the sensitivity and variability matrices: an empirical method that exploits independent observations, and Monte Carlo simulation. The results in two examples involving the pairwise likelihood show that a very large number of independent observations must be available in order to obtain accurate coverage using empirical estimation, whereas limited simulation from the full model provides accurate results regardless of the availability of independent observations. This suggests the latter as a default choice whenever simulation from the model is possible.
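As a rough illustration of the Monte Carlo route, the sketch below estimates the variability matrix J = Var(composite score) by simulating complete data sets from a fitted full model. The toy equicorrelated-normal model, the independence composite likelihood and the function names are assumptions for demonstration, not the setup used in the paper.

```python
import numpy as np

def mc_variability_matrix(theta_hat, simulate, comp_score, n_sims=2000, seed=0):
    """Monte Carlo estimate of the variability matrix J = Var[composite score],
    obtained by simulating complete data sets from the fitted full model.
    Returns a (p, p) matrix (a scalar variance when p = 1)."""
    rng = np.random.default_rng(seed)
    scores = np.array([comp_score(theta_hat, simulate(theta_hat, rng))
                       for _ in range(n_sims)])
    return np.cov(scores, rowvar=False)

# Toy full model: a 5-dimensional equicorrelated normal with unit variances.
# The independence (composite) likelihood for the common mean mu ignores the
# correlation, so J differs from the sensitivity matrix H = d.
d, rho = 5, 0.5
Sigma = rho * np.ones((d, d)) + (1 - rho) * np.eye(d)

def simulate(mu, rng):
    return rng.multivariate_normal(np.repeat(mu, d), Sigma)

def comp_score(mu, x):
    return np.sum(x - mu)      # d/dmu of the independence log-likelihood

# Theory gives J = d + d*(d-1)*rho = 15 here; the estimate should be close.
print(mc_variability_matrix(0.0, simulate, comp_score))
```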

7.
The analysis of survival endpoints subject to right-censoring is an important research area in statistics, particularly among econometricians and biostatisticians. The two most popular semiparametric models are the proportional hazards model and the accelerated failure time (AFT) model. Rank-based estimation in the AFT model is computationally challenging due to optimization of a non-smooth loss function. Previous work has shown that rank-based estimators may be written as solutions to linear programming (LP) problems. However, the size of the LP problem is O(n² + p) subject to n² linear constraints, where n denotes the sample size and p the dimension of the parameters. As n and/or p increases, the feasibility of such a solution in practice becomes questionable. Among data mining and statistical learning enthusiasts, there is interest in extending ordinary regression coefficient estimators from low dimensions to high-dimensional data mining tools through regularization. Applying this recipe to rank-based coefficient estimators leads to formidable optimization problems, which may be avoided through smooth approximations to the non-smooth functions. We review smooth approximations and quasi-Newton methods for rank-based estimation in AFT models. The computational cost of our method is substantially smaller than that of the corresponding LP problem, and the method can be applied to small- and large-scale problems alike. The algorithm described here allows one to couple rank-based estimation for censored data with virtually any regularization and is exemplified through four case studies.
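To illustrate the smooth-approximation idea (not the authors' exact implementation or its regularized extensions), the sketch below replaces the positive-part function in the Gehan rank-based loss with a softplus surrogate and minimizes it with a quasi-Newton routine; the smoothing constant h and the simulated data are assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_gehan_loss(beta, X, logT, delta, h=0.05):
    """Smooth surrogate for the Gehan rank-based loss in the AFT model.

    e_i = log(T_i) - x_i' beta are residuals; delta_i = 1 for observed events,
    0 for right-censored times. max(u, 0) is replaced by the softplus
    h * log(1 + exp(u / h)), which makes the loss differentiable.
    """
    e = logT - X @ beta
    diff = e[None, :] - e[:, None]              # diff[i, j] = e_j - e_i
    smooth_pos = h * np.logaddexp(0.0, diff / h)
    return np.sum(delta[:, None] * smooth_pos) / len(e) ** 2

# Toy AFT data: log T = 1 + 0.5*x1 - 0.5*x2 + error, with random right-censoring
rng = np.random.default_rng(1)
n, beta_true = 200, np.array([0.5, -0.5])
X = rng.normal(size=(n, 2))
logT = 1.0 + X @ beta_true + 0.3 * rng.gumbel(size=n)
logC = rng.normal(2.0, 1.0, size=n)             # censoring times on the log scale
delta = (logT <= logC).astype(float)
logY = np.minimum(logT, logC)

fit = minimize(smoothed_gehan_loss, x0=np.zeros(2),
               args=(X, logY, delta), method="L-BFGS-B")
print(fit.x)   # should be roughly (0.5, -0.5); the intercept is not identified
```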

8.
We describe applications of computational algebra to statistical problems of parameter identifiability, sufficiency, and estimation. The methods work for a family of statistical models that includes Poisson and binomial examples in network tomography.

9.
Maximum a posteriori estimation (MAPE) and maximum likelihood estimation (MLE) are both important methods of parameter point estimation. Building on an introduction to the MAPE method for the general hierarchical linear model (HLM), this paper gives the concrete steps of the expectation-maximization (EM) algorithm for this method and uses the second derivative of the log-likelihood function to derive a variance estimator for the MAPE. A simulation study is then used to compare MAPE and MLE under the EM algorithm. For the estimation of fixed effects, the two methods yield identical estimates. When the number of groups is small, the variance-covariance components computed by the EM-based MAPE are closer to the true values than those of the MLE, and the MAPE requires noticeably fewer iterations than the MLE.
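The stabilizing effect of MAP relative to ML point estimation can be seen in a much simpler conjugate analogue (an assumed illustration, not the HLM/EM setting of the paper): estimating a normal variance from few observations under an inverse-gamma prior.

```python
import numpy as np

def variance_mle(x, mu):
    """ML estimate of sigma^2 with known mean mu."""
    return np.mean((x - mu) ** 2)

def variance_map(x, mu, a=2.0, b=2.0):
    """MAP estimate of sigma^2 under an inverse-gamma(a, b) prior.

    The posterior is inverse-gamma(a + n/2, b + SS/2), whose mode is
    (b + SS/2) / (a + n/2 + 1); the prior pulls the estimate toward b/(a+1).
    """
    ss = np.sum((x - mu) ** 2)
    n = len(x)
    return (b + ss / 2) / (a + n / 2 + 1)

rng = np.random.default_rng(42)
for n in (5, 50, 500):
    x = rng.normal(0.0, 1.0, size=n)      # true sigma^2 = 1
    print(n, round(variance_mle(x, 0.0), 3), round(variance_map(x, 0.0), 3))
# With few observations the MAP estimate is typically more stable than the MLE;
# as n grows the two estimates converge.
```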

10.
Hierarchical spatio-temporal models allow for the consideration and estimation of many sources of variability. A general spatio-temporal model can be written as the sum of a spatio-temporal trend and a spatio-temporal random effect. When spatial locations are considered to be homogeneous with respect to some exogenous features, the groups of locations may share a common spatial domain. Differences between groups can be highlighted both in the large-scale, spatio-temporal component and in the spatio-temporal dependence structure. When these differences are not included in the model specification, model performance and spatio-temporal predictions may be weak. This paper proposes a method for evaluating and comparing models that progressively include group differences. Hierarchical modeling under a Bayesian perspective is followed, allowing flexible models and the statistical assessment of results based on posterior predictive distributions. This procedure is applied to tropospheric ozone data in the Italian Emilia–Romagna region for 2001, where 30 monitoring sites are classified according to environmental laws into two groups by their relative position with respect to traffic emissions.

11.
This paper investigates the problem of parameter estimation in a statistical model when the observations are intervals assumed to be related to underlying crisp realizations of a random sample. The proposed approach relies on extending the likelihood function to the interval setting. A maximum likelihood estimate of the parameter of interest may then be defined as a crisp value maximizing the generalized likelihood function. Using the expectation-maximization (EM) algorithm to solve this maximization problem yields the so-called interval-valued EM algorithm (IEM), which makes it possible to solve a wide range of statistical problems involving interval-valued data. To show the performance of the IEM, two classical problems are illustrated: univariate normal mean and variance estimation from interval-valued samples, and multiple linear/nonlinear regression with crisp inputs and interval output.
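For the first of those two illustrations, a standard EM scheme for interval-censored normal data can be sketched as follows. This is a minimal sketch in the spirit of the IEM, assuming each interval contains an underlying crisp normal draw; it is not necessarily the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import norm

def em_interval_normal(lo, hi, n_iter=200, tol=1e-8):
    """EM estimation of a normal mean and variance when each observation is
    only known to lie in the interval [lo_i, hi_i].

    E-step: first two moments of the latent crisp value under the current fit,
            i.e. moments of a normal truncated to [lo_i, hi_i].
    M-step: closed-form normal updates using those expected moments.
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    mid = (lo + hi) / 2
    mu, sigma = mid.mean(), mid.std() + 1e-3          # crude starting values
    for _ in range(n_iter):
        a, b = (lo - mu) / sigma, (hi - mu) / sigma
        Z = np.maximum(norm.cdf(b) - norm.cdf(a), 1e-300)
        ratio = (norm.pdf(a) - norm.pdf(b)) / Z
        ex = mu + sigma * ratio                                        # E[x_i]
        var_trunc = sigma**2 * (1 + (a*norm.pdf(a) - b*norm.pdf(b)) / Z - ratio**2)
        ex2 = var_trunc + ex**2                                        # E[x_i^2]
        mu_new = ex.mean()
        sigma_new = np.sqrt(np.maximum(ex2.mean() - mu_new**2, 1e-12))
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            return mu_new, sigma_new
        mu, sigma = mu_new, sigma_new
    return mu, sigma

# Toy data: crisp draws from N(2, 1) observed only as +/- 0.5 intervals
rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.0, size=300)
print(em_interval_normal(x - 0.5, x + 0.5))   # estimates close to (2, 1)
```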

12.
13.
We consider a general class of prior distributions for nonparametric Bayesian estimation which uses finite random series with a random number of terms. A prior is constructed through distributions on the number of basis functions and the associated coefficients. We derive a general result on adaptive posterior contraction rates for all smoothness levels of the target function in the true model by constructing an appropriate ‘sieve’ and applying the general theory of posterior contraction rates. We apply this general result to several statistical problems such as density estimation, various nonparametric regressions, classification, spectral density estimation and functional regression. The prior can be viewed as an alternative to the commonly used Gaussian process prior, but properties of the posterior distribution can be analysed by relatively simpler techniques. An interesting approximation property of B-spline basis expansion established in this paper allows a canonical choice of prior on coefficients in a random series and allows a simple computational approach without using Markov chain Monte Carlo methods. A simulation study is conducted to show that the accuracy of the Bayesian estimators based on the random series prior and the Gaussian process prior is comparable. We apply the method to Tecator data using functional regression models.

14.
In survey sampling, the estimation or prediction of characteristics of both the population and its subpopulations (domains) is one of the key issues. In the case of estimating or predicting domain characteristics, one problem is finding additional sources of information that can be used to increase the accuracy of estimators or predictors. One such source may be spatial and temporal autocorrelation. For mean squared error (MSE) estimation, the standard assumption is that random variables are independent for population elements from different domains; under this assumption, spatial correlation may be allowed only within domains. In this paper, we assume a special case of the linear mixed model with two random components that obey the assumptions of a first-order spatial autoregressive model, SAR(1) (but within groups of domains instead of within domains), and a first-order temporal autoregressive model, AR(1). Based on this model, the empirical best linear unbiased predictor is proposed together with an estimator of its MSE that takes the spatial correlation between domains into account.

15.
We present an application study which exemplifies a cutting-edge statistical approach for detecting climate regime shifts. The algorithm uses Bayesian computational techniques that make time-efficient analysis of large volumes of climate data possible. Output includes probabilistic estimates of the number and duration of regimes, the number and probability distribution of hidden states, and the probability of a regime shift in any year of the time series. Analysis of the Pacific Decadal Oscillation (PDO) index is provided as an example. Two states are detected: one is associated with positive values of the PDO and presents lower interannual variability, while the other corresponds to negative values of the PDO and greater variability. We compare this approach with existing alternatives from the literature and highlight the potential for ours to unlock features hidden in climate data.

16.
Biomarkers play an increasingly important role in many aspects of pharmaceutical discovery and development, including personalized medicine and the assessment of safety data, with heavy reliance being placed on their delivery. Statisticians have a fundamental role to play in ensuring that biomarkers and the data they generate are used appropriately to address relevant objectives, such as the estimation of biological effects or the forecasting of outcomes, so that claims of predictivity or surrogacy are made only on the basis of sound scientific arguments. This includes ensuring that studies are designed to answer specific and pertinent questions, that the analyses performed account for all levels and sources of variability, and that the conclusions drawn are robust in the presence of multiplicity and confounding factors, especially as many biomarkers are multidimensional or may be an indirect measure of the clinical outcome. In all of these areas, as in any area of drug development, statistical best practice, incorporating both scientific rigor and a practical understanding of the situation, should be followed. This article is intended as an introduction for statisticians embarking upon biomarker-based work and discusses these issues from a practising statistician's perspective with reference to examples.

17.
The estimation of variance-covariance matrices through optimization of an objective function, such as a log-likelihood function, is usually a difficult numerical problem. Since the estimates should be positive semi-definite matrices, we must use constrained optimization, or employ a parametrization that enforces this condition. We describe here five different parametrizations for variance-covariance matrices that ensure positive definiteness, thus leaving the estimation problem unconstrained. We compare the parametrizations based on their computational efficiency and statistical interpretability. The results described here are particularly useful in maximum likelihood and restricted maximum likelihood estimation in linear and non-linear mixed-effects models, but are also applicable to other areas of statistics.
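One widely used parametrization of this kind, the log-Cholesky form, can be sketched as follows (a minimal illustration under assumed function names, not the paper's full comparison of five parametrizations): every unconstrained real vector maps to a positive-definite matrix, so a likelihood can be optimized without constraints.

```python
import numpy as np

def theta_to_cov(theta, d):
    """Map an unconstrained vector theta of length d*(d+1)/2 to a positive-
    definite covariance matrix via the log-Cholesky parametrization: the
    diagonal of the Cholesky factor is exp(.) of the first d entries, and the
    strict lower triangle holds the remaining entries."""
    L = np.zeros((d, d))
    L[np.diag_indices(d)] = np.exp(theta[:d])        # positive diagonal
    L[np.tril_indices(d, k=-1)] = theta[d:]          # unconstrained off-diagonals
    return L @ L.T

def cov_to_theta(S):
    """Inverse map: unconstrained parameters of a positive-definite matrix S."""
    L = np.linalg.cholesky(S)
    d = S.shape[0]
    return np.concatenate([np.log(np.diag(L)), L[np.tril_indices(d, k=-1)]])

# Round trip on an arbitrary covariance matrix
S = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.0, 0.2],
              [0.3, 0.2, 1.5]])
theta = cov_to_theta(S)
print(np.allclose(theta_to_cov(theta, 3), S))        # True
# An optimizer can now search freely over theta; every theta yields a valid
# positive-definite covariance matrix, so no explicit constraints are needed.
```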

18.
Users of statistical packages need to be aware of the influence that outlying data points can have on their statistical analyses. Robust procedures provide formal methods to spot these outliers and reduce their influence. Although a few robust procedures are mentioned in this article, one is emphasized; it is motivated by maximum likelihood estimation so that it seems more natural. The use of this procedure in regression problems is considered in some detail, and an approximate error structure is stated for the robust estimates of the regression coefficients. A few examples are given. A suggestion of how these techniques should be implemented in practice is included.
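The abstract does not name the emphasized procedure, so as a generic illustration of likelihood-motivated robust regression, here is a sketch of Huber M-estimation fitted by iteratively reweighted least squares; the tuning constant and simulated data are assumptions.

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50, tol=1e-8):
    """Robust regression via Huber M-estimation, solved by iteratively
    reweighted least squares (IRLS). Observations with residuals beyond
    k * scale are down-weighted, limiting the influence of outliers."""
    X = np.column_stack([np.ones(len(y)), X])        # add intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from ordinary LS
    for _ in range(n_iter):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust MAD scale
        u = np.abs(r) / max(scale, 1e-12)
        w = np.where(u <= k, 1.0, k / u)             # Huber weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(WX.T @ X, WX.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Data with a few gross outliers: OLS is pulled toward them, Huber is not
rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.8 * x + rng.normal(0, 0.5, 100)
y[:5] += 25.0                                        # contaminate five points
print(huber_irls(x, y))                              # close to (2.0, 0.8)
```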

19.
Generalized aberration (GA) is one of the most frequently used criteria to quantify the suitability of an orthogonal array (OA) to be used as an experimental design. The two main motivations for GA are that it quantifies bias in a main-effects only model and that it is a good surrogate for estimation efficiencies of models with all the main effects and some two-factor interaction components. We demonstrate that these motivations are not appropriate for three-level OAs of strength 3 and we propose a direct classification with other criteria instead. To illustrate, we classified complete series of three-level strength-3 OAs with 27, 54 and 81 runs using the GA criterion, the rank of the matrix with two-factor interaction contrasts, the estimation efficiency of two-factor interactions, the projection estimation capacity, and a new model robustness criterion. For all of the series, we provide a list of admissible designs according to these criteria.

20.
This article advocates the following perspective: When confronting a scientific problem, the field of statistics enters by viewing the problem as one where the scientific answer could be calculated if some missing data, hypothetical or real, were available. Thus, statistical effort should be devoted to three steps:
  1. formulate the missing data that would allow this calculation,
  2. stochastically fill in these missing data, and
  3. do the calculations as if the filled-in data were available.
This presentation discusses: conceptual benefits, such as for causal inference using potential outcomes; computational benefits, such as those afforded by using the EM algorithm and related data augmentation methods based on MCMC; and inferential benefits, such as valid interval estimation and assessment of assumptions based on multiple imputation.
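The three steps above can be made concrete with a small multiple-imputation sketch (an assumed illustration, not taken from the article): the missing values of y are formulated as the missing data, filled in stochastically from a regression imputation model, and the complete-data calculation (here, the mean of y) is carried out on each filled-in data set and combined with Rubin's rules.

```python
import numpy as np

def multiple_imputation_mean(x, y, M=20, seed=0):
    """Estimate the mean of y when some y values are missing at random given x,
    by drawing imputations from a normal linear regression of y on x and then
    analysing each completed data set as if fully observed (Rubin's rules)."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones(len(x)), x])
    Xo, yo = X[obs], y[obs]
    # Complete-case fit of the imputation model y = b0 + b1*x + eps
    beta_hat, *_ = np.linalg.lstsq(Xo, yo, rcond=None)
    resid = yo - Xo @ beta_hat
    dof = obs.sum() - 2
    sigma2_hat = resid @ resid / dof
    cov_beta = sigma2_hat * np.linalg.inv(Xo.T @ Xo)

    estimates, variances = [], []
    for _ in range(M):
        # Step 2: draw parameters, then draw the missing y's ("proper" imputation)
        sigma2 = sigma2_hat * dof / rng.chisquare(dof)
        beta = rng.multivariate_normal(beta_hat, cov_beta * sigma2 / sigma2_hat)
        y_comp = y.copy()
        y_comp[~obs] = X[~obs] @ beta + rng.normal(0, np.sqrt(sigma2), (~obs).sum())
        # Step 3: complete-data analysis (mean of y and its sampling variance)
        estimates.append(y_comp.mean())
        variances.append(y_comp.var(ddof=1) / len(y_comp))
    Q, W = np.mean(estimates), np.mean(variances)
    B = np.var(estimates, ddof=1)
    return Q, W + (1 + 1 / M) * B      # Rubin's combining rules

# Toy data: y depends on x, and y is more likely to be missing for large x,
# so the complete-case mean is biased while the imputation-based mean is not.
rng = np.random.default_rng(4)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 500)
y[rng.random(500) < 0.2 + 0.3 * (x > 0)] = np.nan
print(multiple_imputation_mean(x, y))    # point estimate near 1.0, plus variance
```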
