Similar Literature
1.
Two stopping rules are defined for the purpose of minimizing the number of iterations needed to provide simulated percentile points with a certain precision: one results from defining precision relative to the scale of the random variable, while the other results from defining precision relative to the tail area of the distribution. A simulation experiment is conducted to investigate the effects of the stopping rules as well as the effects of changes in scale. The effects of interest are the precision of the simulated percentile point and the number of iterations needed to achieve that precision. It is shown that the stopping rules are effective in reducing the number of iterations while providing an acceptable precision in the percentile points. Also, increases in scale produce increases in the number of iterations and/or decreases in certain measures of precision.
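A minimal sketch (not the authors' code) of a scale-relative stopping rule of this kind: simulate in batches, approximate the standard error of the sample percentile from the asymptotic quantile formula, and stop once the half-width falls below a tolerance expressed as a fraction of the sample standard deviation. The batch size, tolerance, and the `draw` interface are illustrative assumptions.

```python
import numpy as np

def simulate_percentile(draw, p=0.95, rel_tol=0.01, batch=1000,
                        max_n=200_000, seed=1):
    """Draw in batches until the p-th percentile is precise relative to scale.

    `draw(rng, n)` must return n simulated values.  Stops when the
    normal-approximation half-width of the percentile estimate drops below
    rel_tol times the sample standard deviation (a scale-relative rule).
    """
    rng = np.random.default_rng(seed)
    x = draw(rng, batch)
    while x.size < max_n:
        n = x.size
        q = np.quantile(x, p)
        # density at the quantile via a crude finite difference of the
        # empirical quantile function: f(q) ~ 2*eps / (Q(p+eps) - Q(p-eps))
        eps = 0.01
        f_hat = 2 * eps / (np.quantile(x, p + eps) - np.quantile(x, p - eps))
        half_width = 1.96 * np.sqrt(p * (1 - p) / n) / f_hat
        if half_width < rel_tol * x.std(ddof=1):
            return q, n
        x = np.concatenate([x, draw(rng, batch)])
    return np.quantile(x, p), x.size

# Example: 95th percentile of a standard normal (true value is about 1.645)
q_hat, n_used = simulate_percentile(lambda rng, n: rng.standard_normal(n))
print(q_hat, n_used)
```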

2.
This paper examines the use of bootstrapping for bias correction and calculation of confidence intervals (CIs) for a weighted nonlinear quantile regression estimator adjusted to the case of longitudinal data. Different weights and types of CIs are used and compared by computer simulation using a logistic growth function and error terms following an AR(1) model. The results indicate that bias correction reduces the bias of a point estimator but fails for CI calculations. A bootstrap percentile method and a normal approximation method perform well for two weights when used without bias correction. Taking both coverage and lengths of CIs into consideration, a non-bias-corrected percentile method with an unweighted estimator performs best.
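The percentile interval that comes out best in this study can be sketched generically. The cluster (subject-level) resampling below is an assumption meant to respect within-subject AR(1) dependence, and `estimator` is a hypothetical stand-in for the weighted nonlinear quantile regression fit, which is not reproduced here.

```python
import numpy as np

def percentile_ci(data_by_subject, estimator, B=2000, alpha=0.05, seed=0):
    """Cluster (subject-level) bootstrap percentile CI, no bias correction.

    data_by_subject: list with one array per subject (keeps within-subject
    dependence intact); estimator: callable returning a scalar estimate
    from such a list.
    """
    rng = np.random.default_rng(seed)
    m = len(data_by_subject)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, m, size=m)       # resample subjects with replacement
        reps[b] = estimator([data_by_subject[i] for i in idx])
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return estimator(data_by_subject), (lo, hi)

# Toy usage: the "estimator" is the median of subject-level slopes, standing
# in for the weighted nonlinear quantile regression estimator of the paper.
rng = np.random.default_rng(1)
subjects = [np.cumsum(rng.normal(1.0, 0.3, size=8)) for _ in range(30)]
est = lambda subs: np.median([np.polyfit(np.arange(len(s)), s, 1)[0] for s in subs])
print(percentile_ci(subjects, est))
```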

3.
Cut-off sampling has been widely used for business surveys, whose populations are right-skewed with a long tail. Several methods have been suggested for obtaining the optimal cut-off point. The LH algorithm suggested by Lavallee and Hidiroglou [6] is commonly used to obtain the optimum boundaries by minimizing the total sample size for a given precision. In this paper, we suggest a new cut-off point determination method that minimizes a cost function, which leads to a smaller take-all stratum. We also investigate the optimal cut-off point using a typical parametric estimation method under assumptions about the underlying distributions. Small Monte Carlo simulation studies are performed to compare the new cut-off point method with the LH algorithm. The Korea Transportation Origin-Destination data are used for a real data analysis.
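A hedged illustration of the cost-minimising idea (not the authors' cost function): scan candidate cut-offs, treat units at or above the cut-off as take-all, size an SRS in the take-some stratum to meet a target CV of the estimated population total, and keep the cut-off with the smallest linear cost. The unit costs and the CV target are invented values.

```python
import numpy as np

def choose_cutoff(y, cv_target=0.03, cost_take_all=5.0, cost_sample=1.0):
    """Return the cut-off minimising an illustrative linear cost function.

    Units with y >= cutoff form the take-all stratum; the take-some stratum
    is sampled by SRS with the size needed to keep the CV of the estimated
    population total below cv_target.
    """
    y = np.sort(np.asarray(y, dtype=float))
    total = y.sum()
    best = None
    for cutoff in np.unique(y):
        some = y[y < cutoff]
        n_all = y.size - some.size
        if some.size < 2:
            continue
        s2 = some.var(ddof=1)
        N = some.size
        # SRS sample size so that sd(estimated total) <= cv_target * total
        n = N**2 * s2 / ((cv_target * total) ** 2 + N * s2)
        cost = cost_take_all * n_all + cost_sample * np.ceil(n)
        if best is None or cost < best[0]:
            best = (cost, cutoff, n_all, int(np.ceil(n)))
    return best  # (cost, cutoff, take-all size, take-some sample size)

# Toy skewed "business" population
rng = np.random.default_rng(0)
print(choose_cutoff(rng.lognormal(mean=3, sigma=1.2, size=5000)))
```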

4.
We respond to recent authors' criticism of bootstrap confidence intervals for the correlation coefficient by arguing that, in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain "transformed percentile-t" intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15, and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient that have good coverage accuracy (unlike ordinary percentile intervals) and stable lengths and endpoints (unlike ordinary percentile-t intervals).
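The jackknife-Studentized transformed percentile-t interval can be sketched as follows. This is a generic reconstruction from the description above, not the authors' code; the clipping guard and the toy data are added assumptions.

```python
import numpy as np

def fisher_z(r):
    return np.arctanh(r)

def jack_se_z(x, y):
    """Jackknife standard error of the Fisher z-transformed correlation."""
    n = len(x)
    z = np.empty(n)
    for i in range(n):
        keep = np.delete(np.arange(n), i)
        z[i] = fisher_z(np.corrcoef(x[keep], y[keep])[0, 1])
    return np.sqrt((n - 1) / n * np.sum((z - z.mean()) ** 2))

def transformed_percentile_t_ci(x, y, B=1999, alpha=0.05, seed=0):
    """Percentile-t interval on the z scale, inverted back to the r scale."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z_hat = fisher_z(np.corrcoef(x, y)[0, 1])
    se_hat = jack_se_z(x, y)
    t_star = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        xb, yb = x[idx], y[idx]
        rb = np.clip(np.corrcoef(xb, yb)[0, 1], -0.999999, 0.999999)  # guard
        t_star[b] = (fisher_z(rb) - z_hat) / jack_se_z(xb, yb)
    lo_t, hi_t = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # percentile-t endpoints on the z scale, then invert the transformation
    return np.tanh(z_hat - hi_t * se_hat), np.tanh(z_hat - lo_t * se_hat)

rng = np.random.default_rng(42)
x = rng.normal(size=15)
y = 0.6 * x + 0.8 * rng.normal(size=15)
print(transformed_percentile_t_ci(x, y))
```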

5.
One of the indicators for evaluating the capability of a process is the process capability index. In this article, bootstrap confidence intervals of the generalized process capability index (GPCI) proposed by Maiti et al. are studied through simulation when the underlying distributions are the Lindley and Power Lindley distributions. The maximum likelihood method is used to estimate the parameters of the models. Three bootstrap confidence intervals, namely the standard bootstrap (SB), percentile bootstrap (PB), and bias-corrected percentile bootstrap (BCPB), are considered for obtaining confidence intervals of the GPCI. A Monte Carlo simulation has been used to investigate the estimated coverage probabilities and average widths of the bootstrap confidence intervals. Simulation results show that the estimated coverage probabilities of the percentile bootstrap confidence interval and the bias-corrected percentile bootstrap confidence interval are closer to the nominal confidence level than those of the standard bootstrap confidence interval. Finally, three real datasets are analyzed for illustrative purposes.
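A generic sketch of the three intervals compared in the article, written for an arbitrary scalar statistic; the `gpci_plugin` function is a hypothetical stand-in for the GPCI of Maiti et al., and the maximum likelihood fitting of the Lindley models is omitted.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_cis(x, stat, B=2000, alpha=0.05, seed=0):
    """Standard (SB), percentile (PB) and bias-corrected percentile (BCPB)
    bootstrap intervals for a scalar statistic `stat`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    theta = stat(x)
    boot = np.array([stat(x[rng.integers(0, x.size, x.size)]) for _ in range(B)])
    z = norm.ppf(1 - alpha / 2)
    sb = (theta - z * boot.std(ddof=1), theta + z * boot.std(ddof=1))
    pb = tuple(np.quantile(boot, [alpha / 2, 1 - alpha / 2]))
    z0 = norm.ppf(np.mean(boot < theta))             # bias-correction constant
    lo_p, hi_p = norm.cdf(2 * z0 - z), norm.cdf(2 * z0 + z)
    bcpb = tuple(np.quantile(boot, [lo_p, hi_p]))
    return {"SB": sb, "PB": pb, "BCPB": bcpb}

# Toy usage with a hypothetical plug-in index P(L < X < U) for fixed specs,
# standing in for the GPCI; the gamma sample is illustrative only.
rng = np.random.default_rng(7)
sample = rng.gamma(shape=2.0, scale=1.5, size=80)
gpci_plugin = lambda d: np.mean((d > 0.5) & (d < 8.0))
print(bootstrap_cis(sample, gpci_plugin))
```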

6.
Shelf life is a specified percentile of the time-until-spoilage distribution of a food product. This paper investigates statistical properties of various estimators of shelf life and develops a genetic algorithm for finding near-optimal staggered designs for estimation of shelf life. MLEs and their associated confidence intervals for shelf life have smaller bias, better performance, and better coverage than the corresponding ad hoc regression-based estimates. However, performance of MLEs for common sample sizes must be evaluated by simulation. The genetic algorithm, coded as an SAS macro, searched the design space well and generated near-optimal designs as measured by improvement to a simulation-based performance measure.

7.
In the literature, the Lindley distribution is considered as an alternative to the exponential distribution for fitting lifetime data. In the present work, a Lindley step-stress model with independent causes of failure is proposed. An algorithm to generate random samples from the proposed model under a Type-I censoring scheme is developed. Point and interval estimation of the model parameters is carried out using the maximum likelihood method and a percentile bootstrap approach. To demonstrate the effectiveness of the resulting estimates, numerical illustrations are provided based on simulated and real-life data sets.
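As a partial illustration only (single stress level, no competing risks), Lindley variates can be generated through the distribution's exponential/gamma mixture representation and then subjected to Type-I censoring; the paper's full step-stress generator is not reproduced here.

```python
import numpy as np

def rlindley(theta, size, rng):
    """Lindley(theta) variates via the mixture form: with probability
    theta/(1+theta) draw Exp(rate=theta), otherwise Gamma(shape=2, rate=theta)."""
    p = theta / (1 + theta)
    expo = rng.exponential(scale=1 / theta, size=size)
    gam = rng.gamma(shape=2, scale=1 / theta, size=size)
    return np.where(rng.random(size) < p, expo, gam)

def type1_censored_sample(theta, n, tau, seed=0):
    """Lifetimes censored at tau: observed times and censoring indicators."""
    rng = np.random.default_rng(seed)
    t = rlindley(theta, n, rng)
    observed = np.minimum(t, tau)
    delta = (t <= tau).astype(int)    # 1 = failure observed, 0 = censored at tau
    return observed, delta

obs, delta = type1_censored_sample(theta=0.8, n=50, tau=3.0)
print(obs[:5], delta[:5], delta.mean())
```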

8.
The classical approach to statistical analysis is usually based upon finding values for model parameters that maximize the likelihood function. Model choice in this context is often also based on the likelihood function, but with the addition of a penalty term for the number of parameters. Though models may be compared pairwise by using likelihood ratio tests for example, various criteria such as the Akaike information criterion have been proposed as alternatives when multiple models need to be compared. In practical terms, the classical approach to model selection usually involves maximizing the likelihood function associated with each competing model and then calculating the corresponding criteria value(s). However, when large numbers of models are possible, this quickly becomes infeasible unless a method that simultaneously maximizes over both parameter and model space is available. We propose an extension to the traditional simulated annealing algorithm that allows for moves that not only change parameter values but also move between competing models. This transdimensional simulated annealing algorithm can therefore be used to locate models and parameters that minimize criteria such as the Akaike information criterion, but within a single algorithm, removing the need for large numbers of simulations to be run. We discuss the implementation of the transdimensional simulated annealing algorithm and use simulation studies to examine its performance in realistically complex modelling situations. We illustrate our ideas with a pedagogic example based on the analysis of an autoregressive time series and two more detailed examples: one on variable selection for logistic regression and the other on model selection for the analysis of integrated recapture–recovery data.
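A toy version of the idea, using Gaussian linear regression with variable-inclusion indicators rather than the paper's examples: the annealed acceptance step compares AIC values, and the move set mixes within-model parameter perturbations with between-model flips. The move probabilities, step sizes, and cooling schedule are arbitrary choices.

```python
import numpy as np

def aic(y, X, beta, sigma):
    """AIC of a Gaussian linear model evaluated at (beta, sigma)."""
    resid = y - X @ beta
    n = y.size
    loglik = -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(resid**2) / sigma**2
    return -2 * loglik + 2 * (beta.size + 1)     # +1 parameter for sigma

def transdimensional_sa(y, X, n_iter=20000, T0=5.0, cool=0.9995, seed=0):
    """Anneal jointly over variable subsets (model space) and coefficients."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    gamma = np.ones(p, dtype=bool)               # inclusion indicators
    beta = np.zeros(p)
    sigma = y.std()
    cur = aic(y, X[:, gamma], beta[gamma], sigma)
    best = (cur, gamma.copy(), beta.copy(), sigma)
    T = T0
    for _ in range(n_iter):
        g_new, b_new, s_new = gamma.copy(), beta.copy(), sigma
        if rng.random() < 0.3:                   # between-model move: flip one variable
            j = rng.integers(p)
            g_new[j] = ~g_new[j]
            b_new[j] = rng.normal() if g_new[j] else 0.0
        else:                                    # within-model move: perturb parameters
            b_new[g_new] += rng.normal(scale=0.1, size=g_new.sum())
            s_new = abs(s_new + rng.normal(scale=0.05))
        new = aic(y, X[:, g_new], b_new[g_new], s_new)
        if np.log(rng.random()) < (cur - new) / T:   # annealed Metropolis acceptance
            gamma, beta, sigma, cur = g_new, b_new, s_new, new
            if cur < best[0]:
                best = (cur, gamma.copy(), beta.copy(), sigma)
        T *= cool
    return best

# Toy data: only variables 0 and 2 matter
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=200)
aic_best, gamma_best, *_ = transdimensional_sa(y, X)
print(aic_best, gamma_best)
```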

9.
The problems of estimating the mean and an upper percentile of a lognormal population with nonnegative values are considered. For estimating the mean of such a population based on data that include zeros, a simple confidence interval (CI) obtained by modifying Tian's [Inferences on the mean of zero-inflated lognormal data: the generalized variable approach. Stat Med. 2005;24:3223-3232] generalized CI is proposed. A fiducial upper confidence limit (UCL) and a closed-form approximate UCL for an upper percentile are developed. Our simulation studies indicate that the proposed methods are very satisfactory in terms of coverage probability and precision, and better than existing methods at maintaining balanced tail error rates. The proposed CI and UCL are simple and easy to calculate. All the methods considered are illustrated using samples of data involving airborne chlorine concentrations and data on diagnostic test costs.

10.
Duplicate analysis is a strategy commonly used to assess the precision of bioanalytical methods. In some cases, duplicate analysis may rely on pooling data generated across organizations. Despite being generated under comparable conditions, organizations may produce duplicate measurements with different precision, so the pooled data consist of a heterogeneous collection of duplicate measurements. Precision estimates are often expressed as relative difference indexes (RDI), such as the relative percentage difference (RPD). Empirical evidence indicates that the frequency distribution of RDI values from heterogeneous data exhibits sharper peaks and heavier tails than normal distributions; traditional normal-based models may therefore yield faulty or unreliable estimates of precision from heterogeneous duplicate data. In this paper, we survey the application of mixture models that satisfactorily represent the distribution of RDI values from heterogeneous duplicate data. A simulation study was conducted to compare the performance of the different models in providing reliable estimates and inferences for percentiles calculated from RDI values. These models are readily accessible to practitioners through modern statistical software. The utility of mixture models is explained in detail using a numerical example.
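One simple member of the family of models discussed, shown purely as an illustration: RPD values computed from duplicate pairs and a two-component zero-mean normal scale mixture fitted by EM, which reproduces the sharp peak and heavy tails described above. The starting values and toy data are assumptions, and the paper surveys a wider range of mixtures than this.

```python
import numpy as np

def rpd(x1, x2):
    """Relative percentage difference of duplicate measurements."""
    return 200.0 * (x1 - x2) / (x1 + x2)

def fit_scale_mixture(r, n_iter=200):
    """EM for a two-component zero-mean normal scale mixture of RPD values
    (a sharp-peak component plus a heavy-tail component)."""
    r = np.asarray(r, dtype=float)
    w, s1, s2 = 0.7, r.std() / 2, r.std() * 2        # crude starting values
    for _ in range(n_iter):
        d1 = w * np.exp(-0.5 * (r / s1) ** 2) / s1   # shared constants cancel
        d2 = (1 - w) * np.exp(-0.5 * (r / s2) ** 2) / s2
        g = d1 / (d1 + d2)                           # E-step: responsibilities
        w = g.mean()                                 # M-step
        s1 = np.sqrt(np.sum(g * r**2) / np.sum(g))
        s2 = np.sqrt(np.sum((1 - g) * r**2) / np.sum(1 - g))
    return w, s1, s2

# Toy heterogeneous duplicates: two sources with different precision
rng = np.random.default_rng(3)
r = np.concatenate([rng.normal(0, 2, 700), rng.normal(0, 8, 300)])
print(fit_scale_mixture(r))
```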

11.
The inverse Gamma-Pareto composite distribution is considered as a model for heavy-tailed data. The maximum likelihood (ML), smoothed empirical percentile (SM), and Bayes estimators (informative and non-informative) for the parameter θ, which is the boundary point of the supports of the two distributions, are derived. A Bayesian predictive density is derived via a gamma prior for θ, and this density is used to estimate risk measures. The accuracy of the estimators of θ and of the risk measures is assessed via simulation studies. It is shown that the informative Bayes estimator is consistently more accurate than the ML, smoothed, and non-informative Bayes estimators.

12.
We consider two problems concerning the location of change points in a linear regression model: one involves jump discontinuities (change points) in the regression model, and the other involves regression lines connected at unknown points. We compare four methods for estimating single or multiple change points in a regression model when both the error variance and the regression coefficients change simultaneously at the unknown point(s): the Bayesian, Julious, grid search, and segmented methods. The methods are evaluated via a simulation study and compared using standard measures of estimation bias and precision. Finally, the methods are illustrated and compared using three real data sets. The simulation and empirical results overall favor both the segmented and Bayesian methods, which simultaneously estimate the change point and the other model parameters, though only the Bayesian method is able to handle both continuous and discontinuous change-point problems successfully. If the regression lines are known to be continuous, the segmented method ranks first among the methods considered.
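A bare-bones version of the grid-search method for a single change point (the other three methods are not sketched here): fit separate least-squares lines on either side of each candidate split and keep the split with the smallest total SSE. The minimum segment size and the toy data are illustrative assumptions.

```python
import numpy as np

def grid_search_changepoint(x, y, min_seg=5):
    """Grid search over candidate splits for a single change point.

    Fits separate OLS lines on each side of every split, so both the
    coefficients and the error variance may differ across segments."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n = len(x)
    best = (np.inf, None)
    for k in range(min_seg, n - min_seg):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if sse < best[0]:
            best = (sse, x[k])                 # change point at the segment boundary
    return best[1]

# Toy example: intercept and slope change near x = 5
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 120))
y = np.where(x < 5, 1 + 2 * x, 15 - x) + rng.normal(0, 0.5, 120)
print(grid_search_changepoint(x, y))
```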

13.
Several authors discuss how the simulated tempering scheme provides a very simple mechanism for introducing regenerations within a Markov chain. In this article we explain how regenerative simulated tempering schemes provide a very natural mechanism for perfect simulation. We use this to provide a perfect simulation algorithm, which uses a single-sweep forward-simulation without the need for recursively searching through negative times. We demonstrate this algorithm in the context of several examples.

14.
An up-and-down (UD) experiment for estimating a given quantile of a binary response curve is a sequential procedure whereby at each step a given treatment level is used and, according to the outcome of the observations, a decision is made (deterministic or randomized) as to whether to maintain the same treatment or increase it by one level or else to decrease it by one level. The design points of such UD rules generate a Markov chain and the mode of its invariant distribution is an approximation to the quantile of interest. The main area of application of UD algorithms is in Phase I clinical trials, where it is of greatest importance to be able to attain reliable results in small-size experiments. In this paper we address the issues of the speed of convergence and the precision of quantile estimates of such procedures, both in theory and by simulation. We prove that the version of UD designs introduced in 1994 by Durham and Flournoy can in a large number of cases be regarded as optimal among all UD rules. Furthermore, in order to improve on the convergence properties of this algorithm, we propose a second-order UD experiment which, instead of making use of just the most recent observation, bases the next step on the outcomes of the last two. This procedure shares a number of desirable properties with the corresponding first-order designs, and also allows greater flexibility. With a suitable choice of the parameters, the new scheme is at least as good as the first-order one and leads to an improvement of the quantile estimates when the starting point of the algorithm is low relative to the target quantile.
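A sketch of a biased-coin UD rule of the Durham-Flournoy type, written from the general description above rather than from the paper itself (treat the specific rule as an assumption): on a toxic response step down, otherwise step up with probability gamma/(1-gamma), so that the mode of the visited levels approximates the level of the target quantile. The dose grid, sample size, and toxicity curve are invented.

```python
import numpy as np

def biased_coin_ud(p_tox, gamma=0.2, n=60, start=0, seed=0):
    """One biased-coin up-and-down run targeting the dose whose toxicity
    probability is gamma (assumed gamma <= 0.5).

    p_tox: toxicity probabilities per dose level (the unknown response curve).
    On toxicity move one level down; on no toxicity move up with probability
    gamma/(1-gamma), otherwise stay at the current level.
    """
    rng = np.random.default_rng(seed)
    b = gamma / (1 - gamma)
    level = start
    visits = np.zeros(len(p_tox), dtype=int)
    for _ in range(n):
        visits[level] += 1
        tox = rng.random() < p_tox[level]
        if tox:
            level = max(level - 1, 0)
        elif rng.random() < b:
            level = min(level + 1, len(p_tox) - 1)
    # the mode of the visited levels approximates the target quantile's level
    return visits, int(np.argmax(visits))

p_tox = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.75])
visits, mode_level = biased_coin_ud(p_tox, gamma=0.2)
print(visits, mode_level)
```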

15.
The adjusted r² algorithm is a popular automated method for selecting the start time of the terminal disposition phase (tz) when conducting a noncompartmental pharmacokinetic data analysis. Using simulated data, the performance of the algorithm was assessed in relation to the ratio of the slopes of the preterminal and terminal disposition phases, the point of intercept of the terminal disposition phase with the preterminal disposition phase, the length of the terminal disposition phase captured in the concentration-time profile, the number of data points present in the terminal disposition phase, and the level of variability in concentration measurement. The adjusted r² algorithm was unable to identify tz accurately when there were more than three data points present in a profile's terminal disposition phase. The terminal disposition phase rate constant (λz) calculated based on the value of tz selected by the algorithm had a positive bias in all simulation data conditions. Tolerable levels of bias (median bias less than 5%) were achieved under conditions of low measurement variability. When measurement variability was high, tolerable levels of bias were attained only when the terminal phase time span was 4 multiples of t1/2 or longer. A comparison of the performance of the adjusted r² algorithm, a simple r² algorithm, and tz selection by visual inspection was conducted using a subset of the simulation data. In the comparison, the simple r² algorithm performed as well as the adjusted r² algorithm and the visual inspection method outperformed both algorithms. Recommendations concerning the use of the various tz selection methods are presented.
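One common formulation of the adjusted-r² rule, treated here as an assumption since software implementations differ: regress log concentration on time over the last n points for increasing n and keep the largest n whose adjusted r² is within a small tolerance of the best value; the start of that window is tz and the negated slope is λz.

```python
import numpy as np

def select_tz_adjusted_r2(t, conc, min_points=3, tol=1e-4):
    """Pick the terminal-phase start time by the adjusted-r2 rule.

    Regresses log(C) on t over the last n points for n = min_points..N and,
    among fits whose adjusted r2 is within `tol` of the best, keeps the one
    using the most points.  Returns (t_z, lambda_z)."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    keep = conc > 0
    t, logc = t[keep], np.log(conc[keep])
    results = []
    for n in range(min_points, len(t) + 1):
        ts, ys = t[-n:], logc[-n:]
        slope, intercept = np.polyfit(ts, ys, 1)
        resid = ys - (slope * ts + intercept)
        r2 = 1 - resid.var() / ys.var()
        adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)
        results.append((adj_r2, n, ts[0], -slope))
    best_adj = max(r[0] for r in results)
    # among candidates within tol of the best, take the one using most points
    _, _, t_z, lam = max((r for r in results if r[0] >= best_adj - tol),
                         key=lambda r: r[1])
    return t_z, lam

# Toy biexponential concentration-time profile
t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24, 36, 48.0])
conc = 8 * np.exp(-0.9 * t) + 2 * np.exp(-0.08 * t)
print(select_tz_adjusted_r2(t, conc))
```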

16.
Based on progressive Type-I hybrid censored data, statistical analysis of a constant-stress accelerated life test (CS-ALT) for the generalized exponential (GE) distribution is discussed. The maximum likelihood estimates (MLEs) of the parameters and the reliability function are obtained with the EM algorithm, along with the observed Fisher information matrix, the asymptotic variance-covariance matrix of the MLEs, and the asymptotic unbiased estimate (AUE) of the scale parameter. Confidence intervals (CIs) for the parameters are derived using the asymptotic normality of the MLEs and the percentile bootstrap (Boot-p) method. Finally, the point estimates and interval estimates of the parameters are compared separately through Monte Carlo simulation.

17.
We present the parallel and interacting stochastic approximation annealing (PISAA) algorithm, a stochastic simulation procedure for global optimisation that extends and improves stochastic approximation annealing (SAA) by using population Monte Carlo ideas. The efficiency of the standard SAA algorithm depends crucially on its self-adjusting mechanism, which presents stability issues in high-dimensional or rugged optimisation problems. The proposed algorithm involves simulating a population of SAA chains that interact with each other in a manner that significantly improves the stability of the self-adjusting mechanism and the search for the global optimum in the sampling space, while inheriting SAA's desirable convergence properties when a square-root cooling schedule is used. It can be implemented in parallel computing environments to mitigate the computational overhead. As a result, PISAA can address complex optimisation problems that would be difficult for SAA to address satisfactorily. We demonstrate the good performance of the proposed algorithm on challenging applications including Bayesian network learning and protein folding. Our numerical comparisons suggest that PISAA outperforms simulated annealing, stochastic approximation annealing, and annealing evolutionary stochastic approximation Monte Carlo.

18.
In this article, static light scattering (SLS) measurements are processed to estimate the particle size distribution of particle systems, incorporating prior information obtained from an alternative experimental technique: scanning electron microscopy (SEM). For this purpose we propose two Bayesian schemes (one parametric and one non-parametric) to solve the stated light scattering problem, and we take advantage of the results obtained to summarize some features of the Bayesian approach in the context of inverse problems. The features presented in this article include the improvement of the results when useful prior information from an alternative experiment is considered instead of a non-informative prior, as occurs in a deterministic maximum likelihood estimation. This improvement is shown in terms of the accuracy and precision of the corresponding results, and also in terms of minimizing the effect of multiple minima by including significant information in the optimization. Both Bayesian schemes are implemented using Markov chain Monte Carlo methods. They have been developed on the basis of the Metropolis-Hastings (MH) algorithm using Matlab® and are tested with the analysis of simulated and experimental examples of concentrated and semi-concentrated particles. In the simulated examples, SLS measurements were generated using a rigorous model, while the inversion stage was solved using an approximate model in both schemes and also using the rigorous model in the parametric scheme. Priors from SEM micrographs were considered in both the simulated and the experimental settings; the simulated priors were obtained using a Monte Carlo routine. In addition to the presentation of these features of the Bayesian approach, some other topics are discussed, such as regularization and some implementation issues of the proposed schemes, among which we highlight the selection of the parameters used in the MH algorithm.
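Both schemes rest on a random-walk Metropolis-Hastings sampler; the sketch below shows that core in isolation, with a toy lognormal "size distribution" problem and a weakly informative prior standing in for the SEM-based prior and the light scattering forward model, none of which are reproduced here.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for a generic log-posterior
    (log-likelihood of the forward model plus log-prior)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    accepted = 0
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # MH acceptance ratio
            theta, lp = prop, lp_prop
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_iter

# Toy inverse problem: recover (mu, log_sigma) of a lognormal "size
# distribution" from data, with a weakly informative prior standing in
# for the SEM-derived prior of the article.
rng = np.random.default_rng(5)
data = rng.lognormal(mean=1.0, sigma=0.4, size=200)

def log_post(th):
    mu, log_s = th
    s = np.exp(log_s)
    loglik = np.sum(-np.log(data * s) - 0.5 * ((np.log(data) - mu) / s) ** 2)
    logprior = -0.5 * ((mu - 1.2) / 0.5) ** 2 - 0.5 * (log_s / 1.0) ** 2
    return loglik + logprior

chain, acc = metropolis_hastings(log_post, theta0=[0.5, 0.0])
print(chain[10000:].mean(axis=0), acc)
```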

19.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis-Hastings algorithm to construct an estimator in models where full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator in latent variable models in which the responses are conditionally independent given the latent variables (conditional or local independence). This property is employed in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the difference between the two approaches is pointed out. The paper closes with an illustration of the estimator on simulated and real-life data sets.

20.
This paper discusses panel data models with random effects. Exploiting the relationship between the asymmetric Laplace distribution and quantile regression, a Bayesian hierarchical quantile regression model is constructed. Through a decomposition of the asymmetric Laplace distribution, point and interval estimates of the model parameters are obtained under a Gibbs sampling algorithm. Simulation results show that, for panel data models with random effects, and especially when the errors are non-normal, the proposed method outperforms the traditional mean-regression approach. Finally, the new method is applied to an empirical study of regional economic and employment panel data for China, yielding information useful for macroeconomic regulation.
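The link the paper exploits, namely that maximizing an asymmetric-Laplace (ALD) likelihood is equivalent to minimizing the quantile-regression check loss, can be illustrated with a deliberately simplified sampler: a random-walk Metropolis step under the ALD working likelihood and a flat prior, with no random effects and no Gibbs decomposition. All data and tuning values are illustrative assumptions.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u) = u * (tau - 1{u<0})."""
    return u * (tau - (u < 0))

def ald_loglik(beta, y, X, tau, sigma=1.0):
    """Asymmetric-Laplace log-likelihood; maximizing it in beta is equivalent
    to minimizing the check loss, which is the connection used in the paper."""
    u = (y - X @ beta) / sigma
    return len(y) * np.log(tau * (1 - tau) / sigma) - np.sum(check_loss(u, tau))

def bayes_quantreg(y, X, tau=0.5, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis under the ALD working likelihood and a flat
    prior (a simplified stand-in for the hierarchical Gibbs sampler)."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    lp = ald_loglik(beta, y, X, tau)
    draws = np.empty((n_iter, beta.size))
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        lp_prop = ald_loglik(prop, y, X, tau)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        draws[i] = beta
    post = draws[n_iter // 2:]                       # discard burn-in
    return post.mean(axis=0), np.quantile(post, [0.025, 0.975], axis=0)

# Toy data with skewed (non-normal) errors
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
y = 1.0 + 2.0 * X[:, 1] + rng.exponential(1.0, 300)
print(bayes_quantreg(y, X, tau=0.5))
```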

