Similar Articles
20 similar articles retrieved.
1.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs which are designed to optimally sample the experimental space. Typically, regression models are constructed from the experimental data and interrogated in order to provide a point estimate of the independent variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied with uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher order, constrained or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been implemented widely, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement and is readily applicable to the kind of constrained and/or nonlinear problems that frequently appear in practice.
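The core idea, propagating posterior uncertainty in the regression coefficients through to the location of the optimum, can be sketched as follows. This is a minimal illustration for a one-factor quadratic model with a flat prior and a normal approximation to the posterior; the simulated data and all settings are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated one-factor experiment: true optimum at x = 1.5
x = np.linspace(-1, 4, 12)
y = -(x - 1.5) ** 2 + rng.normal(0, 0.3, x.size)

# Quadratic regression y = b0 + b1*x + b2*x^2
X = np.column_stack([np.ones_like(x), x, x ** 2])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (x.size - 3)
cov = s2 * np.linalg.inv(X.T @ X)

# Draw from the (approximate, flat-prior) posterior of beta and propagate
# each draw to the stationary point x* = -b1 / (2*b2)
draws = rng.multivariate_normal(beta_hat, cov, size=5000)
optima = -draws[:, 1] / (2 * draws[:, 2])
lo, hi = np.percentile(optima, [2.5, 97.5])
print(f"95% credible interval for the optimum: ({lo:.2f}, {hi:.2f})")
```

For constrained or higher-order models the same percentile step applies; only the per-draw optimization changes (e.g. a constrained solver replaces the closed-form stationary point).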

2.
This paper derives a procedure for efficiently allocating the number of units in multi‐level designs given prespecified power levels. The derivation of the procedure is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. The procedure makes use of variance component estimates to optimize designs during the budget formulating stages. The method provides more general closed form solutions than other currently available formulae. As such, the proposed procedure allows for the determination of the optimal numbers of units for studies that involve more complex designs. A method is also described for optimizing designs when variance component estimates are not available. Case studies are provided to demonstrate the method.
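A standard two-level special case of this kind of budget-constrained allocation has a closed form: minimizing Var(mean) = (sigma2_between + sigma2_within/n)/J subject to budget B = J*(c_cluster + c_unit*n) gives n* = sqrt((c_cluster*sigma2_within)/(c_unit*sigma2_between)). The sketch below implements that textbook special case, not the paper's more general procedure; all cost and variance numbers are illustrative:

```python
import math

def optimal_cluster_size(sigma2_within, sigma2_between, cost_cluster, cost_unit):
    """Optimal units per cluster: n* = sqrt((c_cluster*s2_w) / (c_unit*s2_b))."""
    return math.sqrt((cost_cluster * sigma2_within) / (cost_unit * sigma2_between))

def allocate(budget, sigma2_within, sigma2_between, cost_cluster, cost_unit):
    """Optimal (clusters J, units-per-cluster n) under a total budget."""
    n = optimal_cluster_size(sigma2_within, sigma2_between, cost_cluster, cost_unit)
    J = budget / (cost_cluster + cost_unit * n)  # clusters the budget can buy
    return J, n

# Hypothetical inputs: variance components 0.8 / 0.2, cluster cost 50, unit cost 10
J, n = allocate(10_000, 0.8, 0.2, 50, 10)
print(f"J = {J:.1f} clusters of n = {n:.1f} units each")
```

In practice n would be rounded and the variance components replaced by pilot-study estimates, as the abstract notes.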

3.
There exist primarily three different types of algorithms for computing nonparametric maximum likelihood estimates (NPMLEs) of mixing distributions in the literature, which are the EM-type algorithms, the vertex direction algorithms such as VDM and VEM, and the algorithms based on general constrained optimization techniques such as the projected gradient method. It is known that the projected gradient algorithm may run into stagnation during iterations. When a stagnation occurs, VDM steps need to be added. We argue that the abrupt switch to VDM steps can significantly reduce the efficiency of the projected gradient algorithm, and is usually unnecessary. In this paper, we define a group of partially projected directions, which can be regarded as hybrids of ordinary projected gradient directions and VDM directions. Based on these directions, four new algorithms are proposed for computing NPMLEs of mixing distributions. The properties of the algorithms are discussed and their convergence is proved. Extensive numerical simulations show that the new algorithms outperform the existing methods, especially when an NPMLE has a large number of support points or when high accuracy is required.
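To make the problem concrete, here is the simplest of the three approaches the abstract mentions, an EM-type weight update for the NPMLE over a fixed candidate support grid (not the paper's partially projected method). The mixture kernel, grid, and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
# Data from a two-component normal location mixture (unit sd)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 50)])

# Candidate support grid with uniform initial weights
grid = np.linspace(-2, 6, 81)
w = np.full(grid.size, 1.0 / grid.size)

# Normal kernel density of each observation at each support point
dens = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)

for _ in range(500):
    post = dens * w                        # responsibilities, n x m
    post /= post.sum(axis=1, keepdims=True)
    w = post.mean(axis=0)                  # EM update of the mixing weights

loglik = np.log(dens @ w).sum()
```

This update is monotone in the likelihood but converges slowly when many grid points compete, which is exactly the regime (many support points, high accuracy) where the paper reports its hybrid directions paying off.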

4.
In this paper, multisample analyses of exact and stochastic constraints with identified structural equation models are investigated using a Bayesian approach. Asymptotic properties of the estimates are developed and a multiplier method is employed to obtain the solution. A numerical example is also included as an illustration.

5.
The balanced half-sample, jackknife and linearization methods are used to estimate the variance of the slope of a linear regression under a variety of computer-generated situations. The basic sampling design is one in which two PSUs are selected from each of a number of strata. The variance estimation techniques are compared in a Monte Carlo experiment. Results show that variance estimates may be highly biased and variable unless sizeable numbers of observations are available from each stratum. The jackknife and linearization estimates appear superior to the balanced half-sample method, particularly when the number of strata or the number of available observations from each stratum is small.
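As a point of reference for the jackknife estimator discussed above, here is a delete-one jackknife variance for a regression slope on a simple random sample. Note this ignores the stratified two-PSU design of the study, where a delete-a-group jackknife within strata would be used instead; data and settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = rng.uniform(0, 10, n)
y = 2.0 * x + rng.normal(0, 1, n)   # true slope = 2

def slope(x, y):
    """OLS slope S_xy / S_xx."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

b = slope(x, y)

# Delete-one jackknife: recompute the slope leaving each point out in turn
leave_one_out = np.array([slope(np.delete(x, i), np.delete(y, i)) for i in range(n)])
var_jack = (n - 1) / n * np.sum((leave_one_out - leave_one_out.mean()) ** 2)
se_jack = np.sqrt(var_jack)
```

With only two PSUs per stratum, the "samples" entering such replicate methods are very small, which is the instability the abstract's simulations document.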

6.
In this paper, a small-sample asymptotic method is proposed for higher order inference in the stress–strength reliability model, R=P(Y<X), where X and Y are distributed independently as Burr-type X distributions. In a departure from the current literature, we allow the scale parameters of the two distributions to differ, and the likelihood-based third-order inference procedure is applied to obtain inference for R. The difficulty of the implementation of the method is in obtaining the constrained maximum likelihood estimates (MLE). A penalized likelihood method is proposed to handle the numerical complications of maximizing the constrained likelihood. The proposed procedures are illustrated using a sample of carbon fibre strength data. Our results from simulation studies comparing the coverage probabilities of the proposed small-sample asymptotic method with some existing large-sample asymptotic methods show that the proposed method is very accurate even when the sample sizes are small.
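For orientation, R = P(Y < X) = E_X[F_Y(X)] can be evaluated by Monte Carlo using the Burr-type X cdf F(x; a, l) = (1 - exp(-(l*x)^2))^a and inverse-cdf sampling. This sketch only computes R for known parameters (it is not the paper's third-order inference procedure); the parameter values are illustrative:

```python
import math
import random

def burrx_cdf(x, alpha, lam):
    """Burr-type X cdf: (1 - exp(-(lam*x)^2))**alpha for x > 0."""
    return (1 - math.exp(-((lam * x) ** 2))) ** alpha

def burrx_sample(alpha, lam, rng):
    """Inverse-cdf draw: solve (1 - exp(-(lam*x)^2))**alpha = u for x."""
    u = rng.random()
    return math.sqrt(-math.log(1 - u ** (1 / alpha))) / lam

rng = random.Random(3)
alpha_x, lam_x = 2.0, 1.0
alpha_y, lam_y = 1.0, 1.0   # equal scales: R = alpha_x / (alpha_x + alpha_y) = 2/3

draws = [burrx_cdf(burrx_sample(alpha_x, lam_x, rng), alpha_y, lam_y)
         for _ in range(200_000)]
R_hat = sum(draws) / len(draws)
```

With unequal scale parameters, the closed form 2/3 no longer holds and R must be computed numerically as above, which is the case the paper's inference targets.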

7.
This paper is concerned with methods of reducing variability and computer time in a simulation study. The Monte Carlo swindle, through mathematical manipulations, has been shown to yield more precise estimates than the “naive” approach. In this study computer time is considered in conjunction with the variance estimates. It is shown that by this measure the naive method is often a viable alternative to the swindle. This study concentrates on the problem of estimating the variance of an estimator of location. The advantage of one technique over another depends upon the location estimator, the sample size, and the underlying distribution. For a fixed number of samples, while the naive method gives a less precise estimate than the swindle, it requires fewer computations. In addition, for certain location estimators and distributions, the naive method is able to take advantage of certain shortcuts in the generation of each sample. The small amount of time required by this “enlightened” naive method often more than compensates for its relative lack of precision.  
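The naive method and one classical location swindle can be contrasted in a few lines. The sketch below estimates Var(median) for normal samples; the swindle decomposes median = xbar + (median - xbar), where Var(xbar) = 1/n is known exactly and (by Basu's theorem, for normal data) the two terms are independent, so only the small deviation term is simulated. This is a textbook variant, not necessarily the specific swindle studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 20, 5000

samples = rng.normal(0.0, 1.0, (reps, n))
med = np.median(samples, axis=1)
xbar = samples.mean(axis=1)

# Naive: sample variance of the simulated medians
var_naive = med.var(ddof=1)

# Swindle: Var(median) = Var(xbar) + Var(median - xbar), with Var(xbar) = 1/n exact
var_swindle = 1.0 / n + (med - xbar).var(ddof=1)
```

The swindle's simulated component has much smaller variance than the median itself, which is where its precision gain comes from; the naive loop, however, skips the extra bookkeeping per sample, which is the time trade-off the abstract weighs.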

8.
This paper studies estimation and testing for semiparametric varying-coefficient errors-in-variables (EV) models under linear constraints. When the response variable is missing and the covariates in the nonparametric component are subject to measurement error, two classes of bias-corrected constrained estimators of the parametric component are constructed using locally bias-corrected profile least squares, the Lagrange multiplier method, and imputation. In addition, to test the linear constraints, an imputed profile Lagrange multiplier test statistic is constructed, and Monte Carlo simulations verify the effectiveness of the estimators and the test statistic.

9.
The power function distribution is often used to study the electrical component reliability. In this paper, we model a heterogeneous population using the two-component mixture of the power function distribution. A comprehensive simulation scheme including a large number of parameter points is followed to highlight the properties and behavior of the estimates in terms of sample size, censoring rate, parameter sizes and the proportion of the components of the mixture. The parameters of the power function mixture are estimated and compared using the Bayes estimates. Simulated mixture data with censored observations are generated by probabilistic mixing for the computational purposes. Elegant closed form expressions for the Bayes estimators and their variances are derived for the censored sample as well as for the complete sample. Some interesting comparisons and properties of the estimates are observed and presented. The system of three non-linear equations, required to be solved iteratively for the computation of maximum likelihood (ML) estimates, is derived. The complete sample expressions for the ML estimates and for their variances are also given. The components of the information matrix are constructed as well. Uninformative as well as informative priors are assumed for the derivation of the Bayes estimators. A real-life mixture data example is also discussed. The posterior predictive distribution with the informative Gamma prior is derived, and the equations required to find the lower and upper limits of the predictive intervals are constructed. The Bayes estimates are evaluated under the squared error loss function.
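The probabilistic mixing step mentioned above is simple to reproduce. This sketch draws from a two-component power function mixture on (0, 1), using the inverse cdf x = u^(1/alpha) for F(x) = x^alpha; it omits the censoring step, and all parameter values are illustrative:

```python
import random

rng = random.Random(5)

def power_sample(alpha, rng):
    """Inverse-cdf draw from the power function distribution F(x) = x**alpha on (0, 1)."""
    return rng.random() ** (1.0 / alpha)

def mixture_sample(p1, alpha1, alpha2, n, rng):
    """Probabilistic mixing: each draw comes from component 1 with probability p1."""
    return [power_sample(alpha1 if rng.random() < p1 else alpha2, rng)
            for _ in range(n)]

data = mixture_sample(0.6, 2.0, 5.0, 10_000, rng)
mean_hat = sum(data) / len(data)
# E[X] = alpha/(alpha+1) per component, so the mixture mean is
# 0.6 * (2/3) + 0.4 * (5/6) = 0.7333...
```

Type-II censoring would then be imposed by sorting the draws and treating observations beyond the r-th order statistic as censored.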

10.
Summary. On the basis of serological data from prevalence studies of rubella, mumps and hepatitis A, the paper describes a flexible local maximum likelihood method for the estimation of the rate at which susceptible individuals acquire infection at different ages. In contrast with parametric models that have been used before in the literature, the local polynomial likelihood method allows this age-dependent force of infection to be modelled without making any assumptions about the parametric structure. Moreover, this method allows for simultaneous nonparametric estimation of age-specific incidence and prevalence. Unconstrained models may lead to negative estimates for the force of infection at certain ages. To overcome this problem and to guarantee maximal flexibility, the local smoother can be constrained to be monotone. It turns out that different parametric and nonparametric estimates of the force of infection can exhibit considerably different qualitative features like location and the number of maxima, emphasizing the importance of a well-chosen flexible statistical model.

11.
The established general results on convergence properties of the EM algorithm require the sequence of EM parameter estimates to fall in the interior of the parameter space over which the likelihood is being maximized. This paper presents convergence properties of the EM sequence of likelihood values and parameter estimates in constrained parameter spaces for which the sequence of EM parameter estimates may converge to the boundary of the constrained parameter space contained in the interior of the unconstrained parameter space. Examples of the behavior of the EM algorithm applied to such parameter spaces are presented.

12.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model this corresponds to estimation of a function of the model parameters. This paper is concerned with the case that the computer model implements multiple, individually-estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.
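The quantity being approximated can be illustrated by brute-force Monte Carlo: pair independent posterior draws from each sub-model and average the combined function. This sketch is not the paper's asymptotic formula (which avoids simulation), and the sub-models and risk function are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Posterior draws from two independently fitted sub-models (hypothetical):
# theta1 a hazard ratio, theta2 a baseline 10-year risk
theta1 = rng.normal(1.4, 0.1, 20_000)
theta2 = rng.beta(20, 180, 20_000)        # posterior mean 0.1

# Posterior mean of the combined prediction f(theta1, theta2) = theta1 * theta2,
# estimated by pairing independent draws across sub-models
risk_draws = theta1 * theta2
posterior_mean = risk_draws.mean()

# With independent sub-model posteriors this agrees, up to Monte Carlo error,
# with the product of the individual posterior means
plug_in = theta1.mean() * theta2.mean()
```

For nonlinear f the plug-in of posterior means is generally biased, which is why a dedicated posterior-mean approximation across sub-models is needed at all.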

13.
A simple least squares method for estimating a change in mean of a sequence of independent random variables is studied. The method first tests for a change in mean based on the regression principle of constrained and unconstrained sums of squares. Conditionally on a decision by this test that a change has occurred, least squares estimates are used to estimate the change point, the initial mean level (prior to the change point) and the change itself. The estimates of the initial level and change are functions of the change point estimate. All estimates are shown to be consistent, and those for the initial level and change are shown to be asymptotically jointly normal. The method performs well for moderately large shifts (one standard deviation or more), but the estimates of the initial level and change are biased in a predictable way for small shifts. The large sample theory is helpful in understanding this problem. The asymptotic distribution of the change point estimator is obtained for local shifts in mean, but the case of non-local shifts appears analytically intractable.
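The least squares estimation step can be sketched directly: scan candidate split points, fit a separate mean to each segment, and keep the split minimizing the pooled residual sum of squares. The preliminary test is omitted here, and the simulated sequence is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, tau, shift = 200, 120, 1.5
y = rng.normal(0.0, 1.0, n)
y[tau:] += shift                    # mean shifts by 1.5 sd at index 120

# Two-mean model: choose the split k minimising the pooled RSS
best_k, best_rss = None, np.inf
for k in range(5, n - 5):
    rss = k * y[:k].var() + (n - k) * y[k:].var()   # ddof=0 vars give segment RSS
    if rss < best_rss:
        best_k, best_rss = k, rss

mu0_hat = y[:best_k].mean()                 # initial level, prior to the change
delta_hat = y[best_k:].mean() - mu0_hat     # estimated change in mean
```

Consistent with the abstract, a shift of 1.5 standard deviations is large enough for the change point estimate to land near the true index; for much smaller shifts the split estimate becomes diffuse and the level/change estimates inherit its bias.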

14.
In 2008, Marsan and Lengliné presented a nonparametric way to estimate the triggering function of a Hawkes process. Their method requires an iterative and computationally intensive procedure which ultimately produces only approximate maximum likelihood estimates (MLEs) whose asymptotic properties are poorly understood. Here, we note a mathematical curiosity that allows one to compute, directly and extremely rapidly, exact MLEs of the nonparametric triggering function. The method here requires that the number q of intervals on which the nonparametric estimate is sought equals the number n of observed points. The resulting estimates have very high variance but may be smoothed to form more stable estimates. The performance and computational efficiency of the proposed method are verified in two disparate, highly challenging simulation scenarios: first to estimate the triggering functions, with simulation-based 95% confidence bands, for earthquakes and their aftershocks in Loma Prieta, California, and second, to characterise triggering in confirmed cases of plague in the United States over the last century. In both cases, the proposed estimator can be used to describe the rate of contagion of the processes in detail, and the computational efficiency of the estimator facilitates the construction of simulation-based confidence intervals.

15.
This paper investigates the block total response method proposed by Raghavarao and Federer for providing accurate estimates of the base rates of sensitive characteristics during surveys. It determines the best balanced incomplete block design to use to estimate the base rates for three, four, five and six sensitive attributes respectively, given a maximum total number of 13 questions. The estimates obtained from this method have smaller variance than estimates obtained using the similar, but more popular, unmatched count technique.

16.
Universal generators for absolutely continuous and integer-valued random variables are introduced. The proposal is based on a generalization of the rejection technique proposed by Devroye [The computer generation of random variables with a given characteristic function. Computers and Mathematics with Applications. 1981;7:547–552]. The method involves a dominating function solely requiring the evaluation of integrals which depend on the characteristic function of the underlying random variable. The proposal gives rise to simple algorithms which may be implemented in a few lines of code and which may show noticeable performance even when some classical families of distributions are considered.
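The underlying rejection technique is easy to demonstrate in its classical form. This sketch samples a standard normal by rejection from a Cauchy dominating density; the paper's generators instead build the dominating function from the characteristic function, which is not reproduced here:

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, rng):
    """Generic rejection sampler: accept y ~ g with probability f(y) / (M*g(y))."""
    while True:
        y = proposal_sample(rng)
        if rng.random() * M * proposal_pdf(y) <= target_pdf(y):
            return y

norm_pdf = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
cauchy_pdf = lambda x: 1.0 / (math.pi * (1 + x * x))
cauchy_sample = lambda rng: math.tan(math.pi * (rng.random() - 0.5))  # inverse cdf

# Dominating constant: sup_x norm_pdf(x)/cauchy_pdf(x) = sqrt(2*pi/e) ~ 1.52
M = math.sqrt(2 * math.pi / math.e)

rng = random.Random(8)
draws = [rejection_sample(norm_pdf, cauchy_sample, cauchy_pdf, M, rng)
         for _ in range(20_000)]
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws) - mean ** 2
```

The acceptance rate is 1/M, so a tight dominating function is the whole game; the paper's contribution is constructing one automatically from the characteristic function.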

17.
We propose a sequential method to estimate monotone convex functions that consists of: (i) monotone regression via solving a constrained least square (LS) problem and (ii) convexification of the monotone regression estimate via solving a uniform approximation problem with associated constraints. We show that this method is faster than the constrained LS method. The ratio of computation time increases as data size increases. Moreover, we show that, under an appropriate smoothness condition, the uniform convergence rate achieved by the proposed method is nearly comparable to the best achievable rate for a non-parametric estimate which ignores the shape constraint. Simulation studies show that our method is comparable to the constrained LS method in estimation error. We illustrate our method by analysing ground water level data of wells in Korea.
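Step (i), monotone least squares regression, has a fast exact solution via the pool-adjacent-violators algorithm (PAVA); the convexification step (ii) is not sketched here. A minimal PAVA for a non-decreasing fit:

```python
def pava(y):
    """Pool-adjacent-violators: exact non-decreasing least-squares fit to y."""
    blocks = []                     # each block stores [sum, count]
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while the last block's mean violates monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)  # each block's mean repeated over its run
    return fitted

print(pava([1, 3, 2, 4]))   # violating pair (3, 2) is pooled to 2.5
```

PAVA runs in linear time (amortized), which is consistent with the paper's observation that the sequential approach beats a general constrained LS solver increasingly as the data size grows.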

18.
The exponential distribution has extensive application in reliability. Introducing a shape parameter to this distribution has produced various distribution functions. In their 2009 study, Gupta and Kundu derived another distribution function using Azzalini's method, the weighted exponential (WE) distribution, which is applicable in reliability. The parameters of this distribution have recently been estimated by the same two authors in the classical framework. In this paper, Bayesian estimates of the parameters are derived. To achieve this purpose we use Lindley's approximation method for the integrals that cannot be solved in closed form. Furthermore, a Gibbs sampling procedure is used to draw Markov chain Monte Carlo samples from the posterior distribution indirectly, and the Bayes estimates of the parameters are then derived. The estimation of the reliability and hazard functions is also discussed. At the end of the paper, some comparisons between the classical and Bayesian estimation methods are made using a Monte Carlo simulation study. The simulation study incorporates complete and Type-II censored samples.

19.
We propose a method that uses a sequential design instead of a space filling design for estimating tuning parameters of a complex computer model. The goal is to bring the computer model output closer to the real system output. The method fits separate Gaussian process (GP) models to the available data from the physical experiment and the computer experiment and minimizes the discrepancy between the predictions from the GP models to obtain estimates of the tuning parameters. A criterion based on the discrepancy between the predictions from the two GP models and the standard error of prediction for the computer experiment output is then used to obtain a design point for the next run of the computer experiment. The tuning parameters are re-estimated using the augmented data set. The steps are repeated until the budget for the computer experiment data is exhausted. Simulation studies show that the proposed method performs better in bringing a computer model closer to the real system than methods that use a space filling design.

20.
Maximum likelihood estimation of a mean and a covariance matrix whose structure is constrained only to general positive semi-definiteness is treated in this paper. Necessary and sufficient conditions for the local optimality of mean and covariance matrix estimates are given. Observations are assumed to be independent. When the observations are also assumed to be identically distributed, the optimality conditions are used to obtain the mean and covariance matrix solutions in closed form. For the nonidentically distributed observation case, a general numerical technique which integrates scoring and Newton's iterations to solve the optimality condition equations is presented, and convergence performance is examined.
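In the i.i.d. case with a Gaussian likelihood, the closed-form solutions are the sample mean and the 1/n sample covariance, which is automatically positive semi-definite, so the PSD constraint is inactive at the optimum. A minimal check (the Gaussian assumption and the simulated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
true_mu = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
X = rng.multivariate_normal(true_mu, true_cov, size=2000)

# Closed-form MLEs in the i.i.d. Gaussian case (note 1/n, not 1/(n-1))
mu_hat = X.mean(axis=0)
centered = X - mu_hat
cov_hat = centered.T @ centered / X.shape[0]

# cov_hat = (1/n) * C'C is positive semi-definite by construction
eigs = np.linalg.eigvalsh(cov_hat)
```

The scoring/Newton machinery in the paper is needed precisely when the observations are not identically distributed and this closed form no longer applies.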

