Similar Literature
20 similar documents found (search time: 31 ms)
1.
The method of steepest ascent has been widely adopted for process optimization in applications of response surface methodology (RSM). The procedure runs experiments along the gradient of a fitted linear model, so the RSM practitioner needs a suitable stopping rule to determine the optimum point estimate in the search direction. However, the details of how to deflect and then halt a search in the steepest ascent direction are not thoroughly described in the literature. In common practice, it is convenient to apply a simple stopping rule after one to three consecutive response deteriorations, following a series of fitted linear models used for exploration. Two formal stopping rules have been proposed in the literature: Myers and Khuri's [A new procedure for steepest ascent, Comm. Statist. Theory Methods A 8(14) (1979), pp. 1359–1376] and del Castillo's [Stopping rules for steepest ascent in experimental optimization, Comm. Statist. Simul. Comput. 26(4) (1997), pp. 1599–1615]. This paper develops a new procedure for determining how to adjust, and then when to stop, a steepest ascent search in response surface exploration. The proposal aims to give the RSM practitioner a clear-cut, easy-to-implement procedure that attains the optimum mean response more accurately than the existing procedures. A simulation study shows that the average optimum point and response returned by the new search procedure are considerably improved compared with the two existing stopping rules, and the number of experimental trials required for convergence is greatly reduced as well.
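The simple stopping rule mentioned above (halt after a fixed number of consecutive response deteriorations) can be sketched as follows. This is a minimal illustration, not the paper's new procedure; the response function, gradient vector, and step size are all hypothetical.

```python
import numpy as np

def steepest_ascent_search(f, beta, x0, step=0.25, k_stop=3, max_steps=50):
    """Search along the gradient of a fitted linear model f(x) ~ beta'x,
    stopping after k_stop consecutive response deteriorations."""
    direction = beta / np.linalg.norm(beta)   # steepest ascent direction
    x = np.array(x0, float)
    best_x, best_y, drops = x.copy(), f(x), 0
    for _ in range(max_steps):
        x = x + step * direction              # move one step up the gradient
        y = f(x)
        if y > best_y:
            best_x, best_y, drops = x.copy(), y, 0
        else:
            drops += 1                        # one more deterioration in a row
            if drops >= k_stop:
                break                         # simple stopping rule fires
    return best_x, best_y

# toy quadratic response surface with optimum mean response at (2, 3)
f = lambda x: -(x[0] - 2)**2 - (x[1] - 3)**2
x_opt, y_opt = steepest_ascent_search(f, beta=np.array([1.0, 1.5]), x0=[0.0, 0.0])
```

Because the search direction here points almost straight at the optimum, the rule stops within a few steps of the best point on the path.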

2.
In several stochastic programming models and statistical problems, the computation of probabilities of n-dimensional rectangles under an n-dimensional normal distribution is required. A simulation technique recently presented by the author for computing values of the distribution function can be modified to yield an appropriate procedure for computing probabilities of rectangles. Some numerical work illustrates the use of the new algorithm.
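To make the quantity concrete, here is a crude Monte Carlo estimate of a rectangle probability under a multivariate normal distribution. This is only a baseline sketch of what is being computed; the paper's simulation technique is a more refined procedure.

```python
import numpy as np

def mvn_rectangle_prob(mean, cov, lower, upper, n_sim=200_000, seed=0):
    """Crude Monte Carlo estimate of P(lower <= X <= upper), componentwise,
    for X ~ N(mean, cov)."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mean, cov, size=n_sim)
    inside = np.all((x >= lower) & (x <= upper), axis=1)  # point in rectangle?
    return inside.mean()

# 2-d standard normal, rectangle [-1, 1] x [-1, 1];
# the exact answer is (Phi(1) - Phi(-1))^2 ~ 0.4661
p = mvn_rectangle_prob([0.0, 0.0], np.eye(2), [-1.0, -1.0], [1.0, 1.0])
```

With independent coordinates the estimate can be checked against the product of one-dimensional normal probabilities.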

3.
The signal issued by a control chart prompts process professionals to investigate the special cause, and change point methods simplify the effort to search for and identify that cause. In this study, a multivariate joint change point estimation procedure based on maximum likelihood is proposed for monitoring both location and dispersion simultaneously. After a signal is generated by the simultaneously applied Hotelling's T² and/or generalized variance control charts, the procedure estimates the time of the change. The performance of the proposed method under several structural changes in the mean vector and covariance matrix is discussed.
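The flavour of maximum likelihood change point estimation can be shown in a univariate, mean-only analogue (the paper's procedure is multivariate and joint in mean and dispersion; this sketch is only the simplest special case). With normal data and constant variance, the MLE of the change time maximizes the between-segment sum of squares.

```python
import numpy as np

def mle_change_point(x):
    """MLE of a single change point in the mean of a normal sequence with
    constant variance: pick tau maximizing tau*m1^2 + (n-tau)*m2^2,
    which is equivalent to minimizing the within-segment sum of squares."""
    x = np.asarray(x, float)
    n = len(x)
    best_tau, best_stat = 1, -np.inf
    for tau in range(1, n):                    # split into x[:tau] and x[tau:]
        m1, m2 = x[:tau].mean(), x[tau:].mean()
        stat = tau * m1**2 + (n - tau) * m2**2  # profile log-likelihood term
        if stat > best_stat:
            best_tau, best_stat = tau, stat
    return best_tau

# mean shifts from 0 to 2 after observation 50
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
tau_hat = mle_change_point(x)
```

In practice this search would be run only over the observations preceding the control chart signal.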

4.
Two strategies are presented for specifying additional data to be included with the data of a non-orthogonal design. The additional data increase the magnitude of the information matrix X′X and the orthogonality of the design matrix. The new points are augmented to the original design sequentially, such that each new point optimally increases the smallest eigenvalue of X′X. The new runs are created in a predefined spherical region and a rectangular region. The optimum number of additional observations is presented in order to orthogonalize the design matrix X and optimize some functions of the information matrix X′X. The results of the proposed methods are compared with the most commonly used procedures for data augmentation, and the advantages of our techniques over the studied methods for solving the data augmentation problem are discussed.
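The sequential step described above can be sketched as a greedy search: at each stage, add the candidate run that maximizes the smallest eigenvalue of X′X. The candidate pool (corners of a rectangular region) and starting design below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def augment_design(X, candidates, n_add):
    """Greedily append n_add rows, each chosen to maximize the smallest
    eigenvalue of the information matrix X'X of the augmented design."""
    X = np.asarray(X, float)
    for _ in range(n_add):
        def lam_min(row):
            Xa = np.vstack([X, row])
            return np.linalg.eigvalsh(Xa.T @ Xa)[0]   # smallest eigenvalue
        best = max(candidates, key=lam_min)           # best candidate run
        X = np.vstack([X, best])
    return X

# nearly collinear 2-column design: X'X is badly conditioned
X0 = np.array([[1.0, 1.0], [1.0, 1.1], [1.0, 0.9]])
# candidate runs at the corners of a rectangular region
cands = [np.array(c, float) for c in [(1, -1), (-1, 1), (-1, -1), (1, 1)]]
X_aug = augment_design(X0, cands, n_add=2)
```

The greedy choice picks the corners most nearly orthogonal to the existing column space, which drives the smallest eigenvalue up sharply.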

5.
In this article we consider a test procedure useful in situations where the data consist of n independent blocks and the experimental conditions differ between blocks. The basic idea is very simple: the significance of the sample for each block is calculated and then standardized by its null mean and variance, and the sum of the standardized significances is proposed as a test statistic. The normal approximation for large n and the exact method for small n are applied in the continuous case; for the discrete case, some devices are also proposed. Several examples illustrate how to apply the procedure.
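One natural reading of the continuous case is that each block's significance is a p-value, which under the null is Uniform(0, 1) with mean 1/2 and variance 1/12, so the standardized sum is approximately standard normal for large n. The sketch below makes that assumption explicit; it is an illustration, not the paper's exact formulation.

```python
import math

def combined_significance(p_values):
    """Sum of standardized significances with a normal approximation:
    under H0 each p-value is Uniform(0,1), so
    z = sum(p_i - 1/2) / sqrt(n/12) is approximately N(0,1)."""
    n = len(p_values)
    z = sum(p - 0.5 for p in p_values) / math.sqrt(n / 12.0)
    # p-values piled near 0 give a large negative z: evidence against H0
    one_sided_p = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, one_sided_p

# five blocks, each contributing one significance value
z, p = combined_significance([0.01, 0.03, 0.20, 0.04, 0.08])
```

For small n the abstract's exact method would replace the normal approximation used here.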

6.
In this paper, we are interested in the estimation of the reliability parameter R = P(X > Y), where X, a component strength, and Y, a component stress, are independent power Lindley random variables. Point and interval estimation of R, based on maximum likelihood, nonparametric and parametric bootstrap methods, is developed. The performance of the point estimates and confidence intervals of R under the considered estimation methods is studied through extensive simulation. A numerical example, based on real data, is presented to illustrate the proposed procedure.
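The nonparametric estimate of R = P(X > Y) mentioned in the abstract can be sketched as the fraction of all (strength, stress) pairs with X > Y, i.e. the Mann–Whitney form. The exponential samples below are purely illustrative stand-ins; the paper's parametric power Lindley MLE is not reproduced here.

```python
import numpy as np

def reliability_np(x, y):
    """Nonparametric (Mann-Whitney) estimate of R = P(X > Y):
    the proportion of all cross pairs with strength exceeding stress."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.mean(x[:, None] > y[None, :])   # average over all n_x * n_y pairs

rng = np.random.default_rng(2)
x = rng.exponential(2.0, 500)   # illustrative strength sample (mean 2)
y = rng.exponential(1.0, 500)   # illustrative stress sample (mean 1)
R_hat = reliability_np(x, y)
```

For independent exponentials with means 2 and 1 the true value is R = 2/3, so the estimate should land close to that.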

7.
Dabuxilatu Wang, Statistics (2013), 47(2), pp. 167–181
Some asymptotic properties of point estimation with n-dimensional fuzzy data with respect to a special L2-metric ρ are investigated in this article. It is shown that the collection of all n-dimensional fuzzy data endowed with the ρ-metric is a complete and separable space. Some criteria for point estimation in such fuzzy environments are proposed, and the sample mean, variance and covariance with n-dimensional fuzzy data under these criteria are further studied.

8.
Maximum-likelihood estimation is interpreted as a procedure for generating approximate pivotal quantities, that is, functions u(X;θ) of the data X and parameter θ that have distributions not involving θ. Further, these pivotals should be efficient in the sense of reproducing approximately the likelihood function of θ based on X, and they should be approximately linear in θ. To this end the effect of replacing θ by a parameter ϕ = ϕ(θ) is examined. The relationship of maximum-likelihood estimation interpreted in this way to conditional inference is discussed. Examples illustrating this use of maximum-likelihood estimation on small samples are given.

9.
Bryant, Hartley & Jessen (1960) presented a two-way stratification sampling design for when the sample size n is less than the number of strata. Their design was extended to the three-way stratification case by Chaudhary & Kumar (1988), but that design does not take into account serial correlation, which might be present as a result of a time variable. In this paper, a new sampling procedure is presented for three-way stratification when one of the stratifying variables is time; the purpose of the design is to take serial correlation into account. The variance of the unweighted estimator of the population mean with respect to a superpopulation model is used as the basis for comparison. Simulation results show that the suggested design is more efficient than the Chaudhary & Kumar (1988) design.

10.
Approximate confidence intervals are given for the lognormal regression problem. The error in the nominal level can be reduced to O(n^-2), where n is the sample size. An alternative procedure is given which avoids the non-robust assumption of lognormality. This amounts to finding a confidence interval, based on M-estimates, for a general smooth function of both β and F, where β are the parameters of the general (possibly nonlinear) regression problem and F is the unknown distribution function of the residuals. The derived intervals are compared using theory, simulation and real data sets.

11.
The purpose of this article is to present optimal designs based on the D-, G-, A-, I-, and Dβ-optimality criteria for random coefficient regression (RCR) models with heteroscedastic errors. A sufficient condition on the heteroscedastic structure is given which ensures that the search for optimal designs can be confined to extreme settings of the design region when the criteria satisfy the assumption of real-valued monotone design criteria. Analytical solutions of the D-, G-, A-, I-, and Dβ-optimal designs for the RCR models are derived. Two examples are presented for random slope models with specific heteroscedastic errors.

12.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, these may be fitted easily as a weighted Poisson regression. Point processes lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are a function of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models, and when fitted via penalised likelihood, conditions for the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss challenges extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
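The resulting criterion has the usual BIC shape with the observed number of events m in place of the sample size n. The sketch below uses toy log-likelihood values (hypothetical numbers, not from the paper) just to show the comparison.

```python
import math

def point_process_bic(log_lik, n_params, n_events):
    """BIC for a Poisson point process model: the observed number of
    events m plays the role of sample size in the log-penalty."""
    return -2.0 * log_lik + n_params * math.log(n_events)

# compare a 2-parameter and a 5-parameter intensity model on m = 150 events
# (log-likelihoods here are illustrative placeholders)
bic_small = point_process_bic(log_lik=-120.3, n_params=2, n_events=150)
bic_large = point_process_bic(log_lik=-118.9, n_params=5, n_events=150)
```

Here the small gain in fit from three extra parameters does not offset the log(m) penalty, so the smaller model is preferred.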

13.
Suppose the observations (ti, yi), i = 1, …, n, follow the additive model yi = g1(ti1) + … + gr(tir) + εi, where the gj are unknown functions. The additive components can be estimated by approximating each gj with the sum of a linear fit and a truncated Fourier series of cosines and minimizing a penalized least-squares loss function over the coefficients. This finite-dimensional basis approximation, when fitting an additive model with r predictors, has the advantage of reducing the computations drastically, since it does not require the backfitting algorithm. The cross-validation (CV) [or generalized cross-validation (GCV)] score for the additive fit is calculated in a further O(n) operations. A search path in the r-dimensional space of degrees of freedom is proposed along which the CV (GCV) score continuously decreases. The path ends when an increase in the degrees of freedom of any of the predictors yields an increase in CV (GCV). This procedure is illustrated on a meteorological data set.
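For a single component, the basis approximation above can be sketched as penalized least squares on a design built from a linear term plus truncated cosines. The ridge-type penalty that grows with frequency, the number of terms, and the penalty weight are all illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def cosine_basis_fit(t, y, n_terms=6, penalty=1e-3):
    """Fit one additive component as linear term + truncated cosine series
    by penalized least squares; t is assumed rescaled to [0, 1]."""
    cols = [np.ones_like(t), t]                          # intercept + linear part
    cols += [np.cos(np.pi * k * t) for k in range(1, n_terms + 1)]
    B = np.column_stack(cols)
    # penalize only the cosine coefficients, more heavily at high frequency
    P = np.diag([0.0, 0.0] + [penalty * k**2 for k in range(1, n_terms + 1)])
    coef = np.linalg.solve(B.T @ B + P, B.T @ y)         # penalized normal equations
    return B @ coef

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, 200)      # smooth signal + noise
fit = cosine_basis_fit(t, y)
```

Because the basis is finite-dimensional, the fit is a single linear solve, which is what removes the need for backfitting.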

14.
In the present article, we discuss the regression of a point on the surface of a unit sphere in d dimensions given a point on the surface of a unit sphere in p dimensions, where p may not equal d. Point projection is added to the rotation and linear transformation in the regression link function. The identifiability of the model is proved, and parameter estimation in this setup is discussed. Simulation studies and data analyses illustrate the model.

15.
This paper concerns designed experiments involving observations of orientations following the models of Prentice (1989) and Rivest & Chang (2006). The authors state minimal conditions on the designs for consistent least squares estimation of the matrix parameters in these models. The conditions are expressed in terms of the axes and rotation angles of the design orientations. The authors show that designs satisfying U1 + … + Un = 0 are optimal in the sense of minimizing the average angular distance of the estimation error. They give constructions of optimal n-point designs when n ≥ 4, and compare the performance of several designs through approximations and simulation.

16.
Empirical Likelihood-Based Kernel Density Estimation
This paper considers the estimation of a probability density function when extra distributional information is available (e.g. the mean of the distribution is known, or the variance is a known function of the mean). The standard kernel method cannot exploit such extra information systematically, as it uses an equal probability weight 1/n at each data point. The paper suggests using empirical likelihood to choose the probability weights under constraints formulated from the extra distributional information. An empirical likelihood-based kernel density estimator is obtained by replacing 1/n with the empirical likelihood weights. It has these advantages: it makes systematic use of the extra information, it is able to reflect the extra characteristics of the density function, and its variance is smaller than that of the standard kernel density estimator.
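For the known-mean case, the idea can be sketched in two steps: solve for the empirical likelihood weights under the mean constraint via the usual Lagrange multiplier equation, then plug the weights into a weighted kernel estimate in place of 1/n. The Gaussian kernel, fixed bandwidth, and simulated data below are illustrative assumptions.

```python
import numpy as np

def el_weights(x, mu0, tol=1e-10, max_iter=100):
    """Empirical-likelihood weights under the constraint sum w_i(x_i - mu0) = 0:
    w_i = 1 / (n (1 + lam (x_i - mu0))), with lam found by safeguarded Newton."""
    d = np.asarray(x, float) - mu0
    lam = 0.0
    for _ in range(max_iter):
        g = np.sum(d / (1 + lam * d))              # estimating equation in lam
        gp = -np.sum(d**2 / (1 + lam * d)**2)      # its derivative
        new = lam - g / gp
        while np.any(1 + new * d <= 0):            # keep all weights positive
            new = (lam + new) / 2
        if abs(new - lam) < tol:
            lam = new
            break
        lam = new
    return 1.0 / (len(d) * (1 + lam * d))

def el_kde(grid, data, weights, h):
    """Gaussian-kernel density estimate with EL weights replacing 1/n."""
    u = (grid[:, None] - data[None, :]) / h
    return (weights[None, :] * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)).sum(1) / h

rng = np.random.default_rng(4)
data = rng.normal(0.15, 1.0, 300)
w = el_weights(data, mu0=0.0)          # impose that the true mean is 0
grid = np.linspace(-3.0, 3.0, 61)
dens = el_kde(grid, data, w, h=0.4)
```

The weights tilt probability mass so the weighted sample mean matches the known mean exactly, and the resulting density estimate inherits that constraint.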

17.
In this paper, we investigate the problem of estimating a function g(p), where p is the probability of success in a sequential sample of independent identically distributed Bernoulli random variables. As the loss associated with estimation we introduce a generalized LINEX loss function. We construct a sequential procedure possessing some asymptotically optimal properties as p tends to zero. Conditions are given under which the stopping time is asymptotically efficient and normal and the corresponding sequential estimator is asymptotically normal. The procedure constructed guarantees that its sequential risk is asymptotically equal to a prescribed constant.

18.
It is well known that the search direction plays a central role in line search methods. In this paper, we propose a new search direction, combined with the Wolfe line search technique and a nonmonotone line search technique, for solving unconstrained optimization problems. The given methods possess the sufficient descent property without carrying out any line search rule, and convergence results are established under suitable conditions. Numerical analysis of a probability problem shows that the new methods are more effective, robust and stable than other similar methods, and numerical results on two statistical problems also show that the presented methods compare favourably with other standard methods.
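For reference, the Wolfe line search mentioned above can be sketched with a standard bisection scheme for the weak Wolfe conditions, applied here to plain steepest descent on a quadratic. This is a generic textbook sketch, not the paper's new search direction, and the test problem is illustrative.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=60):
    """Bisection search for a step length satisfying the weak Wolfe
    conditions along a descent direction d."""
    lo, hi, alpha = 0.0, np.inf, 1.0
    fx, slope = f(x), grad(x) @ d              # slope < 0 for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha                         # sufficient decrease fails: too long
        elif grad(x + alpha * d) @ d < c2 * slope:
            lo = alpha                         # curvature condition fails: too short
        else:
            return alpha
        alpha = (lo + hi) / 2 if np.isfinite(hi) else 2 * lo
    return alpha

# steepest-descent iterations on an ill-conditioned quadratic
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
for _ in range(100):
    d = -grad(x)                               # steepest descent direction
    x = x + wolfe_line_search(f, grad, x, d) * d
```

A better search direction, such as the one the paper proposes, would reach the same accuracy in far fewer line searches than plain steepest descent does here.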

19.
The paper introduces and discusses different estimation methods for multi-index models where the indices are parametric and the link function is nonparametric. We provide a new algorithm that extends the ideas of Hristache and colleagues by an additional penalization within the search space. We concentrate on an intuitive presentation of the procedure. We provide a comparative simulation study of the proposed algorithm, the original algorithm from Hristache et al. [M. Hristache, A. Juditski, and V. Spokoiny, Structure adaptive approach for dimension reduction, Ann. Stat. 29(6) (2001), pp. 1537–1566.] and a modification of this algorithm. Finally the procedure is illustrated by an analysis of the Boston housing data. All computations are performed using the effective dimension reduction (EDR) package that we make available within the R statistical system.

20.
By means of a search design one is able to search for and estimate a small set of non-zero elements from the set of higher order factorial interactions in addition to estimating the lower order factorial effects. One may be interested in estimating the general mean and main effects, in addition to searching for and estimating a non-negligible effect in the set of 2- and 3-factor interactions, assuming 4- and higher-order interactions are all zero. Such a search design is called a 'main effect plus one plan' and is denoted by MEP.1. Construction of such a plan, for 2^m factorial experiments, has been considered and developed by several authors and leads to MEP.1 plans for an odd number m of factors. These designs are generally determined by two arrays, one specifying a main effect plan and the other specifying a follow-up. In this paper we develop the construction of search designs for an even number of factors m, m ≠ 6. The new series of MEP.1 plans is a set of single array designs with a well structured form. Such a structure allows for flexibility in arriving at an appropriate design with optimum properties for search and estimation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)