Similar Articles
20 similar articles found (search time: 328 ms)
1.
Parameter design or robust parameter design (RPD) is an engineering methodology intended as a cost-effective approach for improving the quality of products and processes. The goal of parameter design is to choose the levels of the control variables that optimize a defined quality characteristic. An essential component of RPD involves the assumption of well estimated models for the process mean and variance. Traditionally, the modeling of the mean and variance has been done parametrically. It is often the case, particularly when modeling the variance, that nonparametric techniques are more appropriate due to the nature of the curvature in the underlying function. Most response surface experiments involve sparse data. In sparse data situations with unusual curvature in the underlying function, nonparametric techniques often result in estimates with problematic variation whereas their parametric counterparts may result in estimates with problematic bias. We propose the use of semi-parametric modeling within the robust design setting, combining parametric and nonparametric functions to improve the quality of both mean and variance model estimation. The proposed method will be illustrated with an example and simulations.
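The idea of combining a parametric fit with a nonparametric correction can be illustrated with a minimal sketch (assumptions mine: a quadratic parametric mean model plus a Nadaraya-Watson kernel smooth of its residuals; this illustrates the general semi-parametric idea, not the authors' exact estimator):

```python
import numpy as np

def semiparametric_fit(x, y, bandwidth=0.5):
    """Quadratic OLS fit plus a Nadaraya-Watson smooth of its residuals.

    The parametric part captures the gross trend; the kernel-smoothed
    residuals supply a nonparametric correction where the parametric
    form is biased.
    """
    # Parametric component: ordinary least squares on [1, x, x^2].
    X = np.column_stack([np.ones_like(x), x, x**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    def predict(x0):
        x0 = np.atleast_1d(np.asarray(x0, dtype=float))
        parametric = np.column_stack([np.ones_like(x0), x0, x0**2]) @ beta
        # Gaussian-kernel weights between prediction and design points.
        w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / bandwidth) ** 2)
        correction = (w * resid).sum(axis=1) / w.sum(axis=1)
        return parametric + correction

    return predict
```

The same construction could be applied twice in an RPD study: once to the sample means and once to the (log) sample variances across control settings.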

2.
Two common experimental designs used in robust parameter design (RPD) are crossed array and mixed resolution designs. However, a prohibitive number of runs, constraints in the design space or special model requirements render some of these designs inadequate. This paper presents the application of an evolutionary strategy to produce nearly optimal design matrices for RPD. The designs are derived by solving a nonlinear optimization problem involving both 𝒟- and 𝒢-efficiency simultaneously. The methodology presented allows the user to obtain new exact designs for a specific number of runs and a particular experimental region. The combination of 𝒟- and 𝒢-efficiency results in experimental designs that outperform the corresponding benchmarks.

3.
To address model-parameter uncertainty in static multi-response robust parameter design at the quality-design stage of complex products, most existing multi-response optimization methods model the sample mean and sample variance of each response separately via the dual response surface method, so as to account for both optimality and robustness. Building on this, the paper constructs the root-mean-square-error (RMSE) response as a new robustness measure, proposes a desirability function method that accounts for model-parameter uncertainty, uses confidence-interval ideas to analyze how model-parameter uncertainty affects the RMSE response, and validates the approach on a case study from the quality-design stage of a complex product, showing that it yields a more robust global optimum for multi-response systems under model-parameter uncertainty.

4.
Taguchi's robust design technique, also known as parameter design, focuses on making product and process designs insensitive (i.e., robust) to hard-to-control variations. In some applications, however, his approach of modeling expected loss and the resulting “product array” experimental format leads to unnecessarily expensive and less informative experiments. The response model approach to robust design proposed by Welch, Ku, Yang, and Sacks (1990), Box and Jones (1990), Lucas (1989), and Shoemaker, Tsui and Wu (1991) offers more flexibility and economy in experiment planning and more informative modeling. This paper develops a formal basis for the graphical data-analytic approach presented in Shoemaker et al. In particular, we decompose overall response variation into components representing the variability contributed by each noise factor, and show when this decomposition allows us to use individual control-by-noise interaction plots to minimize response variation. We then generalize the control-by-noise interaction plots to extend their usefulness, and develop a formal analysis strategy using these plots to minimize response variation.

5.

Ordinal data are often modeled using a continuous latent response distribution, which is partially observed through windows of adjacent intervals defined by cutpoints. In this paper we propose the beta distribution as a model for the latent response. The beta distribution has several advantages over the other common distributions used, e.g., normal and logistic. In particular, it enables separate modeling of location and dispersion effects, which is essential in the Taguchi method of robust design. First, we study the problem of estimating the location and dispersion parameters of a single beta distribution (representing a single treatment) from ordinal data assuming known equispaced cutpoints. Two methods of estimation are compared: the maximum likelihood method and the method of moments. Two methods of treating the data are considered: in raw discrete form and in smoothed continuousized form. A large-scale simulation study is carried out to compare the different methods. The mean square errors of the estimates are obtained under a variety of parameter configurations. Comparisons are made based on the ratios of the mean square errors (called the relative efficiencies). No method is universally the best, but the maximum likelihood method using continuousized data is found to perform generally well, especially for estimating the dispersion parameter. This method is also computationally much faster than the other methods and does not experience convergence difficulties in the case of sparse or empty cells. Next, the problem of estimating unknown cutpoints is addressed. Here the multiple treatments setup is considered since in an actual application, cutpoints are common to all treatments and must be estimated from all the data. A two-step iterative algorithm is proposed for estimating the location and dispersion parameters of the treatments, and the cutpoints.
The proposed beta model and McCullagh's (1980) proportional odds model are compared by fitting them to two real data sets.

6.
In many practical situations, extra information is readily available alongside the actual variable of interest. A sensible use of this additional source may help to improve the properties of statistical techniques. In this study, we focus on estimators for calibration and propose a setup in which we rely only on the first two moments instead of modeling the whole distributional shape. We propose an estimator for linear calibration problems and investigate it under normal and skewed environments. We partition its mean squared error into intrinsic and estimation components. We observe that the bias and mean squared error of the proposed estimator are functions of four dimensionless quantities. Note that both the classical and the inverse estimators become special cases of the proposed estimator. Moreover, the mean squared error of the proposed estimator and the exact mean squared error of the inverse estimator coincide. We also observe that the proposed estimator performs quite well for skewed errors. Real-data applications are included in the study for practical considerations.

7.
The use of robust measures helps to increase the precision of the estimators, especially when estimating extremely skewed distributions. In this article, a generalized ratio estimator is proposed by using some robust measures with a single auxiliary variable under the adaptive cluster sampling (ACS) design. We have incorporated the tri-mean (TM), mid-range (MR) and Hodges-Lehmann (HL) estimators of the auxiliary variable as robust measures together with some conventional measures. The expressions for the bias and mean square error (MSE) of the proposed generalized ratio estimator are derived. Two numerical studies have been conducted, using an artificial clustered population and a real data application, to examine the performance of the proposed estimator over the usual mean-per-unit estimator under simple random sampling (SRS). Results of the simulation study show that the proposed estimators provide better estimation results on both the real and artificial populations than the competing estimators.
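For reference, the three robust measures named above can be computed as follows (a minimal sketch of the standard definitions; the ACS ratio estimator itself and its bias/MSE expressions are not reproduced here):

```python
import numpy as np
from itertools import combinations_with_replacement

def tri_mean(x):
    """Tri-mean TM = (Q1 + 2*Q2 + Q3) / 4, a weighted average of the quartiles."""
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

def mid_range(x):
    """Mid-range MR = (min + max) / 2."""
    return (np.min(x) + np.max(x)) / 2

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise (Walsh) averages."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
    return np.median(walsh)
```

On a skewed sample such as [1, 2, 3, 4, 100], the tri-mean and Hodges-Lehmann estimates stay near the bulk of the data while the ordinary mean is pulled toward the outlier, which is what makes such measures attractive inside a ratio estimator for skewed populations.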

8.
In the context of nonlinear regression models, we propose an optimal experimental design criterion for estimating the parameters that accounts for the intrinsic and parameter-effects nonlinearity. The optimal design criterion proposed in this article minimizes the determinant of the mean squared error matrix of the parameter estimator that is quadratically approximated using the curvature array. The design criterion reduces to the D-optimal design criterion if there is no intrinsic or parameter-effects nonlinearity in the model, and depends on the scale parameter estimator and on the reparameterization used. Some examples, using a well-known nonlinear kinetics model, demonstrate the application of the proposed criterion to nonsequential design of experiments as compared with the D-optimal criterion.

9.
A vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But, for many processes, the quality variable is nonnormal, and also either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for design of plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG distribution based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three different likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) have been considered in the modeling process. For all of these likely scenarios, mathematical models giving the cost of using a variable acceptance sampling plan are developed. The cost models are optimized in selecting the optimal sampling plan parameters, such as the sample size, and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used in depicting the consequences of: 1) using the assumption that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. 
The information obtained on the parameter sensitivities can be used by model users in prudently allocating resources for the estimation of input parameters.

10.
In this paper, we study robust variable selection and parametric-component identification simultaneously in varying coefficient models. The proposed estimator is based on spline approximation and two smoothly clipped absolute deviation (SCAD) penalties through rank regression, which is robust with respect to heavy-tailed errors or outliers in the response. Furthermore, when the tuning parameter is chosen by a modified BIC criterion, we show that the proposed procedure is consistent both in variable selection and in the separation of varying and constant coefficients. In addition, the estimators of varying coefficients possess the optimal convergence rate under some assumptions, and the estimators of constant coefficients have the same asymptotic distribution as their counterparts obtained when the true model is known. Simulation studies and a real data example are undertaken to assess the finite sample performance of the proposed variable selection procedure.

11.
Inference for a generalized linear model is generally performed using asymptotic approximations for the bias and the covariance matrix of the parameter estimators. For small experiments, these approximations can be poor and result in estimators with considerable bias. We investigate the properties of designs for small experiments when the response is described by a simple logistic regression model and parameter estimators are to be obtained by the maximum penalized likelihood method of Firth [Firth, D., 1993, Bias reduction of maximum likelihood estimates. Biometrika, 80, 27–38]. Although this method achieves a reduction in bias, we illustrate that the remaining bias may be substantial for small experiments, and propose minimization of the integrated mean square error, based on Firth's estimates, as a suitable criterion for design selection. This approach is used to find locally optimal designs for two support points.

12.
The zero-inflated Poisson distribution has been used in the modeling of count data in different contexts. This model tends to be influenced by outliers because of the excessive occurrence of zeroes, so outlier identification and robust parameter estimation are important for this distribution. Some outlier identification methods are studied in this paper, and their applications and results are presented with an example. To eliminate the effect of outliers, two robust parameter estimates are proposed based on the trimmed mean and the Winsorized mean. Simulation results show the robustness of the proposed parameter estimates.
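The two robust location measures underlying the proposed estimates can be sketched as follows (assumptions mine: a symmetric trim fraction applied to a sorted sample; this shows the generic trimmed and Winsorized means, not the paper's exact ZIP estimators):

```python
import numpy as np

def trimmed_mean(x, trim=0.1):
    """Mean after discarding the trim fraction from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.floor(trim * len(x)))
    return x[k:len(x) - k].mean()

def winsorized_mean(x, trim=0.1):
    """Mean after replacing each tail's trim fraction with the nearest retained value."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.floor(trim * len(x)))
    if k > 0:
        x[:k] = x[k]        # pull the lower tail up
        x[-k:] = x[-k - 1]  # pull the upper tail down
    return x.mean()
```

For a zero-inflated count sample such as [0, 0, 0, 1, 1, 2, 2, 3, 3, 100], both robust means sit near the bulk of the counts while the raw mean is dominated by the single outlier.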

13.
Highly skewed and non-negative data can often be modeled by the delta-lognormal distribution in fisheries research. However, the coverage probabilities of existing interval estimation procedures are unsatisfactory for small sample sizes and highly skewed data. We propose a heuristic method of estimating confidence intervals for the mean of the delta-lognormal distribution, based on an asymptotic generalized pivotal quantity used to construct a generalized confidence interval. Simulation results show that the proposed interval estimation procedure yields satisfactory coverage probabilities, expected interval lengths and reasonable relative biases. Finally, the proposed method is applied to red cod density data as a demonstration.

14.
Model misspecification in generalized linear models (GLMs) usually occurs when the linear predictor and/or the link function assumed are incorrect. This article discusses the effect of such misspecification on design selection for multinomial GLMs and proposes the use of quantile dispersion graphs to select robust designs. Due to misspecification in the model, parameter estimates are usually biased, and the designs are compared on the basis of their mean squared error of prediction. Several numerical examples, including a real data set, are presented to illustrate the proposed methodology.

15.
Although regression estimates are quite robust to slight departure from normality, symmetric prediction intervals assuming normality can be highly unsatisfactory and problematic if the residuals have a skewed distribution. For data with distributions outside the class covered by the Generalized Linear Model, a common way to handle non-normality is to transform the response variable. Unfortunately, transforming the response variable often destroys the theoretical or empirical functional relationship connecting the mean of the response variable to the explanatory variables established on the original scale. Further complication arises if a single transformation cannot both stabilize variance and attain normality. Furthermore, practitioners also find the interpretation of highly transformed data not obvious and often prefer an analysis on the original scale. The present paper presents an alternative approach for handling simultaneously heteroscedasticity and non-normality without resorting to data transformation. Unlike classical approaches, the proposed modeling allows practitioners to formulate the mean and variance relationships directly on the original scale, making data interpretation considerably easier. The modeled variance relationship and form of non-normality in the proposed approach can be easily examined through a certain function of the standardized residuals. The proposed method is seen to remain consistent for estimating the regression parameters even if the variance function is misspecified. The method along with some model checking techniques is illustrated with a real example.

16.
Augmented mixed beta regression models are suitable choices for modeling continuous response variables on the closed interval [0, 1]. The random effects in these models are typically assumed to be normally distributed, but this assumption is frequently violated in applied studies. In this paper, an augmented mixed beta regression model with a skew-normal independent distribution for the random effects is used. We adopt a Bayesian approach for parameter estimation using an MCMC algorithm. The methods are evaluated using intensive simulation studies. Finally, the proposed models are applied to analyze a dataset from an Iranian Labor Force Survey.

17.
An important question within industrial statistics is how to find operating conditions that achieve some goal for the mean of a characteristic of interest while simultaneously minimizing the characteristic's process variance. Often, people refer to this kind of situation as the robust parameter design problem. The robust parameter design literature is rich with ways to create separate models for the mean and variance from this type of experiment. Many times time and/or cost constraints force certain factors of interest to be much more difficult to change than others. An appropriate approach to such an experiment restricts the randomization, which leads to a split-plot structure. The paper modifies the central composite design to allow the estimation of separate models for the characteristic's mean and variances under a split-plot structure. The paper goes on to discuss an appropriate analysis of the experimental results. It illustrates the methodology with an industrial experiment involving a chemical vapour deposition process for the manufacture of silicon wafers. The methodology was used to achieve a silicon layer thickness value of 485 Å while minimizing the process variation.

18.
We present a new experimental design procedure that divides a set of experimental units into two groups in order to minimize error in estimating a treatment effect. One concern is the elimination of large covariate imbalance between the two groups before the experiment begins. Another concern is robustness of the design to misspecification in response models. We address both concerns in our proposed design: we first place subjects into pairs using optimal nonbipartite matching, making our estimator robust to complicated nonlinear response models. Our innovation is to keep the matched pairs extant, take differences of the covariate values within each matched pair, and then use the greedy switching heuristic of Krieger et al. (2019) or rerandomization on these differences. This latter step greatly reduces covariate imbalance. Furthermore, our resultant designs are shown to be nearly as random as matching, which is robust to unobserved covariates. When compared to previous designs, our approach exhibits significant improvement in the mean squared error of the treatment effect estimator when the response model is nonlinear and performs at least as well when the response model is linear. Our design procedure is available as a method in the open source R package GreedyExperimentalDesign on CRAN.
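The overall flow (pair the units, take within-pair differences, then rerandomize on those differences) can be sketched as follows (assumptions mine: greedy nearest-neighbour matching stands in for optimal nonbipartite matching, and plain rerandomization stands in for the greedy switching heuristic):

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_pairs(X):
    """Pair units by greedy nearest-neighbour Euclidean matching.

    A simple stand-in for the optimal nonbipartite matching used in the paper.
    """
    unused = list(range(len(X)))
    pairs = []
    while unused:
        i = unused.pop(0)
        dists = [np.linalg.norm(X[i] - X[j]) for j in unused]
        j = unused.pop(int(np.argmin(dists)))
        pairs.append((i, j))
    return pairs

def rerandomized_assignment(X, n_draws=200):
    """Flip treatment within each pair; keep the draw whose signed
    within-pair covariate differences are most balanced."""
    pairs = greedy_pairs(X)
    diffs = np.array([X[i] - X[j] for i, j in pairs])
    best_signs, best_imb = None, np.inf
    for _ in range(n_draws):
        signs = rng.choice([-1, 1], size=len(pairs))
        imbalance = np.linalg.norm((signs[:, None] * diffs).mean(axis=0))
        if imbalance < best_imb:
            best_signs, best_imb = signs, imbalance
    return pairs, best_signs, best_imb
```

Here a sign of +1 could mean "first unit of the pair treated". Because randomness survives both in the pairing and in the retained draw, the design keeps much of the robustness to unobserved covariates that pure matching enjoys.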

19.
Development of algorithms that estimate the offset between two clocks has received a lot of attention, with the motivating force being data networking applications that require synchronous communication protocols. Recently, statistical modeling techniques have been used to develop improved estimation algorithms with the focus being obtaining robust estimators in terms of mean squared error. In this paper, we extend the use of statistical modeling techniques to address the construction of confidence intervals for the offset parameter. We consider the case where the distributions of network delays are members of a scale family. Our results include an asymptotic confidence interval and a generalized confidence interval in the sense of [S. Weerahandi, Generalized confidence intervals, Journal of the American Statistical Association 88 (1993) 899–905. Correction in vol. 89, p. 726, 1994]. We compare and contrast the two approaches for obtaining a confidence interval, and illustrate specific applications using exponential, Rayleigh and heavy-tailed Weibull network delays as concrete examples.

20.
A novel method is proposed for choosing the tuning parameter associated with a family of robust estimators. It consists of minimising estimated mean squared error, an approach that requires pilot estimation of model parameters. The method is explored for the family of minimum distance estimators proposed by [Basu, A., Harris, I.R., Hjort, N.L. and Jones, M.C., 1998, Robust and efficient estimation by minimising a density power divergence. Biometrika, 85, 549–559]. Our preference in that context is for a version of the method using the L2 distance estimator [Scott, D.W., 2001, Parametric statistical modeling by minimum integrated squared error. Technometrics, 43, 274–285] as pilot estimator.
