Similar Articles
 20 similar articles found (search time: 281 ms)
1.
Selective assembly is an effective approach for improving the quality of a product assembled from two types of components, when the quality characteristic is the clearance between the mating components. Mease et al. (2004, Technometrics 46:165-175) have extensively studied optimal binning strategies under squared error loss in selective assembly, especially for the case when the two types of component dimensions are identically distributed. However, the presence of measurement error in component dimensions has not been addressed. Here we study optimal binning strategies under squared error loss when measurement error is present. We give the equations for the optimal partition limits minimizing expected squared error loss, and show that their solution is unique when the component dimensions and the measurement errors are normally distributed. We then compare the expected losses of the optimal binning strategies with and without measurement error for the normal distribution, and also evaluate the influence of the measurement error.
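As a toy illustration of the binning mechanics described above, the sketch below simulates selective assembly with equal-probability bins (a deliberately simple strategy, not the optimal partitions derived in the paper), binning on noisy measurements while scoring the squared clearance on the true dimensions. The function name, distributions, and parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def squared_clearance_loss(n_bins, n_sim=200_000, sigma_meas=0.0, seed=0):
    """Monte Carlo estimate of the expected squared clearance E[(X - Y)^2]
    under selective assembly with equal-probability bins. True dimensions
    X, Y ~ N(0, 1); binning uses values contaminated by N(0, sigma_meas^2)
    measurement error, while the loss is computed from the true values."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_sim)
    y = rng.standard_normal(n_sim)
    xm = x + sigma_meas * rng.standard_normal(n_sim)  # measured X
    ym = y + sigma_meas * rng.standard_normal(n_sim)  # measured Y
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    edges = np.quantile(xm, qs)                       # shared partition limits
    bx = np.digitize(xm, edges)
    by = np.digitize(ym, edges)
    losses, weights = [], []
    for j in range(n_bins):
        xs, ys = x[bx == j], y[by == j]
        m = min(len(xs), len(ys))                     # mate within the bin
        if m:
            losses.append(float(np.mean((xs[:m] - ys[:m]) ** 2)))
            weights.append(m)
    return float(np.average(losses, weights=weights))
```

With one bin (no selection) the loss is E[(X - Y)^2] = 2; more bins shrink it, and measurement error erodes part of the gain, which is the effect the paper quantifies.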

2.
Minimax squared error risk estimators of the mean of a multivariate normal distribution are characterized which have smallest Bayes risk with respect to a spherically symmetric prior distribution for (i) squared error loss, and (ii) zero-one loss depending on whether or not estimates are consistent with the hypothesis that the mean is null. In (i), the optimal estimators are the usual Bayes estimators for prior distributions with special structure. In (ii), preliminary test estimators are optimal. The results are obtained by applying the theory of minimax-Bayes-compromise decision problems.

3.
This paper develops alternatives to maximum likelihood estimators (MLE) for logistic regression models and compares the mean squared error (MSE) of the estimators. The MLE for the vector of underlying success probabilities has low MSE only when the true probabilities are extreme (i.e., near 0 or 1). Extreme probabilities correspond to logistic regression parameter vectors which are large in norm. A competing “restricted” MLE and an empirical version of it are suggested as estimators with better performance than the MLE for central probabilities. An approximate EM-algorithm for estimating the restriction is described. As in the case of normal theory ridge estimators, the proposed estimators are shown to be formally derivable by Bayes and empirical Bayes arguments. The small sample operating characteristics of the proposed estimators are compared to the MLE via a simulation study; both the estimation of individual probabilities and of logistic parameters are considered.
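The shrinkage idea in this abstract can be sketched with an ordinary ridge-penalized logistic fit standing in for the "restricted" MLE; the penalty form, the Newton solver, and the simulation settings below are our assumptions for illustration, not the paper's estimator. When the true probabilities are central (coefficients near zero), shrinkage should beat the plain MLE in MSE of the fitted probabilities.

```python
import numpy as np

def fit_logistic(X, y, ridge=0.0, n_iter=40):
    """Newton-Raphson logistic regression with an optional ridge penalty on
    the coefficients. The penalized fit is a stand-in for the restricted
    MLE, which the paper shows is derivable by Bayes/empirical Bayes
    arguments, much like normal-theory ridge estimators."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -30.0, 30.0)
        mu = 1.0 / (1.0 + np.exp(-eta))
        W = mu * (1.0 - mu) + 1e-10          # guard against zero weights
        H = X.T @ (W[:, None] * X) + ridge * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - mu) - ridge * beta)
    return beta

def mse_of_probs(ridge, n_rep=40, n=100, seed=1):
    """Average squared error of fitted success probabilities when the true
    coefficients are small in norm (central probabilities)."""
    rng = np.random.default_rng(seed)
    beta_true = np.array([0.2, -0.2, 0.1, 0.0])
    total = 0.0
    for _ in range(n_rep):
        X = rng.standard_normal((n, 4))
        p = 1.0 / (1.0 + np.exp(-X @ beta_true))
        y = (rng.random(n) < p).astype(float)
        b = fit_logistic(X, y, ridge=ridge)
        phat = 1.0 / (1.0 + np.exp(-np.clip(X @ b, -30.0, 30.0)))
        total += float(np.mean((phat - p) ** 2))
    return total / n_rep
```

Since both calls use the same seed, the comparison is paired over identical simulated datasets.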

4.
In this article, we consider the Bayes and empirical Bayes estimation of the current population mean of a finite population when sample data are available from other similar (m-1) finite populations. We investigate a general class of linear estimators and obtain the optimal linear Bayes estimator of the finite population mean under a squared error loss function that incorporates the cost of sampling. The optimal linear Bayes estimator and the sample size are obtained as functions of the parameters of the prior distribution. The corresponding empirical Bayes estimates are obtained by replacing the unknown hyperparameters with their respective consistent estimates. A Monte Carlo study is conducted to evaluate the performance of the proposed empirical Bayes procedure.

5.
This paper introduces two estimators, a boundary corrected minimum variance kernel estimator based on a uniform kernel and a discrete frequency polygon estimator, for the cell probabilities of ordinal contingency tables. Simulation results show that the minimum variance boundary kernel estimator has a smaller average sum of squared error than the existing boundary kernel estimators. The discrete frequency polygon estimator is simple and easy to interpret, and it is competitive with the minimum variance boundary kernel estimator. It is proved that both estimators have an optimal rate of convergence in terms of mean sum of squared error. The estimators are also defined for high-dimensional tables.

6.
In this paper properties of two estimators of Cpm are investigated in terms of changes in the process mean and variance. The bias and mean squared error of these estimators are derived. It can be shown that the estimate of Cpm proposed by Chan, Cheng and Spiring (1988) has smaller bias than the one proposed by Boyles (1991) and also has a smaller mean squared error under certain conditions. Various approximate confidence intervals for Cpm are obtained and are compared in terms of coverage probabilities, missed rate and average interval width.
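The two estimators being compared are both plug-in forms of Cpm = (USL - LSL) / (6 sqrt(sigma^2 + (mu - T)^2)). The sketch below implements the two forms as commonly written; the exact divisors (n vs. n - 1) vary across papers, so n - 1 is used throughout as an illustrative convention and the constants may differ slightly from either source.

```python
import numpy as np

def cpm_hat(x, lsl, usl, target, form="ccs"):
    """Plug-in estimators of Cpm = (USL - LSL) / (6 sqrt(sigma^2 + (mu - T)^2)).
    'ccs' follows the Chan-Cheng-Spiring style form based on
    sum((x - T)^2) / (n - 1); 'boyles' plugs in the sample mean and
    variance separately. Since sum((x - T)^2) = sum((x - xbar)^2)
    + n * (xbar - T)^2, the two differ only in how the off-target term
    is weighted."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if form == "ccs":
        tau2 = np.sum((x - target) ** 2) / (n - 1)
    else:  # 'boyles'
        tau2 = np.var(x, ddof=1) + (x.mean() - target) ** 2
    return float((usl - lsl) / (6.0 * np.sqrt(tau2)))
```

Because the 'ccs' denominator carries the off-target term with weight n/(n-1) rather than 1, it is never larger than the 'boyles' estimate.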

7.
It has been recognized that counting the objects allocated by a rule of classification to several unknown classes often does not provide good estimates of the true class proportions of the objects to be classified. We propose a linear transformation of these classification estimates, which minimizes the mean squared error of the transformed estimates over all possible sets of true proportions. This so-called best-linear-corrector (BLC) transformation is a function of the confusion (classification-error) matrix and of the first and second moments of the prior distribution of the vector of proportions. When the number of objects to be classified increases, the BLC tends to the inverse of the confusion matrix. The estimates that are obtained directly by this inverse-confusion corrector (ICC) are also the maximum-likelihood unbiased estimates of the probabilities that the objects originate from one or the other class, had the objects been preselected with those probabilities. But for estimating the actual proportions, the ICC estimates behave less well than the raw classification estimates for some collections. In that situation, the BLC is substantially superior to the ICC even for some large collections of objects and is always substantially superior to the raw estimates. The statistical model is applied concretely to measuring forest cover in remote sensing.
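The inverse-confusion corrector (ICC) mentioned above is simple to state in code: if column j of the confusion matrix holds P(classified as i | true class j), the expected raw proportions are the confusion matrix applied to the true proportions, so inverting it removes the classification bias. This minimal sketch shows only the ICC; the BLC, which additionally uses prior moments of the proportion vector, is omitted.

```python
import numpy as np

def icc_estimate(raw_props, confusion):
    """Inverse-confusion corrector: solve confusion @ p = raw_props for the
    true class proportions p, where confusion[i, j] is the probability of
    classifying a true class-j object as class i. Equivalent to applying
    the inverse of the confusion matrix to the raw classification counts
    expressed as proportions."""
    return np.linalg.solve(np.asarray(confusion, float),
                           np.asarray(raw_props, float))
```

For example, with 10% of class-1 objects misread as class 2 and 20% the other way, raw proportions computed from true proportions are exactly undone by the corrector.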

8.
The article considers nonparametric inference for quantile regression models with time-varying coefficients. The errors and covariates of the regression are assumed to belong to a general class of locally stationary processes and are allowed to be cross-dependent. Simultaneous confidence tubes (SCTs) and integrated squared difference tests (ISDTs) are proposed for simultaneous nonparametric inference of the latter models with asymptotically correct coverage probabilities and Type I error rates. Our methodologies are shown to possess certain asymptotically optimal properties. Furthermore, we propose an information criterion that performs consistent model selection for nonparametric quantile regression models of nonstationary time series. For implementation, a wild bootstrap procedure is proposed, which is shown to be robust to the dependent and nonstationary data structure. Our method is applied to studying the asymmetric and time-varying dynamic structures of the U.S. unemployment rate since the 1940s. Supplementary materials for this article are available online.

9.
In this paper, a nonparametric Bayesian approach to the analysis of binary response data is considered. Using a Dirichlet process prior, and squared error loss, the Bayes estimators of response probabilities are obtained. Finally, the results obtained are employed to analyze the ARC 090 Trial data.
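For a single binary response, a Dirichlet process prior on the response distribution reduces to a Beta prior on the success probability, so the squared-error-loss Bayes estimate is a precision-weighted average of the prior guess and the sample proportion. The sketch below shows that reduction; the parameter names are illustrative, and the paper's full nonparametric treatment of the ARC 090 data is not reproduced.

```python
def dp_bayes_prob(successes, n, prior_guess, precision):
    """Bayes estimate of a response probability under squared error loss.
    For binary data a Dirichlet process prior with base probability
    `prior_guess` and concentration `precision` induces a
    Beta(precision * prior_guess, precision * (1 - prior_guess)) prior,
    whose posterior mean is this shrinkage of the sample proportion
    toward the prior guess."""
    return (precision * prior_guess + successes) / (precision + n)
```

As the concentration grows the estimate approaches the prior guess; at concentration zero it is the raw sample proportion.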

10.
This paper considers the problem of selecting optimal bandwidths for variable (sample-point adaptive) kernel density estimation. A data-driven variable bandwidth selector is proposed, based on the idea of approximating the log-bandwidth function by a cubic spline. This cubic spline is optimized with respect to a cross-validation criterion. The proposed method can be interpreted as a selector for either integrated squared error (ISE) or mean integrated squared error (MISE) optimal bandwidths. This leads to reflection upon some of the differences between ISE and MISE as error criteria for variable kernel estimation. Results from simulation studies indicate that the proposed method outperforms a fixed kernel estimator (in terms of ISE) when the target density has a combination of sharp modes and regions of smooth undulation. Moreover, some detailed data analyses suggest that the gains in ISE may understate the improvements in visual appeal obtained using the proposed variable kernel estimator. These numerical studies also show that the proposed estimator outperforms existing variable kernel density estimators implemented using piecewise constant bandwidth functions.
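The estimator that the paper's selector plugs into is a sample-point adaptive KDE, where each data point carries its own bandwidth given by a log-bandwidth function. The sketch below implements only that estimator with a Gaussian kernel; the paper's cubic-spline parameterization of the log-bandwidth and its cross-validation optimization are the parts left out, so `log_bw` is any user-supplied function.

```python
import numpy as np

def variable_kde(grid, data, log_bw):
    """Sample-point adaptive Gaussian kernel density estimate: data point
    X_i gets its own bandwidth h_i = exp(log_bw(X_i)). In the paper,
    log_bw is a cubic spline optimized by cross-validation; here it is
    supplied by the caller, so the selection step is omitted."""
    grid = np.asarray(grid, float)
    data = np.asarray(data, float)
    h = np.exp(log_bw(data))                              # per-point bandwidths
    u = (grid[:, None] - data[None, :]) / h[None, :]
    kernels = np.exp(-0.5 * u ** 2) / (h[None, :] * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)
```

With a constant log-bandwidth this reduces to the ordinary fixed-bandwidth KDE, which makes it easy to sanity-check.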

11.
In this article, we introduce a wavelet threshold estimator to estimate multinomial probabilities. The advantages of the estimator are its adaptability to the roughness and sparseness of the data. The asymptotic behavior of the estimator is investigated through an often-used criterion: the mean sum of squared error (MSSE). We show that the MSSE of the estimator achieves the optimal rate of convergence. Its performance on finite samples is examined through simulation studies which show favorable results for the new estimator over the commonly used kernel estimator.

12.
We present schemes for the allocation of subjects to treatment groups, in the presence of prognostic factors. The allocations are robust against incorrectly specified regression responses, and against possible heteroscedasticity. Assignment probabilities which minimize the asymptotic variance are obtained. Under certain conditions these are shown to be minimax (with respect to asymptotic mean squared error) as well. We propose a method of sequentially modifying the associated assignment rule, so as to address both variance and bias in finite samples. The resulting scheme is assessed in a simulation study. We find that, relative to common competitors, the robust allocation schemes can result in significant decreases in the mean squared error when the fitted models are biased, at a minimal cost in efficiency when in fact the fitted models are correct.

13.
Strategies for improving fixed non-negative kernel estimators have focused on reducing the bias, either by employing higher-order kernels or by adjusting the bandwidth locally. Intuitively, bandwidths in the tails should be relatively larger in order to reduce wiggles, since there is less data available in the tails. We show that in regions where the density function is convex, it is theoretically possible to find local bandwidths such that the pointwise bias is exactly zero. The corresponding pointwise mean squared error converges at the parametric rate of O(n^{-1}) rather than the slower O(n^{-4/5}). These so-called zero-bias bandwidths are constant and are usually orders of magnitude larger than the optimal locally adaptive bandwidths predicted by asymptotic mean squared error analysis. We describe data-based algorithms for estimating zero-bias bandwidths over intervals where the density is convex. We find that our particular density estimator attains the usual O(n^{-4/5}) rate. However, we demonstrate that the algorithms can provide significant improvement in mean squared error, often clearly visually superior curves, and a new operating point in the usual bias-variance tradeoff.
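The zero-bias phenomenon can be seen in closed form for a Gaussian kernel and a N(0,1) target density: the expected KDE is the N(0, 1 + h^2) density (a Gaussian convolved with a Gaussian), so the exact pointwise bias is computable and one can solve for the bandwidth at which it vanishes. The choice x = 2 (a convex region of the normal density) and the bisection bracket below are illustrative assumptions, not the paper's data-based algorithm; the huge root is consistent with the abstract's remark that zero-bias bandwidths are orders of magnitude larger than asymptotically optimal ones.

```python
import math

def kde_bias_at(x, h):
    """Exact pointwise bias of a Gaussian-kernel KDE of the N(0, 1) density:
    E[f_hat(x)] is the N(0, 1 + h^2) density at x, so the bias is that
    value minus the true N(0, 1) density at x."""
    def normal_pdf(z, v):  # N(0, v) density at z
        return math.exp(-z * z / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
    return normal_pdf(x, 1.0 + h * h) - normal_pdf(x, 1.0)

def zero_bias_bandwidth(x, lo=1e-3, hi=50.0):
    """Bisection for the bandwidth at which the exact pointwise bias is
    zero. At x = 2 the N(0, 1) density is locally convex, so the bias is
    positive for small h and eventually turns negative as h grows."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kde_bias_at(x, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At x = 2 the zero-bias bandwidth comes out near 7, versus typical asymptotically optimal bandwidths well below 1 for moderate samples.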

14.
This article deals with progressive first-failure censoring, which is a generalization of progressive censoring. We derive maximum likelihood estimators of the unknown parameters and reliability characteristics of the generalized inverted exponential distribution using progressive first-failure censored samples. The asymptotic confidence intervals and coverage probabilities for the parameters are obtained based on the observed Fisher's information matrix. Bayes estimators of the parameters and reliability characteristics under squared error loss function are obtained using the Lindley approximation and importance sampling methods. Also, highest posterior density credible intervals for the parameters are computed using the importance sampling procedure. A Monte Carlo simulation study is conducted to analyse the performance of the estimators derived in the article. A real data set is discussed for illustration purposes. Finally, an optimal censoring scheme is suggested using different optimality criteria.

15.
This paper addresses the problem of probability density estimation in the presence of covariates when data are missing at random (MAR). The inverse probability weighted method is used to define nonparametric and semiparametric weighted probability density estimators. A regression calibration technique is also used to define an imputed estimator. It is shown that all the estimators are asymptotically normal with the same asymptotic variance as that of the inverse probability weighted estimator with known selection probability function and weights. Also, we establish the mean squared error (MSE) bounds and obtain the MSE convergence rates. A simulation is carried out to assess the proposed estimators in terms of the bias and standard error.
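A minimal sketch of the inverse probability weighted density estimator with a Gaussian kernel: each observed value is weighted by the inverse of its selection probability, so the weighted sample represents the full population under MAR. The selection probabilities are taken as known here; the paper also covers estimated selection probabilities and the regression-calibration imputed estimator, which this sketch does not attempt.

```python
import numpy as np

def ipw_kde(grid, x, observed, pi, h):
    """Inverse-probability-weighted Gaussian kernel density estimate:
    f_hat(t) = (1/n) * sum_i (delta_i / pi_i) * K_h(t - X_i), where
    delta_i indicates whether X_i was observed and pi_i is its selection
    probability. Unobserved slots of x (which may hold NaN) are zero-filled
    because their weight is zero anyway."""
    grid = np.asarray(grid, float)
    observed = np.asarray(observed, bool)
    x = np.where(observed, np.asarray(x, float), 0.0)  # fill unused slots
    w = observed / np.asarray(pi, float)               # delta_i / pi_i
    u = (grid[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2.0 * np.pi))
    return (k * w[None, :]).sum(axis=1) / len(x)
```

With constant selection probability this reduces to the complete-case KDE rescaled to integrate to one in expectation.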

16.
In this work, we propose a beta prime kernel estimator for estimating probability density functions with nonnegative support. For the proposed estimator, the beta prime probability density function is used as the kernel. It is free of boundary bias, nonnegative, and has a naturally varying shape. We obtain the optimal rate of convergence for the mean squared error (MSE) and the mean integrated squared error (MISE). We also use an adaptive Bayesian bandwidth selection method with Lindley approximation for heavy-tailed distributions and compare its performance with the global least squares cross-validation bandwidth selection method. Simulation studies are performed to evaluate the average integrated squared error (ISE) of the proposed kernel estimator against some asymmetric competitors using Monte Carlo simulations. Moreover, real data sets are presented to illustrate the findings.

17.
The ridge regression, as a modification of least squares, is one of the ways of overcoming multicollinearity of regressors in regression analysis. The central problem in the application of ridge regression is the choice of the perturbation factor k. The paper compares the performance of some subjective (SM) and objective (OM) methods of selecting k with respect to the estimates of the mean squared estimation error (MSEE) and the mean squared prediction error (MSPE). The chosen methods were applied to empirical data relating to the social product and some other relevant factors of the agriculture of Yugoslavia and its regions.
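For a fixed design and a given true coefficient vector, the MSEE criterion used in this comparison has a closed form, so any candidate k can be scored exactly. The sketch below is the standard bias-variance decomposition of the ridge estimator, not the paper's data or selection methods; the function name is ours.

```python
import numpy as np

def ridge_msee(X, beta, sigma2, k):
    """Exact mean squared estimation error of the ridge estimator
    beta_hat(k) = (X'X + kI)^{-1} X'y for a fixed design X, true
    coefficient vector beta and noise variance sigma2: the variance term
    sigma2 * tr(A X'X A) plus the squared bias ||(A X'X - I) beta||^2,
    where A = (X'X + kI)^{-1}."""
    XtX = X.T @ X
    I = np.eye(XtX.shape[0])
    A = np.linalg.inv(XtX + k * I)
    variance = sigma2 * np.trace(A @ XtX @ A)
    bias = (A @ XtX - I) @ beta
    return float(variance + bias @ bias)
```

Under severe collinearity (one tiny eigenvalue of X'X) even a crude choice like k = 1 cuts the MSEE dramatically relative to least squares (k = 0), which is why the choice of k matters so much in practice.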

18.
Kupper and Meydrech and Myers and Lahoda introduced the mean squared error (MSE) approach to study response surface designs; Duncan and DeGroot derived a criterion for optimality of linear experimental designs based on minimum mean squared error. However, minimization of the MSE of an estimator may require some knowledge about the unknown parameters. Without such knowledge, construction of designs optimal in the sense of MSE may not be possible. In this article, a simple method of selecting the levels of regressor variables suitable for estimating some functions of the parameters of a lognormal regression model is developed, using a criterion for optimality based on the variance of an estimator. For some special parametric functions, the criterion used here is equivalent to the criterion of minimizing the mean squared error. It is found that the maximum likelihood estimators of a class of parametric functions can be improved substantially (in the sense of MSE) by proper choice of the values of regressor variables. Moreover, our approach is applicable to analysis of variance as well as regression designs.

19.
This paper presents the Bayesian analysis of a semiparametric regression model that consists of parametric and nonparametric components. The nonparametric component is represented with a Fourier series where the Fourier coefficients are assumed a priori to have zero means and to decay to 0 in probability at either algebraic or geometric rates. The rate of decay controls the smoothness of the response function. The posterior analysis automatically selects the amount of smoothing that is coherent with the model and data. Posterior probabilities of the parametric and semiparametric models provide a method for testing the parametric model against a non-specific alternative. The Bayes estimator's mean integrated squared error compares favourably with the theoretically optimal estimator for kernel regression.

20.
Newhouse and Oman (1971) identified the orientations with respect to the eigenvectors of X'X of the true coefficient vector of the linear regression model for which the ordinary ridge regression estimator performs best and worst when mean squared error is the measure of performance. In this paper the corresponding result is derived for generalized ridge regression for two risk functions: mean squared error and mean squared error of prediction.
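The orientation effect can be made concrete with the standard eigenbasis decomposition of ordinary ridge MSE (a textbook identity, not reproduced from either paper): the variance term is shared by all orientations, while the squared-bias term depends only on the eigenvalue that the coefficient vector sits on, so it is smallest along the largest-eigenvalue eigenvector and largest along the smallest.

```python
import numpy as np

def ridge_mse_orientation(eigvals, beta_norm, sigma2, k, j):
    """MSE of the ordinary ridge estimator when the true coefficient
    vector of length beta_norm lies along the j-th eigenvector of X'X.
    In the eigenbasis: variance term sigma2 * sum(lambda_i / (lambda_i + k)^2)
    shared by all orientations, plus the orientation-specific squared bias
    k^2 * beta_norm^2 / (lambda_j + k)^2."""
    lam = np.asarray(eigvals, float)
    variance = sigma2 * np.sum(lam / (lam + k) ** 2)
    bias_sq = (k * beta_norm / (lam[j] + k)) ** 2
    return float(variance + bias_sq)
```

Evaluating the formula across orientations of equal length reproduces the qualitative ranking the abstract refers to.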


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)