Similar Literature
 20 similar records found
1.
An important problem in process adjustment using feedback is how often to sample the process and when and by how much to apply an adjustment. Minimum-cost feedback schemes based on simple, but practically interesting, models for disturbances and dynamics have been discussed in several particular cases. This article considers the more general situation in which there may be measurement and adjustment errors, deterministic process drift, and costs of taking an observation, of making an adjustment, and of being off target. Assuming all these costs to be known, a numerical method to minimize the overall expected cost is presented. This method provides the optimal sampling interval, action limits, and amount of adjustment, together with the resulting average adjustment interval, mean squared deviation from target, and minimum overall expected cost. When the costs of taking an observation, of making an adjustment, and of being off target are not known, the method can be used to choose a particular scheme by judging the advantages and disadvantages of alternative options: the mean squared deviation they produce, the frequency with which they require observations to be made, and the resulting average time between adjustments. Computer codes that perform the required computations are provided in the appendices and applied to find optimal adjustment schemes in three real applications.

2.
Many industrial processes must be adjusted from time to time to keep their mean continuously close to the target value. Compensation for deviations of the process mean from the target may be accomplished by feedback and/or feedforward adjustment. Feedback adjustments are made in reaction to errors at the output; feedforward adjustments are made to compensate for anticipated changes. This article considers the complementary use of feedback and feedforward adjustments to compensate for anticipated step changes in the process mean, as may be necessary in a manufacturing process each time a new batch of feedstock material is introduced. We consider and compare five alternative control schemes: (1) feedforward adjustment alone, (2) feedback adjustment alone, (3) combined feedback–feedforward adjustment, (4) feedback with indirect feedforward to increase the sensitivity of the feedback scheme, and (5) feedback with both direct and indirect feedforward.

3.
Srivastava and Wu (1997) considered a random walk model with a sampling interval and measurement error assumed to be white noise. In this paper, we consider the situation in which the measurement error is itself a random walk. It is assumed that there is a sampling cost and an adjustment cost, and that the cost of deviating from the target value is proportional to the square of the deviation. The long-run average cost rate is evaluated exactly in terms of the first four moments of a randomly stopped random walk. Using approximations of those moments, optimum values of the control parameters are given.
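The cost structure described above (a per-sample cost, a per-adjustment cost, and a quadratic off-target cost) can be made concrete with a small simulation. The sketch below is illustrative only and is not the paper's model: it uses a plain random-walk disturbance with a deadband adjustment rule, and all function names and parameter values are assumptions chosen for the example.

```python
import random

def average_cost_rate(limit, interval, n_steps=50_000,
                      c_sample=1.0, c_adjust=5.0, c_offtarget=0.1,
                      sigma=1.0, seed=42):
    """Simulate a random-walk disturbance observed every `interval` steps.
    When the observed deviation exceeds `limit`, the process is reset to
    target (an adjustment). Returns the long-run average cost per step."""
    rng = random.Random(seed)
    deviation, cost = 0.0, 0.0
    for t in range(1, n_steps + 1):
        deviation += rng.gauss(0.0, sigma)    # random-walk drift
        cost += c_offtarget * deviation ** 2  # quadratic off-target penalty
        if t % interval == 0:                 # scheduled sampling point
            cost += c_sample
            if abs(deviation) > limit:        # action limit exceeded
                cost += c_adjust
                deviation = 0.0               # adjustment returns process to target
    return cost / n_steps
```

Comparing `average_cost_rate` across a grid of `(limit, interval)` pairs mimics the trade-off the paper optimizes analytically: tight limits buy small deviations at the price of frequent adjustments.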

4.
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have found extension to problems of interval estimation. The author presents valid two‐sided 95% and 99% “shrinkage” confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student distribution when the population mean lies in such an “effective interval.” A reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged.

5.
Srivastava and Wu, and Box and Kramer, considered an integrated moving average process of order one with a sampling interval for process adjustment. However, their results were obtained by asymptotic methods and by simulation, respectively. In this paper, these results are obtained analytically. It is assumed that there is a sampling cost and an adjustment cost, and that the cost of deviating from the target value is proportional to the square of the deviation. The long-run average cost is evaluated exactly in terms of moments of the randomly stopped random walk. Two approximations are given and shown by simulation to be close to the exact value. One of these approximations is used to obtain an explicit expression for the optimum inspection interval and the control limit at which an adjustment is to be made.

6.
In this paper, properties of two estimators of Cpm are investigated in terms of changes in the process mean and variance. The bias and mean squared error of these estimators are derived. It can be shown that the estimator of Cpm proposed by Chan, Cheng and Spiring (1988) has smaller bias than the one proposed by Boyles (1991), and also has smaller mean squared error under certain conditions. Various approximate confidence intervals for Cpm are obtained and compared in terms of coverage probability, missed rate and average interval width.
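For context, the index being estimated is Cpm = (USL − LSL) / (6·sqrt(σ² + (μ − T)²)), where T is the target. The sketch below shows two plug-in estimates that differ only in whether sum((x − T)²) is divided by n − 1 or by n; it is a hedged illustration and does not claim which divisor corresponds to which of the two cited papers.

```python
import math

def cpm_estimates(xs, usl, lsl, target):
    """Plug-in estimates of Cpm = (USL - LSL) / (6 * sqrt(E[(X - T)^2])).
    Returns the two variants dividing sum((x - T)^2) by n-1 and by n."""
    n = len(xs)
    ss_target = sum((x - target) ** 2 for x in xs)
    est_nm1 = (usl - lsl) / (6 * math.sqrt(ss_target / (n - 1)))
    est_n = (usl - lsl) / (6 * math.sqrt(ss_target / n))
    return est_nm1, est_n
```

Because the n divisor yields the smaller denominator inside the square root, the second variant is always at least as large as the first, which is one source of the bias differences studied in the abstract.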

7.
The accuracy of a diagnostic test is typically characterized using the receiver operating characteristic (ROC) curve. Summarizing indexes such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often additional information is available on some of the covariates which are known to influence the accuracy of such measures. The authors propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and possibly non‐normal errors are discussed and analyzed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the model that relaxes the assumption of normality, the authors propose a covariate‐adjusted Mann–Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient for implementation. This provides a generalization of the Mann–Whitney approach for comparing two populations by taking covariate effects into account. The authors derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and mean squared error (MSE) consistency. The MSE of the AUC estimators was also assessed in smaller samples by simulation. Data from an agricultural study were used to illustrate the methods of analysis. The Canadian Journal of Statistics 38:27–46; 2010 © 2009 Statistical Society of Canada
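The baseline that the paper's covariate-adjusted estimator generalizes is the classical Mann–Whitney AUC: the proportion of (case, control) pairs in which the case score exceeds the control score. A minimal sketch of that unadjusted estimator (the covariate adjustment itself is not reproduced here):

```python
def auc_mann_whitney(cases, controls):
    """Empirical AUC: fraction of (case, control) pairs where the case
    score exceeds the control score, counting ties as 1/2."""
    concordant = 0.0
    for y in cases:
        for x in controls:
            if y > x:
                concordant += 1.0
            elif y == x:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))
```

An AUC of 1.0 means the test separates the two populations perfectly; 0.5 means it is no better than chance.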

8.

This paper is concerned with properties (bias, standard deviation, mean squared error and efficiency) of twenty-six estimators of the intraclass correlation in the analysis of binary data. Our main interest is to study these properties when data are generated from different distributions. For data generation we considered three over-dispersed binomial distributions, namely the beta-binomial distribution, the probit-normal binomial distribution and a mixture of two binomial distributions. The findings regarding bias, standard deviation and mean squared error of all these estimators are that (a) in general, the distributions of the biases of most of the estimators are negatively skewed; the biases are smallest when data are generated from the beta-binomial distribution and largest when data are generated from the mixture distribution; (b) the standard deviations are smallest when data are generated from the beta-binomial distribution; and (c) the mean squared errors are smallest when data are generated from the beta-binomial distribution and largest when data are generated from the mixture distribution. Of the twenty-six, nine estimators, including the maximum likelihood estimator, an estimator based on the optimal quadratic estimating equations of Crowder (1987), and an analysis-of-variance-type estimator, are found to have the least bias, standard deviation and mean squared error. Also, the distributions of the bias, standard deviation and mean squared error for each of these estimators are, in general, more symmetric than those of the other estimators. Our findings regarding efficiency are that the estimator based on the optimal quadratic estimating equations has consistently high efficiency and the least variability in the efficiency results. In the important range in which the intraclass correlation is small (≤0.5), this estimator shows, on average, the best efficiency performance.
The analysis-of-variance-type estimator seems to do well for larger values of the intraclass correlation. In general, the estimator based on the optimal quadratic estimating equations shows the best efficiency performance for data from the beta-binomial and probit-normal binomial distributions, and the analysis-of-variance-type estimator seems to do well for data from the mixture distribution.
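One of the nine well-performing estimators mentioned above is of the analysis-of-variance type. The standard one-way ANOVA estimator of the intraclass correlation (as given, e.g., in classical texts on clustered binary data) can be sketched as follows; this is a generic version, not necessarily the exact variant compared in the paper.

```python
def icc_anova(clusters):
    """ANOVA-type estimator of the intraclass correlation from clustered
    binary (0/1) responses: rho = (MSB - MSW) / (MSB + (n0 - 1) * MSW),
    where n0 is the usual adjusted average cluster size."""
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    N = sum(sizes)
    grand_mean = sum(sum(c) for c in clusters) / N
    ssb = sum(n * (sum(c) / n - grand_mean) ** 2
              for n, c in zip(sizes, clusters))
    ssw = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters)
    msb = ssb / (k - 1)                                # between-cluster mean square
    msw = ssw / (N - k)                                # within-cluster mean square
    n0 = (N - sum(n * n for n in sizes) / N) / (k - 1) # adjusted cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

With perfectly homogeneous clusters the within-cluster mean square is zero and the estimate is exactly 1.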

9.
Five sampling schemes (SS) for price index construction – one cut-off sampling technique and four probability-proportional-to-size (pps) methods – are evaluated by comparing their performance on a homescan market research data set across 21 months for each of the 13 classification of individual consumption by purpose (COICOP) food groups. Classifications are derived for each of the food groups, and the population index value is used as a reference to derive performance error measures, such as root mean squared error, bias and standard deviation, for each food type. Repeated samples are taken for each of the pps schemes, and the resulting performance error measures are analysed using regression for three of the pps schemes to assess the overall effect of SS and COICOP group whilst controlling for sample size, month and population index value. Cut-off sampling appears to perform less well than the pps methods, and multistage pps seems to have no advantage over its single-stage counterpart. The jackknife resampling technique is also explored as a means of estimating the standard error of the index and compared with the actual results from repeated sampling.

10.
In this paper, we consider an adjustment of degrees of freedom in the minimum mean squared error (MMSE) estimator. We derive the exact MSE of the adjusted MMSE (AMMSE) estimator, and compare the MSE of the AMMSE estimator with those of the Stein-rule (SR), positive-part Stein-rule (PSR) and MMSE estimators by numerical evaluation. It is shown that the adjustment of degrees of freedom is effective when the noncentrality parameter is close to zero, and that the MSE performance of the MMSE estimator can be improved over a wide region of the noncentrality parameter by the adjustment. It is also shown that the AMMSE estimator can have smaller MSE than the PSR estimator over a wide region of the noncentrality parameter.

11.
Although multiple indices have been introduced in the area of agreement measurement, the only documented index for linear relational agreement, which is for interval-scale data, is the Pearson product-moment correlation coefficient. Despite its meaningfulness, the Pearson product-moment correlation coefficient does not convey practical information such as what proportion of observations is within a certain boundary of the target value. To address this need, based on inverse regression, we propose the adjusted mean squared deviation (AMSD), adjusted coverage probability (ACP), and adjusted total deviation index (ATDI) for the measurement of relational agreement. They can serve as reasonable and practically meaningful measurements of relational agreement. Real-life data are considered to illustrate the performance of the methods.

12.
This article proposes several estimators for estimating the ridge parameter k in the Poisson ridge regression (RR) model. These estimators are evaluated by means of Monte Carlo simulations. As performance criteria, we calculate the mean squared error (MSE), the mean value, and the standard deviation of k. The first criterion is commonly used, while the other two have never been used in analyzing Poisson RR. However, these performance criteria are very informative because, if several estimators have an equal estimated MSE, then those with low average value and standard deviation of k should be preferred. Based on the simulation results, we recommend some biasing parameters that may be useful for practitioners in the fields of health, social, and physical sciences.

13.
We present schemes for the allocation of subjects to treatment groups, in the presence of prognostic factors. The allocations are robust against incorrectly specified regression responses, and against possible heteroscedasticity. Assignment probabilities which minimize the asymptotic variance are obtained. Under certain conditions these are shown to be minimax (with respect to asymptotic mean squared error) as well. We propose a method of sequentially modifying the associated assignment rule, so as to address both variance and bias in finite samples. The resulting scheme is assessed in a simulation study. We find that, relative to common competitors, the robust allocation schemes can result in significant decreases in the mean squared error when the fitted models are biased, at a minimal cost in efficiency when in fact the fitted models are correct.

14.
Spread can be measured, with some advantages, by measures based on squared pairwise differences instead of measures based on squared differences from the mean. The measures are equal to multiples of the various versions of the standard deviation. Their advantages are that the measure of spread does not depend on a previously defined measure of location, that the spread of a sample and of a population are both square roots of simple averages and are both intuitively reasonable, and that the formula for the normal density is simplified. Computation is not significantly increased.
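The equivalence claimed above is easy to verify numerically: the average of (x_i − x_j)² over all ordered pairs i ≠ j equals exactly twice the usual sample variance (divisor n − 1), so the pairwise measure is a fixed multiple of the standard deviation without any reference to the mean.

```python
from statistics import variance  # sample variance with divisor n - 1

def mean_squared_pairwise_diff(xs):
    """Average of (x_i - x_j)^2 over all ordered pairs i != j.
    Equals exactly 2 * sample variance (divisor n - 1)."""
    n = len(xs)
    total = sum((a - b) ** 2 for a in xs for b in xs)  # i == j terms are 0
    return total / (n * (n - 1))
```

For example, `mean_squared_pairwise_diff([1, 2, 4, 7])` gives 14.0 while `variance([1, 2, 4, 7])` gives 7.0.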

15.
This article deals with the estimation of a fixed population size through the capture–mark–recapture method, which gives rise to a hypergeometric distribution. A few well-known and popular point estimators are available in the literature, but no good comprehensive comparison of their merits exists. In addition to the available estimators, an empirical Bayes (EB) estimator of the population size is proposed. We compare all the point estimators in terms of relative bias and relative mean squared error. Next, two new interval estimators – (a) an EB highest posterior density interval and (b) a frequentist interval estimator based on a parametric bootstrap method – are proposed. The comparison is then carried out among the two proposed interval estimators and the interval estimators derived from the currently available point estimators in terms of coverage probability and average length (AL). Based on comprehensive numerical results, we rank and recommend the point estimators as well as interval estimators for practical use. Finally, a real-life data set for a green treefrog population is used as a demonstration of all the methods discussed.
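Two of the classical point estimators in this setting (not necessarily those the paper ranks highest) are the Lincoln–Petersen estimator and Chapman's bias-reduced modification. A minimal sketch:

```python
def lincoln_petersen(n1, n2, m):
    """Classical Lincoln-Petersen estimate of population size N:
    n1 marked in the first sample, n2 caught in the second sample,
    m of which were already marked. Requires m > 0."""
    return n1 * n2 / m

def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased modification, defined even when m = 0."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

With n1 = n2 = 100 and m = 20 recaptures, Lincoln–Petersen gives 500 while Chapman gives a slightly smaller value, reflecting its bias reduction for small recapture counts.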

16.
This paper introduces two estimators, a boundary-corrected minimum variance kernel estimator based on a uniform kernel and a discrete frequency polygon estimator, for the cell probabilities of ordinal contingency tables. Simulation results show that the minimum variance boundary kernel estimator has a smaller average sum of squared errors than existing boundary kernel estimators. The discrete frequency polygon estimator is simple and easy to interpret, and it is competitive with the minimum variance boundary kernel estimator. It is proved that both estimators have an optimal rate of convergence in terms of mean sum of squared errors. The estimators are also defined for high-dimensional tables.

17.
The lasso procedure is an estimator-shrinkage and variable selection method. This paper shows that there always exists an interval of tuning parameter values such that the corresponding mean squared prediction error for the lasso estimator is smaller than for the ordinary least squares estimator. For an estimator satisfying some condition, such as unbiasedness, the paper defines a corresponding generalized lasso estimator whose mean squared prediction error is shown to be smaller than that of the original estimator for tuning parameter values in some interval. This implies that unbiased estimators are not admissible. Simulation results for five models support the theoretical results.
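The shrinkage mechanism behind these results is easiest to see in the orthonormal-design case, where each lasso coefficient is simply the soft-thresholded OLS coefficient. This sketch shows that mechanism only; it is not the paper's generalized lasso construction.

```python
def soft_threshold(beta_ols, lam):
    """Lasso coefficient in the orthonormal-design case: shrink the OLS
    coefficient toward zero by lam, setting it to zero on [-lam, lam]."""
    if beta_ols > lam:
        return beta_ols - lam
    if beta_ols < -lam:
        return beta_ols + lam
    return 0.0
```

Small tuning values barely perturb large coefficients while zeroing out small ones, which is why a suitable interval of tuning parameters can trade a little bias for a larger reduction in variance, beating the unbiased OLS estimator in prediction error.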

18.
Standard methods of estimation for autoregressive models are known to be biased in finite samples, which has implications for estimation, hypothesis testing, confidence interval construction and forecasting. Three methods of bias reduction are considered here: first-order bias correction (FOBC), where the total bias is approximated by the O(1/T) bias; bootstrapping; and recursive mean adjustment (RMA). In addition, we show how first-order bias correction is related to linear bias correction. The practically important case where the AR model includes an unknown linear trend is considered in detail. The fidelity of nominal to actual coverage of confidence intervals is also assessed. A simulation study covers the AR(1) model and a number of extensions based on the empirical AR(p) models fitted by Nelson & Plosser (1982). Overall, which method dominates depends on the criterion adopted: bootstrapping tends to be best at reducing bias, recursive mean adjustment is best at reducing mean squared error, whilst FOBC does particularly well in maintaining the fidelity of confidence intervals.
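For the simplest case, the AR(1) model with an estimated intercept, the O(1/T) bias of the OLS coefficient is approximately −(1 + 3ρ)/T (Kendall's classical approximation), so first-order bias correction adds that quantity back. The sketch below is a hedged illustration of that idea only; the exact correction used in the paper depends on the deterministic terms (e.g. the linear trend case) and is not reproduced here.

```python
def ar1_ols(xs):
    """OLS estimate of rho in x_t = c + rho * x_{t-1} + e_t."""
    x0, x1 = xs[:-1], xs[1:]
    m0 = sum(x0) / len(x0)
    m1 = sum(x1) / len(x1)
    num = sum((a - m0) * (b - m1) for a, b in zip(x0, x1))
    den = sum((a - m0) ** 2 for a in x0)
    return num / den

def fobc(rho_hat, T):
    """First-order bias correction with Kendall's O(1/T) bias -(1+3*rho)/T:
    add the (negative of the) approximate bias back to the OLS estimate."""
    return rho_hat + (1 + 3 * rho_hat) / T
```

Since the approximate bias is negative for ρ > −1/3, the corrected estimate is larger than the raw OLS estimate in the usual positively autocorrelated case.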

19.
This paper is concerned with a Huntsberger-type weighted shrinkage estimator of a parameter when a target value of the parameter is available. Expressions for the bias and the mean squared error of the estimator are derived. Some results concerning the bias, the existence of a uniformly minimum mean squared error estimator, etc., are proved. For certain choices of the weight function, numerical results are presented for the pretest-type weighted shrinkage estimator of the mean of normal as well as exponential distributions.
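The bias and MSE expressions mentioned above have a simple closed form when the weight is a fixed constant w (the paper's weight functions are data-dependent; this fixed-w version is an illustrative simplification). For the estimator (1 − w)·x̄ + w·μ0, the variance is (1 − w)²σ²/n and the squared bias is w²(μ − μ0)².

```python
def shrinkage_mse(w, mu, mu0, sigma2_over_n):
    """Exact MSE of the fixed-weight shrinkage estimator
    (1 - w) * xbar + w * mu0: variance plus squared bias."""
    return (1 - w) ** 2 * sigma2_over_n + w ** 2 * (mu - mu0) ** 2
```

When the true mean equals the target (μ = μ0), any w in (0, 1] beats the sample mean, whose MSE is σ²/n; as μ drifts away from μ0 the squared-bias term eventually dominates, which is exactly the trade-off the shrinkage literature studies.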

20.
Covariate adjustment for the estimation of treatment effects in randomized controlled trials (RCTs) is a simple approach with a long history, so its pros and cons have been well investigated and published in the literature. It is worthwhile to revisit this topic since there has recently been significant investigation and development regarding model assumptions and robustness to model mis-specification, in particular for the Neyman–Rubin model and the average treatment effect estimand. This paper discusses key results of this investigation and development and their practical implications for pharmaceutical statistics. Accordingly, we recommend that appropriate covariate adjustment be more widely used in RCTs for both hypothesis testing and estimation.
