Similar articles (20 results)
1.
Experimenters are often confronted with the problem that errors in setting factor levels cannot be measured. In the robust design scenario, the goal is to determine the design that minimizes the variability transmitted to the response by errors in the variables. The prediction variance performance of response surface designs with such errors is investigated using design efficiency and the maximum and minimum scaled prediction variance. The evaluation and comparison of response surface designs with and without errors in variables are developed for second-order designs on spherical regions. Prediction variance and design efficiency results are provided, along with recommendations for their use.
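As a rough illustration of the scaled-prediction-variance criterion mentioned above (not the paper's own computation, and ignoring errors in the factor levels), the sketch below evaluates SPV(x) = N f(x)'(X'X)⁻¹ f(x) for a textbook two-factor central composite design; the design, sample size, and evaluation radius are all illustrative.

```python
import numpy as np

def quad_model(x1, x2):
    # Second-order model terms: 1, x1, x2, x1^2, x2^2, x1*x2
    return np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# Central composite design in two factors (alpha = sqrt(2) makes it rotatable)
a = np.sqrt(2)
design = [(-1, -1), (1, -1), (-1, 1), (1, 1),
          (-a, 0), (a, 0), (0, -a), (0, a),
          (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
X = np.array([quad_model(x1, x2) for x1, x2 in design])
N = X.shape[0]
XtX_inv = np.linalg.inv(X.T @ X)

def spv(x1, x2):
    # Scaled prediction variance N * f(x)' (X'X)^{-1} f(x)
    f = quad_model(x1, x2)
    return N * f @ XtX_inv @ f

# Max and min SPV over points on the sphere of radius sqrt(2);
# for a rotatable design they coincide on any sphere
thetas = np.linspace(0, 2 * np.pi, 361)
vals = [spv(a * np.cos(t), a * np.sin(t)) for t in thetas]
print(round(min(vals), 3), round(max(vals), 3))
```

Because this design is rotatable, the SPV is constant on spheres; for non-rotatable designs the max/min spread on a sphere is exactly the kind of summary the abstract describes.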

2.
Summary. Longitudinal population-based surveys are widely used in the health sciences to study patterns of change over time. In many of these data sets unique patient identifiers are not publicly available, making it impossible to link the repeated measures from the same individual directly. This poses a statistical challenge for making inferences about time trends: although the time trend estimated under the naïve assumption of independence is unbiased, an unbiased estimate of its variance cannot be obtained without the subject identifiers linking repeated measures over time, since repeated measures from the same individual are likely to be positively correlated. We propose a simple method for obtaining a conservative estimate of variability for making inferences about trends in proportions over time, ensuring that the type I error is no greater than the specified level. The method is illustrated using longitudinal data on diabetes hospitalization proportions in South Carolina.

3.
This paper is concerned with the problem of simultaneously monitoring the process mean and process variability of continuous production processes using combined Shewhart-cumulative score (cuscore) quality control procedures developed by Ncube and Woodall (1984). Two methods of approach are developed and their properties are investigated. One method uses two separate Shewhart-cuscore control charts, one for determining shifts in the process mean and the other for detecting shifts in process variability. The other method uses a single combined statistic which is sensitive to shifts in both the mean and the variance. Each procedure is compared to the corresponding Shewhart schemes. Average run length calculations show that the proposed Shewhart-cuscore schemes are considerably more efficient than the comparative Shewhart procedures for certain shifts in the process mean and process variability when the underlying process control variable is assumed to be normally distributed.

4.
New techniques for the analysis of stochastic volatility models in which the logarithm of conditional variance follows an autoregressive model are developed. A cyclic Metropolis algorithm is used to construct a Markov-chain simulation tool. Simulations from this Markov chain converge in distribution to draws from the posterior distribution enabling exact finite-sample inference. The exact solution to the filtering/smoothing problem of inferring about the unobserved variance states is a by-product of our Markov-chain method. In addition, multistep-ahead predictive densities can be constructed that reflect both inherent model variability and parameter uncertainty. We illustrate our method by analyzing both daily and weekly data on stock returns and exchange rates. Sampling experiments are conducted to compare the performance of Bayes estimators to method of moments and quasi-maximum likelihood estimators proposed in the literature. In both parameter estimation and filtering, the Bayes estimators outperform these other approaches.

5.
In scenarios where the variance of a response variable can be attributed to two sources of variation, a confidence interval for a ratio of variance components gives information about the relative importance of the two sources. For example, if measurements taken from different laboratories are nine times more variable than the measurements taken from within the laboratories, then 90% of the variance in the responses is due to the variability amongst the laboratories and 10% of the variance in the responses is due to the variability within the laboratories. Assuming normally distributed sources of variation, confidence intervals for variance components are readily available. In this paper, however, simulation studies are conducted to evaluate the performance of confidence intervals under non-normal distribution assumptions. Confidence intervals based on the pivotal quantity method, fiducial inference, and the large-sample properties of the restricted maximum likelihood (REML) estimator are considered. Simulation results and an empirical example suggest that the REML-based confidence interval is favored over the other two procedures in the unbalanced one-way random effects model.
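The 90%/10% split quoted above follows directly from the ratio: if the between-source variance is nine times the within-source variance, the between share is 9/(9+1). A minimal sketch (the helper name is made up for illustration):

```python
def variance_shares(ratio):
    """Proportions of total response variance attributable to the between
    and within sources, given ratio = sigma_between^2 / sigma_within^2."""
    between = ratio / (ratio + 1.0)
    return between, 1.0 - between

# Between-lab variance nine times the within-lab variance:
b, w = variance_shares(9.0)
print(round(b, 3), round(w, 3))  # 0.9 0.1
```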

6.
The balanced half-sample technique has been used for estimating variances in large-scale sample surveys. This paper considers the bias and variability of two balanced half-sample variance estimators when unique statistical weights are assigned to the sample individuals. Two weighting schemes are considered. In the first, the statistical weights based on the entire sample are used for each of the individual half-samples, while in the second, the weights are adjusted for each individual half-sample. Sampling experiments using computer-generated data from populations with specified values for the strata parameters were performed. Their results indicate that the variance estimators based on the second method are subject to much less bias and variability than those based on the first.
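A hedged sketch of the second weighting scheme described above (weights readjusted within each half-sample), for a toy design with four strata of two primary units each; the data, base weights, and 4×4 Hadamard matrix are illustrative, not from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four strata, two primary units per stratum; y-values and base weights
y = rng.normal(10.0, 2.0, size=(4, 2))
w = np.full((4, 2), 5.0)

# 4x4 Hadamard matrix: row h picks unit 0 (+1) or unit 1 (-1) in each stratum,
# so the half-samples are balanced across replicates
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

full_mean = np.sum(w * y) / np.sum(w)

reps = []
for row in H:
    pick = (row < 0).astype(int)       # which unit each stratum contributes
    yh = y[np.arange(4), pick]
    wh = 2.0 * w[np.arange(4), pick]   # weights doubled within the half-sample
    reps.append(np.sum(wh * yh) / np.sum(wh))

# Balanced half-sample variance estimate for the weighted mean
var_brr = np.mean((np.array(reps) - full_mean) ** 2)
print(round(var_brr, 4))
```

Under the first scheme the line computing `wh` would instead reuse the full-sample weights unchanged; the paper's finding is that the adjusted-weight version above is the less biased and less variable of the two.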

7.
The inverse of the Fisher information matrix is commonly used as an approximation for the covariance matrix of maximum-likelihood estimators. We show via three examples that for the covariance parameters of Gaussian stochastic processes under infill asymptotics, the covariance matrix of the limiting distribution of their maximum-likelihood estimators equals the limit of the inverse information matrix. This is either proven analytically or justified by simulation. Furthermore, the limiting behaviour of the trace of the inverse information matrix indicates equivalence or orthogonality of the underlying Gaussian measures. Even in the case of singularity, the estimator of the process variance is seen to be unbiased, and also its variability is approximated accurately from the information matrix.

8.
Closed-form confidence intervals on linear combinations of variance components have been developed generically for balanced data and studied mainly for one-way and two-way random effects analysis of variance models. The Satterthwaite approach is easily generalized to unbalanced data and modified to increase its coverage probability. These intervals are applied to measures of assay precision in combination with (restricted) maximum likelihood and Henderson III Type 1 and 3 estimation. Simulations of interlaboratory studies with unbalanced data and small sample sizes do not show superiority of any of the possible combinations of estimation methods and Satterthwaite approaches on three measures of assay precision. However, the modified Satterthwaite approach with Henderson III Type 3 estimation is often preferred over the other combinations.
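For context, the classical Satterthwaite approximation underlying the approach above matches the degrees of freedom of a linear combination of mean squares; a generic sketch follows (the paper's modified, coverage-improving version is not reproduced here, and the example numbers are made up).

```python
def satterthwaite_df(coefs, mean_squares, dfs):
    """Approximate degrees of freedom for L = sum(c_i * MS_i) via
    Satterthwaite: df = L^2 / sum((c_i * MS_i)^2 / df_i)."""
    L = sum(c * ms for c, ms in zip(coefs, mean_squares))
    denom = sum((c * ms) ** 2 / d for c, ms, d in zip(coefs, mean_squares, dfs))
    return L ** 2 / denom

# Between-group variance component in a balanced one-way ANOVA with n = 5
# per group: sigma_b^2_hat = (MS_between - MS_within) / n,
# so the coefficients are (1/5, -1/5)
print(round(satterthwaite_df([0.2, -0.2], [12.0, 3.0], [4, 20]), 3))  # 2.222
```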

9.
The cost of certain types of warranties is closely related to functions that arise in renewal theory. The problem of estimating the warranty cost for a random sample of size n can be reduced to estimating these functions. In an earlier paper, I gave several methods of estimating the expected number of renewals, called the renewal function. This answered an important accounting question of how to arrive at a good approximation of the expected warranty cost. In this article, estimation of the renewal function is reviewed and several extensions are given. In particular, a resampling estimator of the renewal function is introduced. Further, I argue that managers may wish to examine other summary measures of the warranty cost, in particular the variability. To estimate this variability, I introduce estimators, both parametric and nonparametric, of the variance associated with the number of renewals. Several numerical examples are provided.
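The renewal function M(t) = E[N(t)] and the variance of N(t) discussed above can both be approximated by simulation when the inter-renewal distribution is known; this sketch (not one of the paper's estimators) checks the Monte Carlo estimates against the Poisson-process case, where M(t) = λt and Var N(t) = λt.

```python
import random

def renewal_count(t, draw):
    """Number of renewals in (0, t] with i.i.d. inter-event times from draw()."""
    s, n = 0.0, 0
    while True:
        s += draw()
        if s > t:
            return n
        n += 1

random.seed(1)
lam, t = 2.0, 5.0
counts = [renewal_count(t, lambda: random.expovariate(lam)) for _ in range(20000)]

m_hat = sum(counts) / len(counts)                                   # M(t) estimate
v_hat = sum((c - m_hat) ** 2 for c in counts) / (len(counts) - 1)   # Var N(t)
print(round(m_hat, 2), round(v_hat, 2))  # both near lam * t = 10
```

With a warranty interpretation, M(t) times the per-renewal cost approximates the expected warranty cost over (0, t], and v_hat quantifies its variability.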

10.
The internal pilot study design allows the sample size to be modified during an ongoing study based on a blinded estimate of the variance, thus maintaining trial integrity. Various blinded sample size re‐estimation procedures have been proposed in the literature. We compare blinded sample size re‐estimation procedures based on the one‐sample variance of the pooled data with a blinded procedure using the randomization block information, with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re‐estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re‐estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows higher variability than the simple one‐sample estimator, and so, in turn, does the sample size resulting from the related re‐estimation procedure. This higher variability can lead to lower power, as demonstrated in the setting of noninferiority trials. In summary, the one‐sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application. Copyright © 2013 John Wiley & Sons, Ltd.
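A minimal sketch of the recommended one-sample approach in an illustrative two-arm z-test setting (the function name, design parameters, and pilot data are all made up): the variance is computed from the pooled data with group labels ignored, then plugged into the standard sample-size formula.

```python
import math
import random
from statistics import NormalDist, variance

z = NormalDist().inv_cdf

def reestimated_n(pooled, delta, alpha=0.05, power=0.8):
    """Per-group n for a two-sample z-test, using the blinded one-sample
    variance of the pooled data as the variance estimate."""
    s2 = variance(pooled)  # one-sample variance, treatment labels ignored
    return math.ceil(2 * s2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

random.seed(42)
pilot = ([random.gauss(0.0, 1.0) for _ in range(30)]      # control arm
         + [random.gauss(0.5, 1.0) for _ in range(30)])   # treatment arm (labels hidden)
print(reestimated_n(pilot, delta=0.5))
```

When a true treatment difference δ is present, this pooled estimator overstates the within-group variance (by roughly δ²/4 under equal allocation), so the biased one-sample procedure tends to err on the generous side for the re-estimated sample size.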

11.
In model-based estimation of unobserved components, the minimum mean squared error estimator of the noise component is different from white noise. In this article, some of the differences are analyzed. It is seen how the variance of the component is always underestimated, and the smaller the noise variance, the larger the underestimation. Estimators of small-variance noise components will also have large autocorrelations. Finally, in the context of an application, the sample autocorrelation function of the estimated noise is seen to perform well as a diagnostic tool, even when the variance is small and the series is of relatively short length.

12.
The method of target estimation developed by Cabrera and Fernholz [(1999). Target estimation for bias and mean square error reduction. The Annals of Statistics, 27(3), 1080–1104.] to reduce bias and variance is applied to logistic regression models of several parameters. The expectation functions of the maximum likelihood estimators for the coefficients in the logistic regression models of one and two parameters are analyzed and simulations are given to show a reduction in both bias and variability after targeting the maximum likelihood estimators. In addition to bias and variance reduction, it is found that targeting can also correct the skewness of the original statistic. An example based on real data is given to show the advantage of using target estimators for obtaining better confidence intervals of the corresponding parameters. The notion of the target median is also presented with some applications to the logistic models.

13.
This paper discusses the estimation of the variability of the parameters (or functions of parameters) in a recursive system of regression models, and shows that conditioning on the carriers may lead to drastically different conclusions than when the carriers are viewed as stochastic. The relationships among the variables in these models are derived by a sequence of regressions, in which the dependent variable of one equation may reappear as a carrier in a later equation. The model to be fitted need not be identical with the generating equations. In these recursive systems of equations, when the models are misspecified, or when functions of parameters from different equations are to be estimated, the variability of the estimators is shown to depend critically on the level of conditioning assumed. Various jackknife and bootstrap methods of estimating the variability of the estimators are suggested. In particular, the bootstrap estimators of variability can be adapted to capture the correct level of conditioning by mimicking the conditioning in their design. Two problems in which the level of conditioning matters are described and analysed under the general chained regression models: a real-data problem, and the omission of variables, which is sometimes advocated for reducing the variance of the remaining estimators. In both cases the effectiveness of the nonparametric variance estimators is demonstrated using simulation studies.

14.
In some clinical trials and epidemiologic studies, investigators are interested in knowing whether the variability of a biomarker is independently predictive of clinical outcomes. This question is often addressed via a naïve approach where a sample-based estimate (e.g., standard deviation) is calculated as a surrogate for the “true” variability and then used in regression models as a covariate assumed to be free of measurement error. However, it is well known that measurement error in covariates causes underestimation of the true association. The underestimation can be substantial when the precision is low because of a limited number of measures per subject. The joint analysis of survival data and longitudinal data enables one to account for the measurement error in longitudinal data and has received substantial attention in recent years. In this paper we propose a joint model to assess the predictive effect of biomarker variability. The joint model consists of two linked sub-models, a linear mixed model with patient-specific variance for the longitudinal data and a fully parametric Weibull distribution for the survival data, and the association between the two sub-models is induced by a latent Gaussian process. Parameters in the joint model are estimated in a Bayesian framework and implemented using Markov chain Monte Carlo (MCMC) methods with the WinBUGS software. The method is illustrated in the Ocular Hypertension Treatment Study to assess whether the variability of intraocular pressure is an independent risk factor for primary open-angle glaucoma. The performance of the method is also assessed by simulation studies.

15.
The nonparametric two-sample bootstrap is employed to compute uncertainties of measures in receiver operating characteristic (ROC) analysis on large datasets in areas such as biometrics. In this framework, the bootstrap variability was studied empirically, without a normality assumption, exhaustively in five scenarios involving both high- and low-accuracy matching algorithms. With a tolerance of 0.02 on the coefficient of variation, it was found that 2000 bootstrap replications were appropriate for ROC analysis on large datasets in order to reduce the bootstrap variance and ensure the accuracy of the computation.
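A sketch of the two-sample bootstrap in this setting (synthetic scores; the scenario parameters are not those of the study): the genuine and impostor score sets are resampled independently, and the standard error of the empirical AUC is read off the replicate distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def auc(genuine, impostor):
    # Empirical AUC: P(genuine score > impostor score), ties counted 1/2
    g, i = np.asarray(genuine), np.asarray(impostor)
    return (g[:, None] > i[None, :]).mean() + 0.5 * (g[:, None] == i[None, :]).mean()

genuine = rng.normal(2.0, 1.0, 300)    # matching scores, genuine pairs
impostor = rng.normal(0.0, 1.0, 300)   # matching scores, impostor pairs

B = 2000  # the replication count found adequate in the study
boot = np.empty(B)
for b in range(B):
    g = rng.choice(genuine, size=genuine.size, replace=True)    # resample the two
    i = rng.choice(impostor, size=impostor.size, replace=True)  # samples independently
    boot[b] = auc(g, i)

print(round(auc(genuine, impostor), 3), round(boot.std(ddof=1), 4))
```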

16.
Control charts are effective tools for signal detection in both manufacturing and service processes. Much service data comes from processes with variables having non-normal or unknown distributions. The commonly used Shewhart variable control charts, which depend heavily on the normality assumption, cannot be properly used in such circumstances. In this paper, we propose a new variance chart based on a simple statistic to monitor process variance shifts. We explore the sampling properties of the new monitoring statistic and calculate the average run lengths (ARLs) of the proposed variance chart. Furthermore, an arcsine-transformed exponentially weighted moving average (EWMA) chart is proposed because the ARLs of this modified chart are more intuitive and reasonable than those of the variance chart. We compare the out-of-control variance detection performance of the proposed variance chart with that of the nonparametric Mood variance (NP-M) chart with runs rules, developed by Zombade and Ghute [Nonparametric control chart for variability using runs rules. Experiment. 2014;24(4):1683–1691], and with the nonparametric likelihood ratio-based distribution-free EWMA (NLE) chart and the combined traditional EWMA mean and EWMA variance (CEW) control chart proposed by Zou and Tsung [Likelihood ratio-based distribution-free EWMA control charts. J Qual Technol. 2010;42(2):174–196], considering cases in which the critical quality characteristic has a normal, a double exponential, or a uniform distribution. Comparison results show that the proposed chart performs better than the NP-M chart with runs rules and the NLE and CEW control charts.
A numerical example of service times with a right-skewed distribution from a service system of a bank branch in Taiwan is used to illustrate the application of the proposed variance chart and of the arcsine-transformed EWMA chart, and to compare them with three existing variance (or standard deviation) charts. The proposed charts show better detection performance than the three existing charts in monitoring and detecting shifts in the process variance.
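For reference, the EWMA recursion with its time-varying control limits can be sketched generically as below (a textbook scheme, not the arcsine-transformed chart proposed in the paper; λ and L are illustrative choices):

```python
import math

def ewma_chart(obs, mu0, sigma0, lam=0.2, L=3.0):
    """Generic EWMA chart: z_t = lam*y_t + (1-lam)*z_{t-1} with limits
    mu0 +/- L*sigma0*sqrt(lam/(2-lam)*(1-(1-lam)^(2t))).
    Returns the (z, lcl, ucl) sequence and the first out-of-control index."""
    z, signal, out = mu0, None, []
    for t, y in enumerate(obs, start=1):
        z = lam * y + (1 - lam) * z
        half = L * sigma0 * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, mu0 - half, mu0 + half))
        if signal is None and not (mu0 - half <= z <= mu0 + half):
            signal = t
    return out, signal

# An in-control stretch followed by a sustained upward shift
_, when = ewma_chart([0.0] * 5 + [3.0] * 10, mu0=0.0, sigma0=1.0)
print(when)
```

To monitor variance rather than the mean, the same recursion is applied to a dispersion statistic of each subgroup instead of the raw observations.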

17.
A robust algorithm for utility-based shortfall risk (UBSR) measures is developed by combining kernel density estimation with importance sampling (IS) using exponential twisting techniques. The optimal bandwidth of the kernel density is obtained by minimizing the mean square error of the estimators. Variance is reduced by IS, where exponential twisting is applied to determine the optimal IS distribution. Conditions for the best distribution parameters are derived based on the piecewise polynomial loss function and the exponential loss function. The proposed method not only solves the problem of sampling from the kernel density but also reduces the variance of the UBSR estimator.

18.
This paper proposes two new variability measures for categorical data. The first variability measure is obtained as one minus the square root of the sum of the squares of the relative frequencies of the different categories. The second measure is obtained by standardizing the first measure. The measures proposed are functions of the variability measure proposed by Gini [Variabilità e Mutabilità: Contributo allo Studio delle Distribuzioni e delle Relazioni Statistiche, C. Cuppini, Bologna, 1912] and approximate the coefficient of nominal variation introduced by Kvålseth [Coefficients of variation for nominal and ordinal categorical data, Percept. Motor Skills 80 (1995), pp. 843–847] as the number of categories increases. Different mathematical properties of the proposed variability measures are studied and analyzed. Several examples illustrate how the variability measures can be interpreted and used in practice.
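The first measure above is fully specified and easy to compute; the abstract does not spell out how the second is standardized, so the sketch below assumes division by the maximum value, which is attained at the uniform distribution over k categories.

```python
from math import sqrt
from collections import Counter

def root_gini_measure(labels):
    """First measure: 1 - sqrt(sum of squared relative frequencies)."""
    n = len(labels)
    return 1.0 - sqrt(sum((c / n) ** 2 for c in Counter(labels).values()))

def standardized_measure(labels, k):
    """Second measure (assumed standardization): divide by the maximum
    1 - 1/sqrt(k), attained at the uniform distribution over k categories."""
    return root_gini_measure(labels) / (1.0 - 1.0 / sqrt(k))

# Uniform over 4 categories: sum p^2 = 0.25, so the measure is 1 - 0.5 = 0.5
data = ["a"] * 25 + ["b"] * 25 + ["c"] * 25 + ["d"] * 25
print(root_gini_measure(data), standardized_measure(data, 4))  # 0.5 1.0
```

A single-category sample gives 0 under both versions, so the measures range from 0 (no variability) to their uniform-distribution maximum.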

19.
Understanding multivariate variability is a difficult task because there is no single measure that can be properly used. This article presents a new measure that features good properties. If this measure is simultaneously used with generalized variance, it will give a better understanding of multivariate variability. It can also efficiently be used for large data sets with high dimensions. Furthermore, when it is used for constructing a Shewhart-type chart to monitor multivariate variability, the resulting chart has a much better out-of-control ARL than the generalized variance chart. An example illustrates its advantage.

20.
Bayesian hierarchical models typically involve specifying prior distributions for one or more variance components. This is rather removed from the observed data, so specification based on expert knowledge can be difficult. While there are suggestions for “default” priors in the literature, often a conditionally conjugate inverse‐gamma specification is used, despite documented drawbacks of this choice. The authors suggest “conservative” prior distributions for variance components, which deliberately give more weight to smaller values. These are appropriate for investigators who are skeptical about the presence of variability in the second‐stage parameters (random effects) and want to particularly guard against inferring more structure than is really present. The suggested priors readily adapt to various hierarchical modelling settings, such as fitting smooth curves, modelling spatial variation and combining data from multiple sites.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号