Similar Documents
1.
ABSTRACT

A vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But for many processes, the quality variable is nonnormal, and either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for the design of plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG distribution based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three different likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) have been considered in the modeling process. For all of these likely scenarios, mathematical models giving the cost of using a variable acceptance sampling plan are developed. The cost models are optimized to select the optimal sampling plan parameters, such as the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used to depict the consequences of: 1) using the assumption that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique.
The information obtained on the parameter sensitivities can help model users allocate resources prudently when estimating the input parameters.
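The abstract above rests on simulating sample means of Inverse Gaussian measurements against acceptance limits. As a minimal sketch of that idea (the parameters μ, λ, the limits, and the function names below are illustrative assumptions, not values from the paper), one can draw IG variates with the classical Michael–Schucany–Haas transformation and estimate the lot-acceptance probability by Monte Carlo:

```python
import math
import random

def rinvgauss(mu, lam, rng):
    """Draw one inverse Gaussian IG(mu, lam) variate
    (Michael-Schucany-Haas transformation method)."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * y) / (2.0 * lam) \
        - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + (mu * y) ** 2)
    return x if rng.random() <= mu / (mu + x) else mu * mu / x

def acceptance_probability(mu, lam, n, lower, upper, reps, rng):
    """Monte Carlo estimate of the probability that the sample mean of
    n IG(mu, lam) measurements falls inside the acceptance limits."""
    accepted = 0
    for _ in range(reps):
        xbar = sum(rinvgauss(mu, lam, rng) for _ in range(n)) / n
        if lower <= xbar <= upper:
            accepted += 1
    return accepted / reps

rng = random.Random(42)
pa = acceptance_probability(mu=1.0, lam=2.0, n=10, lower=0.7, upper=1.3,
                            reps=2000, rng=rng)
# sanity check: the empirical mean of IG(mu, lam) draws should be close to mu
mean_hat = sum(rinvgauss(1.0, 2.0, rng) for _ in range(20000)) / 20000
```

Optimizing the economic cost models of the paper would then search over n and the limits; that search is not shown here.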

2.
A new, fully data-driven bandwidth selector with a double smoothing (DS) bias term and a data-driven variance estimator is developed following the bootstrap idea. The data-driven variance estimation does not involve any additional bandwidth selection. The proposed bandwidth selector converges faster than a plug-in one thanks to the DS bias estimate, while the data-driven variance clearly improves its finite-sample performance and makes it stable. Asymptotic results for the proposals are obtained. A comparative simulation study was done to show the overall gains and the gains obtained by improving either the bias term or the variance estimate, respectively. It is shown that the use of a good variance estimator is more important when the sample size is relatively small.

4.
We give a formal definition of a representative sample, but roughly speaking, it is a scaled‐down version of the population, capturing its characteristics. New methods for selecting representative probability samples in the presence of auxiliary variables are introduced. Representative samples are needed for multipurpose surveys, when several target variables are of interest. Such samples also enable estimation of parameters in subspaces and improved estimation of target variable distributions. We describe how two recently proposed sampling designs can be used to produce representative samples. Both designs use distance between population units when producing a sample. We propose a distance function that can calculate distances between units in general auxiliary spaces. We also propose a variance estimator for the commonly used Horvitz–Thompson estimator. Real data as well as illustrative examples show that representative samples are obtained and that the variance of the Horvitz–Thompson estimator is reduced compared with simple random sampling.
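The Horvitz–Thompson estimator mentioned in the abstract above weights each sampled value by the inverse of its inclusion probability. A minimal sketch (the toy population, probabilities, and Poisson-sampling check below are illustrative, not the paper's designs):

```python
import random

def horvitz_thompson_total(y_values, incl_probs):
    """Horvitz-Thompson estimator of a population total: each sampled
    value is weighted by 1 / (its inclusion probability)."""
    return sum(y / p for y, p in zip(y_values, incl_probs))

# Check approximate design-unbiasedness under Poisson sampling, where
# unit k enters the sample independently with probability pi[k].
population = [3.0, 8.0, 1.0, 6.0, 9.0, 2.0]
pi = [0.3, 0.8, 0.2, 0.6, 0.9, 0.4]
true_total = sum(population)

rng = random.Random(7)
reps = 20000
avg = 0.0
for _ in range(reps):
    sample = [(y, p) for y, p in zip(population, pi) if rng.random() < p]
    avg += horvitz_thompson_total([y for y, _ in sample],
                                  [p for _, p in sample]) / reps
```

The average of the estimator over many replications should approach the true total, whatever the (known, nonzero) inclusion probabilities.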

5.
A balanced sampling design has the interesting property that the Horvitz–Thompson estimators of the totals of a set of balancing variables are equal to the totals we want to estimate; the variance of the Horvitz–Thompson estimators of the variables of interest is therefore reduced as a function of their correlations with the balancing variables. Since it is hard to derive an analytic expression for the joint inclusion probabilities, we derive a general approximation of the variance based on a residual technique. This approximation is useful even in the particular case of unequal probability sampling with fixed sample size. Finally, a set of numerical studies with an original methodology allows us to validate this approximation.

6.
The importance of individual inputs of a computer model is sometimes assessed using indices that reflect the amount of output variation that can be attributed to random variation in each input. We review two such indices, and consider input sampling plans that support estimation of one of them, the variance of conditional expectation or VCE (McKay, 1995. Los Alamos National Laboratory Report NUREG/CR-6311, LA-12915-MS). Sampling plans suggested by Sobol’, Saltelli, and McKay are examined and compared to a new sampling plan based on balanced incomplete block designs. The new design offers better sampling efficiency for the VCE than those of Sobol’ and Saltelli, and supports unbiased estimation of the index associated with each input.
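The VCE index discussed above, Var_x E[Y | x], can be estimated with a simple replicated-sampling scheme: fix the input of interest at several values, average the model output over random draws of the remaining inputs, and take the variance of those conditional means. The toy model, sample sizes, and function name below are illustrative assumptions, not McKay's actual design:

```python
import random

def vce_replicated(model, which, n_inputs, m, r, rng):
    """Estimate the variance of conditional expectation (VCE) for input
    `which` of `model`: fix that input at m random values, average the
    output over r random draws of the remaining inputs, and take the
    variance of the m conditional means."""
    means = []
    for _ in range(m):
        x_fixed = rng.random()
        total = 0.0
        for _ in range(r):
            x = [rng.random() for _ in range(n_inputs)]
            x[which] = x_fixed
            total += model(x)
        means.append(total / r)
    mbar = sum(means) / m
    return sum((v - mbar) ** 2 for v in means) / (m - 1)

# Toy model y = x0 + 2*x1 with independent U(0,1) inputs:
# the VCE for x0 is Var(x0) = 1/12 ~= 0.0833.
rng = random.Random(1)
vce = vce_replicated(lambda x: x[0] + 2.0 * x[1], which=0, n_inputs=2,
                     m=500, r=200, rng=rng)
```

Note this naive between-group variance is biased upward by (within-group variance)/r; the designs compared in the paper aim at more efficient, unbiased estimation.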

7.
Unequal probability sampling is commonly used for sample selection. In the context of spatial sampling, the variables of interest often present a positive spatial correlation, so it is intuitively relevant to select spatially balanced samples. In this article, we study the properties of pivotal sampling and propose an application to tessellation for spatial sampling. We also propose a simple conservative variance estimator. We show that the proposed sampling design is spatially well balanced, has good statistical properties, and is computationally very efficient.
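The pivotal method studied above repeatedly confronts two units with fractional inclusion probabilities, pushing one of them to 0 or 1 while preserving every marginal. A minimal sequential sketch (the update rules follow the classical pivotal method of Deville and Tillé; the unit ordering and simulation parameters are illustrative):

```python
import random

def pivotal_sample(probs, rng):
    """Sequential pivotal method: repeatedly confront the first two units
    whose inclusion probabilities are still fractional, pushing one of
    them to 0 or 1 while preserving each unit's marginal probability."""
    p = list(probs)
    frac = [k for k in range(len(p)) if 0.0 < p[k] < 1.0]
    while len(frac) >= 2:
        i, j = frac[0], frac[1]
        s = p[i] + p[j]
        if s < 1.0:                      # one of the two units is rejected
            if rng.random() < p[j] / s:
                p[i], p[j] = 0.0, s
            else:
                p[i], p[j] = s, 0.0
        else:                            # one of the two units is selected
            if rng.random() < (1.0 - p[j]) / (2.0 - s):
                p[i], p[j] = 1.0, s - 1.0
            else:
                p[i], p[j] = s - 1.0, 1.0
        frac = [k for k in frac if 0.0 < p[k] < 1.0]
    for k in frac:   # leftover fractional mass (non-integer total) -> coin flip
        p[k] = 1.0 if rng.random() < p[k] else 0.0
    return [k for k in range(len(p)) if p[k] == 1.0]

# With probabilities summing to 2, the sample size is always exactly 2,
# and each unit should be included about half the time.
rng = random.Random(9)
reps = 4000
sizes = set()
hits = 0
for _ in range(reps):
    s = pivotal_sample([0.5, 0.5, 0.5, 0.5], rng)
    sizes.add(len(s))
    hits += 0 in s
incl0 = hits / reps
```

Spatially balanced variants (such as the local pivotal method) choose which two units to confront by geographic proximity; that refinement is not shown here.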

8.
The cube method proposed by Deville and Tillé (2004) enables the selection of balanced samples: that is, samples such that the Horvitz-Thompson estimators of auxiliary variables match the known totals of those variables. As an exact balanced sampling design often does not exist, the cube method generally proceeds in two steps: a “flight phase” in which exact balance is maintained, and a “landing phase” in which the final sample is selected while respecting the balance conditions as closely as possible. Deville and Tillé (2005) derive a variance approximation for balanced sampling that takes account of the flight phase only, whereas the landing phase can prove to add non-negligible variance. This paper uses a martingale difference representation of the cube method to construct an efficient simulation-based method for calculating approximate second-order inclusion probabilities. The approximation enables nearly unbiased variance estimation, where the bias is primarily due to the limited number of simulations. In a Monte Carlo study, the proposed method has significantly less bias than the standard variance estimator, leading to improved confidence interval coverage.
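The core simulation idea above — estimate second-order inclusion probabilities π_ij by drawing many samples and counting how often each pair appears together — can be sketched for any design. Here it is applied to simple random sampling without replacement, an illustrative stand-in for the cube method chosen because π_ij = n(n−1)/(N(N−1)) is known exactly and lets us check the estimate:

```python
import random
from itertools import combinations

def simulated_joint_inclusion(N, n, reps, rng):
    """Estimate second-order inclusion probabilities pi_ij by drawing
    many samples and counting pairwise co-inclusions."""
    counts = {pair: 0 for pair in combinations(range(N), 2)}
    for _ in range(reps):
        s = set(rng.sample(range(N), n))
        for pair in combinations(sorted(s), 2):
            counts[pair] += 1
    return {pair: c / reps for pair, c in counts.items()}

rng = random.Random(3)
N, n, reps = 10, 4, 20000
pi_ij = simulated_joint_inclusion(N, n, reps, rng)
exact = n * (n - 1) / (N * (N - 1))   # SRSWOR: 4*3/(10*9) = 2/15
```

For the cube method itself one would replace `rng.sample` with repeated runs of the flight and landing phases; the counting step is unchanged.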

9.
Abstract. Systematic sampling is frequently used in surveys, because of its ease of implementation and its design efficiency. An important drawback of systematic sampling, however, is that no direct estimator of the design variance is available. We describe a new estimator of the model‐based expectation of the design variance, under a non‐parametric model for the population. The non‐parametric model is sufficiently flexible that it can be expected to hold at least approximately in many situations with continuous auxiliary variables observed at the population level. We prove the model consistency of the estimator for both the anticipated variance and the design variance under a non‐parametric model with a univariate covariate. The broad applicability of the approach is demonstrated on a dataset from a forestry survey.
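The systematic selection the abstract above starts from is one line of arithmetic: a random start in the first skip interval, then equally spaced picks. A minimal sketch with a possibly fractional interval k = N/n (population size and sample size below are illustrative):

```python
import random

def systematic_sample(N, n, rng):
    """Systematic sample of n units from 0..N-1: random start in the
    first interval, then skips of (possibly fractional) length k = N/n."""
    k = N / n
    start = rng.random() * k
    return [int(start + i * k) for i in range(n)]

rng = random.Random(11)
s = systematic_sample(100, 10, rng)
# empirical first-order inclusion probability of unit 7 (should be n/N = 0.1)
hits = sum(7 in systematic_sample(100, 10, rng) for _ in range(5000)) / 5000
```

Each unit's inclusion probability is n/N, but only n of the N(N−1)/2 joint inclusion probabilities are nonzero, which is why no direct design-variance estimator exists — the problem the paper addresses.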

10.
Variance estimators for probability sample-based predictions of species richness (S) are typically conditional on the sample (expected variance). In practical applications, sample sizes are typically small, and the variance of input parameters to a richness estimator should not be ignored. We propose a modified bootstrap variance estimator that attempts to capture the sampling variance by generating B replications of the richness prediction from stochastically resampled data of species incidence. The variance estimator is demonstrated for the observed richness (SO), five richness estimators, and with simulated cluster sampling (without replacement) in 11 finite populations of forest tree species. A key feature of the bootstrap procedure is a probabilistic augmentation of a species incidence matrix by the number of species expected to be ‘lost’ in a conventional bootstrap resampling scheme. In Monte-Carlo (MC) simulations, the modified bootstrap procedure performed well in terms of tracking the average MC estimates of richness and standard errors. Bootstrap-based estimates of standard errors were as a rule conservative. Extensions to other sampling designs, estimators of species richness and diversity, and estimates of change are possible.

11.
Stratified Case-Cohort Analysis of General Cohort Sampling Designs
Abstract. It is shown that variance estimates for regression coefficients in exposure-stratified case-cohort studies (Borgan et al., Lifetime Data Anal., 6, 2000, 39–58) can easily be obtained from influence terms routinely calculated in standard software for Cox regression. By allowing for post-stratification on outcome, we also place the estimators proposed by Chen (J. R. Statist. Soc. Ser. B, 63, 2001, 791–809) for a general class of cohort sampling designs within Borgan et al.'s framework, facilitating simple variance estimation for these designs. Finally, the Chen approach is extended to accommodate stratified designs with surrogate variables available for all cohort members, such as stratified case-cohort and counter-matching designs.

12.
In this article, the outgoing quality and the total inspection for the chain sampling plan ChSP-4(c1, c2) are introduced as well-defined random variables. The probability distributions of the outgoing quality and total inspection are stated under total rectification of nonconforming units. The variances of these random variables are studied. The aim of this article is to develop procedures for determining minimum variance ChSP-4(c1, c2) sampling plans. In addition to minimum variance sampling plans, a procedure is developed for designing plans with a designated maximum variance: a VOQL (Variance of Outgoing Quality Limit) plan. The VOQL concept is analogous to the AOQL (Average Outgoing Quality Limit), except that in the VOQL plan it is the maximum variance which is established instead of the usual maximum AOQ.
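The outgoing-quality machinery above builds on the standard rectifying-inspection relations. As a hedged sketch, here are those relations for a plain single sampling plan (n, c) rather than ChSP-4 — the chain plan's acceptance probability has a different, state-dependent form not reproduced here, and the plan parameters below are illustrative:

```python
from math import comb

def accept_prob(n, c, p):
    """Probability of lot acceptance for a single sampling plan (n, c):
    accept when the number of nonconforming items in the sample is <= c."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def aoq(N, n, c, p):
    """Average outgoing quality under total rectification: rejected lots
    are 100%-inspected, so they leave with no nonconforming units."""
    return p * accept_prob(n, c, p) * (N - n) / N

# AOQL: the maximum of the AOQ curve over incoming quality p
N, n, c = 1000, 50, 2
aoql = max(aoq(N, n, c, i / 1000.0) for i in range(1, 301))
```

The VOQL plan of the paper plays the same game with the *variance* of the outgoing quality instead of its mean.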

13.
When auxiliary information is available at the design stage, samples may be selected by means of balanced sampling. The variance of the Horvitz-Thompson estimator is then reduced, since it is approximately given by that of the residuals of the variable of interest on the balancing variables. In this paper, a method for computing optimal inclusion probabilities for balanced sampling on given auxiliary variables is studied. We show that the method originally suggested by Tillé and Favre (2005) enables the computation of inclusion probabilities that lead to a decrease in variance under some conditions on the set of balancing variables. A disadvantage is that the target optimal inclusion probabilities depend on the variable of interest. If the needed quantities are unknown at the design stage, we propose to use estimates instead (e.g., arising from a previous wave of the survey). A limited simulation study suggests that, under some conditions, our method performs better than the method of Tillé and Favre (2005).

14.
The population growth rate of the European dipper has been shown to decrease with winter temperature and population size. We examine here the demographic mechanism for this effect by analysing how these factors affect the survival rate. Using more than 20 years of capture-mark-recapture data (1974-1997) based on more than 4000 marked individuals, we perform analyses using open capture-mark-recapture models. This allowed us to estimate the annual apparent survival rates (probability of surviving and staying on the study site from one year to the next one) and the recapture probabilities. We partitioned the variance of the apparent survival rates into sampling variance and process variance using random effects models, and investigated which variables best accounted for temporal process variation. Adult males and females had similar apparent survival rates, with an average of 0.52 and a coefficient of variation of 40%. Chick apparent survival was lower, averaging 0.06 with a coefficient of variation of 42%. Eighty percent of the variance in apparent survival rates was explained by winter temperature and population size for adults and 48% by winter temperature for chicks. The process variance outweighed the sampling variance both for chick and adult survival rates, which explained that shrunk estimates obtained under random effects models were close to MLE estimates. A large proportion of the annual variation in the apparent survival rate of chicks appears to be explained by inter-year differences in dispersal rates.

15.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This costly process can be directly tied to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables not mathematically tractable. In this setting it is desired to perform sensitivity studies of the input-output relationships. Hence, a judicious selection procedure for the choice of values of input variables is required; Latin hypercube sampling has been shown to work well on this type of problem.

However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or may be present as a result of different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular, it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of the work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
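The plain Latin hypercube design that this paper generalizes can be sketched in a few lines: divide each dimension into n equal strata, place exactly one point in each stratum per dimension, and pair the strata across dimensions by random permutations. The sizes below are illustrative:

```python
import random

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d: each of the n equal-width strata of every
    dimension contains exactly one point (stratum pairing by random
    permutation, point placed uniformly within its stratum)."""
    columns = []
    for _ in range(d):
        cells = list(range(n))
        rng.shuffle(cells)
        columns.append([(c + rng.random()) / n for c in cells])
    return [[columns[j][i] for j in range(d)] for i in range(n)]

rng = random.Random(5)
pts = latin_hypercube(8, 3, rng)
```

The reweighting scheme of the paper would attach a weight to each of these input vectors reflecting an alternative input distribution; the design itself is unchanged.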


17.
In statistical inference one usual assumption is that the data relate to a set of independent identically distributed random variables. From the viewpoint of sampling theory this assumption is satisfied only if we draw a simple random sample with replacement or the population size is infinite. Then it is not necessary to consider a finite population correction when calculating the variance of a given estimator. To examine the effect of simple random sampling without replacement on the above assumption, the exact variances are calculated for mean value and variance estimation. This may tell us whether the finite population correction is negligible or not.
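For the sample mean, the finite population correction discussed above is explicit: under simple random sampling without replacement the exact design variance is (1 − n/N)·S²/n, versus σ²/n under independent draws. A small sketch with an illustrative toy population:

```python
def exact_var_of_mean(pop, n):
    """Exact design variance of the sample mean under simple random
    sampling without replacement: (1 - n/N) * S^2 / n, where S^2 is the
    population variance with the N-1 divisor."""
    N = len(pop)
    mu = sum(pop) / N
    S2 = sum((x - mu) ** 2 for x in pop) / (N - 1)
    return (1 - n / N) * S2 / n

def var_with_replacement(pop, n):
    """Variance of the sample mean if the n draws were independent."""
    N = len(pop)
    mu = sum(pop) / N
    sigma2 = sum((x - mu) ** 2 for x in pop) / N
    return sigma2 / n

pop = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
v_worepl = exact_var_of_mean(pop, 4)       # 0.5 * (32/7) / 4 = 4/7
v_wrepl = var_with_replacement(pop, 4)     # 4 / 4 = 1
```

With n = N the without-replacement variance vanishes entirely, which is the extreme case in which ignoring the correction is most misleading.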

18.
A genuine small sample theory for post-stratification is developed in this paper. This includes the definition of a ratio estimator of the population mean Ȳ, the derivation of its bias and its exact variance, and a discussion of variance estimation. The estimator has both a within-strata component of variance, which is comparable with that obtained in proportional allocation stratified sampling, and a between-strata component of variance, which tends to zero as the overall sample size becomes large. Certain optimality properties of the estimator are obtained. The generalization of post-stratification from simple random sampling to its use in conjunction with stratification and multi-stage designs is discussed.
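The basic post-stratified estimator behind the theory above weights each stratum's sample mean by its known population share W_h. A minimal sketch (toy data and shares are illustrative):

```python
def poststratified_mean(sample, pop_shares):
    """Post-stratified estimator of the population mean: classify the
    sample after selection, then weight each stratum's sample mean by
    its known population share W_h."""
    sums, counts = {}, {}
    for stratum, y in sample:
        sums[stratum] = sums.get(stratum, 0.0) + y
        counts[stratum] = counts.get(stratum, 0) + 1
    return sum(W * sums[h] / counts[h] for h, W in pop_shares.items())

# strata "a" and "b" each make up half the population
sample = [("a", 1.0), ("a", 3.0), ("b", 10.0)]
est = poststratified_mean(sample, {"a": 0.5, "b": 0.5})  # 0.5*2 + 0.5*10
```

The small-sample subtlety the paper addresses is that the realized stratum counts are random (and may be very small), which is what drives the between-strata variance component.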

19.
This study proposes a synthetic double sampling s chart that integrates the double sampling (DS) s chart and the conforming run length chart. An optimization procedure is proposed to compute the optimal parameters of the synthetic DS s chart. The performance of the synthetic DS s chart is compared with other existing control charts for monitoring process standard deviation. The results show that the synthetic DS s chart is more effective for detecting increases in the process standard deviation for a wide range of shifts. An example is provided to illustrate the operation procedure of the synthetic DS s chart.

20.
ABSTRACT

In this paper, a general class of estimators for estimating the finite population variance in successive sampling on two occasions using multi-auxiliary variables is proposed. The expression for its variance is also derived. Further, it is shown that the proposed general class of estimators is more efficient than the usual variance estimator and the class of variance estimators proposed by Singh et al. (2011) when more than one auxiliary variable is used. In addition, we support this with a numerical illustration.
