Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Some properties of control procedures with variable sampling intervals (VSI) have been investigated in recent years by Amin, Reynolds et al., and others. Such procedures have been shown to be more efficient than the corresponding fixed sampling interval (FSI) charts with respect to the Average Time to Signal (ATS) when the Average Run Length (ARL) values for both types of procedures are held equal. Frequent switching between the different sampling intervals can be a complicating factor in the application of VSI control charts. This problem is addressed in this article, and improved switching rules are presented and evaluated for Shewhart, CUSUM, and EWMA control procedures. The proposed rules considerably reduce the average number of switches between the sampling intervals and also improve the ATS properties of the control procedures when compared to the conventional VSI procedures.
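Since the switching behavior is the crux here, the following minimal Python sketch simulates a conventional two-interval VSI Shewhart chart (not the authors' improved rules) and tracks both the time to signal and the number of interval switches. The warning limit w and the interval pair (1.9, 0.1) are illustrative choices, picked so the in-control average interval is about 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def vsi_shewhart(shift=0.0, L=3.0, w=0.67, d_long=1.9, d_short=0.1, max_samples=10**5):
    """One run of a conventional VSI Shewhart chart on N(shift, 1) data.
    The next interval is d_long when the point falls in the central region
    (|z| <= w) and d_short in the warning region (w < |z| <= L); |z| > L signals.
    Returns (time_to_signal, samples_used, interval_switches)."""
    t, d, switches = 0.0, d_long, 0
    for n in range(1, max_samples + 1):
        t += d                              # wait the chosen interval, then sample
        z = rng.normal(shift, 1.0)
        if abs(z) > L:
            return t, n, switches
        d_next = d_long if abs(z) <= w else d_short
        switches += d_next != d
        d = d_next
    return t, max_samples, switches

for shift in (0.0, 1.0):                    # in control vs. a one-sigma shift
    runs = [vsi_shewhart(shift) for _ in range(2000)]
    print(f"shift={shift}: ATS≈{np.mean([r[0] for r in runs]):8.1f}, "
          f"avg switches≈{np.mean([r[2] for r in runs]):.1f}")
```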

2.
Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control procedures with variable sampling intervals (VSI) have been investigated in recent years for detecting shifts in the process mean. Such procedures have been shown to be more efficient than the corresponding fixed sampling interval (FSI) charts with respect to the average time to signal (ATS) when the average run length (ARL) values of both types of procedures are held equal. Frequent switching between the different sampling intervals can be a complicating factor in the application of control charts with variable sampling intervals. In this article, we propose a double exponentially weighted moving average control procedure with variable sampling intervals (VSI-DEWMA) for detecting shifts in the process mean. It is shown that the proposed VSI-DEWMA control procedure is more efficient than the corresponding fixed sampling interval (FSI) DEWMA chart with respect to the ATS when the ARL values of both types of procedures are held equal. It is also shown that the VSI-DEWMA procedure reduces the average number of switches between the sampling intervals and has ATS properties similar to those of the VSI-EWMA control procedure.
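For reference, the DEWMA charting statistic underlying this procedure is an EWMA applied twice. A minimal sketch of the recursion follows, with an assumed smoothing constant of 0.1 and a hypothetical mean shift halfway through the series.

```python
import numpy as np

def dewma(x, lam=0.1, z0=0.0):
    """Double EWMA: an EWMA of the EWMA of the observations."""
    y = z = z0
    out = []
    for xt in x:
        y = lam * xt + (1 - lam) * y      # first smoothing
        z = lam * y + (1 - lam) * z       # second smoothing
        out.append(z)
    return np.array(out)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(0.5, 1, 50)])  # mean shift at t=50
z = dewma(x)
print(z[45:55].round(3))  # the statistic drifts upward after the shift
```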

3.
A class of cohort sampling designs, including nested case–control, case–cohort and classical case–control designs involving survival data, is studied through a unified approach using Cox's proportional hazards model. By finding an optimal sample reuse method via local averaging, a closed form estimating function is obtained, leading directly to the estimators of the regression parameters that are relatively easy to compute and are more efficient than some commonly used estimators in case–cohort and nested case–control studies. A semiparametric efficient estimator can also be found with some further computation. In addition, the class of sampling designs in this study provides a variety of sampling options and relaxes the restrictions of sampling schemes that are currently available.

4.
We consider variable acceptance sampling plans that control the lot or process fraction defective, where a specification limit defines acceptable quality. The problem is to find a sampling plan that fulfils some conditions, usually on the operating characteristic. Its calculation heavily depends on distributional properties that, in practice, might be doubtful. If prior data are already available, we propose to estimate the sampling plan by means of bootstrap methods. The bias and standard error of the estimated plan can be assessed easily by Monte Carlo approximation to the respective bootstrap moments. This resampling approach does not require strong assumptions and, furthermore, is a flexible method that can be extended to any statistic that might be informative for the fraction defective in a lot.
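A minimal sketch of the resampling idea, assuming a normal model, hypothetical prior data, and a hypothetical specification limit U: the plug-in estimate of the fraction defective is bootstrapped, and its bias and standard error are approximated by the Monte Carlo bootstrap moments.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
prior = rng.normal(10.0, 1.0, size=80)   # hypothetical prior measurements
U = 12.5                                  # hypothetical upper specification limit

def frac_defective(x):
    """Plug-in estimate of P(X > U) under a fitted normal model."""
    return norm.sf((U - x.mean()) / x.std(ddof=1))

p_hat = frac_defective(prior)
boot = np.array([frac_defective(rng.choice(prior, size=prior.size, replace=True))
                 for _ in range(2000)])
print(f"p_hat={p_hat:.4f}  bootstrap bias≈{boot.mean() - p_hat:+.4f}  "
      f"SE≈{boot.std(ddof=1):.4f}")
```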

5.
Testing the equality of two survival distributions can be difficult in a prevalent cohort study when nonrandom sampling of subjects is involved. Due to the biased sampling scheme, the independent censoring assumption is often violated. Although the biased inference caused by length-biased sampling has been widely recognized in the statistical, epidemiological, and economics literature, there is no satisfactory solution for efficient two-sample testing. We propose an asymptotically most efficient nonparametric test by properly adjusting for length-biased sampling. The test statistic is derived from a full likelihood function and can be generalized from the two-sample test to a k-sample test. The asymptotic properties of the test statistic under the null hypothesis are derived using its asymptotic independent and identically distributed representation. We conduct extensive Monte Carlo simulations to evaluate the performance of the proposed test statistics and compare them with the conditional test and the standard logrank test for different biased sampling schemes and right-censoring mechanisms. For length-biased data, empirical studies demonstrate that the proposed test is substantially more powerful than the existing methods. For general left-truncated data, the proposed test is robust, still maintains accurate control of the type I error rate, and is also more powerful than the existing methods if the truncation patterns and right-censoring patterns are the same between the groups. We illustrate the methods using two real data examples.

6.
This paper considers the problem of using control charts to simultaneously monitor more than one parameter, with emphasis on simultaneously monitoring the mean and variance. Fixed sampling interval control charts are modified to use variable sampling intervals depending on what is observed from the data. Two basic strategies are investigated. One strategy uses separate control charts for each parameter; a second strategy uses a proposed single combined statistic which is sensitive to shifts in both the mean and variance. Each procedure is compared to corresponding fixed interval procedures. It is seen that for both strategies the variable sampling interval approach is substantially more efficient than fixed interval procedures.
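The paper's combined statistic is not reproduced here; as a loose illustration only, the sketch below uses one plausible stand-in (the maximum of a standardized subgroup mean and a normal-score transform of the subgroup variance) inside the usual VSI interval-selection logic. All limits, intervals, and the shift scenario are illustrative.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(7)
n = 5                # subgroup size
w, L = 1.0, 3.0      # illustrative warning and action limits

def combined_stat(sample):
    """Max of two standardized components, one sensitive to the mean and one
    (via a normal-score transform of the subgroup variance) to the variance."""
    zm = abs(sample.mean()) * np.sqrt(n)                  # ~|N(0,1)| in control
    u = chi2.cdf((n - 1) * sample.var(ddof=1), df=n - 1)  # ~U(0,1) in control
    zv = abs(norm.ppf(np.clip(u, 1e-12, 1 - 1e-12)))
    return max(zm, zv)

t = 0.0
for j in range(1, 10**5):
    sigma = 1.0 if j <= 50 else 1.5       # variance shift after sample 50
    c = combined_stat(rng.normal(0.0, sigma, n))
    if c > L:
        print(f"signal at sample {j}, time ≈ {t:.1f}")
        break
    t += 1.9 if c <= w else 0.1           # long interval when quiet, short in the warning region
```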

7.
This paper proposes an economic-statistical design of the EWMA chart with time-varying control limits, in which Taguchi's quadratic loss function is incorporated into the economic-statistical design based on Lorenzen and Vance's economic model. A nonlinear program with statistical performance constraints is developed and solved to minimize the expected total quality cost per unit time. This model is divided into three parts, depending on whether production continues while the assignable cause is being searched for and/or repaired. Through a computational procedure, the optimal decision variables, including the sample size, the sampling interval, the control limit width, and the smoothing constant, can be solved for each model. It is shown that the optimal economic-statistical design solution can be found from the set of optimal solutions obtained from the statistical design, and that both the optimal sample size and sampling interval always decrease as the magnitude of the shift increases.

8.
The goal of this paper is to propose the uniformly minimum variance unbiased estimator of the odds ratio in case–control studies under an inverse sampling design. The problem of estimating the odds ratio plays a central role in case–control studies. However, the traditional sampling schemes appear inadequate when the expected frequencies of unexposed cases and exposed controls can be very low. In such a case, it is convenient to use the inverse sampling design, which requires that random drawings be continued until a given number of relevant events has emerged. In this paper we prove that a uniformly minimum variance unbiased estimator of the odds ratio does not exist under usual binomial sampling, while the standard odds ratio estimator is uniformly minimum variance unbiased under inverse sampling. In addition, we compare these two sampling schemes by means of large-sample theory and small-sample simulation.
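The sketch below only contrasts the two sampling schemes by small-sample simulation; it does not reproduce the paper's UMVU construction. Group sizes, stopping counts, and exposure probabilities are hypothetical, and 0.5 is added under binomial sampling to avoid zero cells.

```python
import numpy as np

rng = np.random.default_rng(3)
p1, p0 = 0.05, 0.02                 # exposure probabilities, cases vs controls
true_or = (p1 / (1 - p1)) / (p0 / (1 - p0))

def or_binomial(n=300):
    """Standard OR estimate with fixed sample sizes (0.5 added to avoid zeros)."""
    a, b = rng.binomial(n, p1), rng.binomial(n, p0)
    return ((a + .5) / (n - a + .5)) / ((b + .5) / (n - b + .5))

def or_inverse(r=10):
    """Sample each group until r exposed subjects are seen; the number of
    unexposed subjects drawn before the r-th exposed one is NB(r, p)."""
    f1 = rng.negative_binomial(r, p1)
    f0 = rng.negative_binomial(r, p0)
    return (r / max(f1, 1)) / (r / max(f0, 1))   # = f0 / f1, guarded against 0

reps = 20000
print("true OR            :", round(true_or, 3))
print("binomial, mean est :", round(np.mean([or_binomial() for _ in range(reps)]), 3))
print("inverse,  mean est :", round(np.mean([or_inverse() for _ in range(reps)]), 3))
```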

9.
Research on Continuous Sampling Estimation Methods Based on Regression Composite Techniques
In continuous sample surveys that use sample rotation, one can exploit not only information on the study variable from previous survey waves but also auxiliary-variable information from the current wave, building a regression model to obtain regression estimates and then constructing a regression composite estimator. On this basis, the optimal sample rotation rate and optimal weighting coefficients are determined so as to minimize the variance of the regression composite estimator, thereby further improving the estimation precision of continuous sample surveys.
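A minimal sketch of a composite-type estimator in a rotating panel, under illustrative choices: 75% of units are retained between waves, and a fixed composite weight w combines the current direct estimate with the previous estimate updated by the change measured on the matched units. In practice w and the rotation rate would be chosen to minimize variance, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(11)
n, overlap = 400, 0.75          # sample size; 75% of units retained between waves
rho, w = 0.8, 0.5               # wave-to-wave correlation; illustrative composite weight

# simulate two waves with correlated values for the retained (matched) units
m = int(n * overlap)
x_prev = rng.normal(50, 10, n)                                   # wave t-1 sample
dev = rho * (x_prev[:m] - 50) + rng.normal(0, 10 * np.sqrt(1 - rho**2), m)
x_curr = np.concatenate([51 + dev, rng.normal(51, 10, n - m)])   # true mean 51 at wave t

ybar_prev = x_prev.mean()                                # direct estimate, wave t-1
delta_hat = x_curr[:m].mean() - x_prev[:m].mean()        # change on the matched units
composite = (1 - w) * x_curr.mean() + w * (ybar_prev + delta_hat)
print(f"direct: {x_curr.mean():.3f}   composite: {composite:.3f}")
```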

10.
Under the new circumstances of reform and opening up, China's government statistical agencies have carried out a series of rural statistical surveys, such as the rural household survey, providing data on many aspects of the "three rural issues" (agriculture, rural areas, and farmers). By analyzing and summarizing the contradictions and problems in the current rural household sampling survey scheme, and taking cutting-edge international continuous sampling survey methods as the theoretical basis, this paper proposes a series of reform measures concerning the construction of the rural household sampling frame, the drawing of samples for successive survey waves, the design of a two-dimensional balanced rotation pattern, continuous sampling estimation and its variance estimation, and the adjustment and analysis of continuous time series data, so as to design a more scientific survey scheme for collecting and providing timely and accurate data on the "three rural issues". Other types of sampling survey schemes can be designed and improved along the same lines.

11.
Single sampling plans are widely used for appraising incoming product quality. However, for situations with a continuous product flow, lot-by-lot demarcations may not exist, and it may be necessary to use alternative procedures for continuous processes, such as CSP-1. In this case, one would like to understand how the average performance of continuous sampling procedures compares to the more commonly used single sampling plans.

In this study, a model is devised which can be used to relate plan performance between single sample lot acceptance procedures and Dodge's (1943) CSP-1 continuous sampling plan. It is shown that it is generally not possible to match performance based upon operating characteristic curve expressions for the two plans. Instead, the plans are matched by equating expressions for π(p), the long-run proportion of product which is accepted, under both procedures. This is shown to be equivalent to matching properties on an average outgoing quality basis. The methodology may be extended to any derivative plan under MIL-STD-1235B (1982), the military standard for continuous acceptance sampling.
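A hedged Monte Carlo sketch of π(p) under the two procedures, with illustrative plan parameters. It assumes a rejected lot is not accepted at all under single sampling (so π(p) is the OC value), and that under CSP-1 "accepted" means every produced unit except the defectives caught by inspection.

```python
import numpy as np

rng = np.random.default_rng(5)

def pi_single(p, n=50, c=1, reps=20000):
    """Long-run proportion of product accepted under single sampling, assuming
    a rejected lot is not accepted at all, so pi(p) equals the OC curve Pa(p)."""
    return float(np.mean(rng.binomial(n, p, size=reps) <= c))

def pi_csp1(p, i=40, f=0.1, units=500_000):
    """Simulate Dodge's CSP-1: 100% inspection until i consecutive conforming
    units, then inspect a random fraction f; an inspected defective restarts
    screening.  'Accepted' here means passed downstream, i.e. every unit
    except defectives caught by inspection (an assumption of this sketch)."""
    defect = rng.random(units) < p
    sampled = rng.random(units) < f
    screening, run, caught = True, 0, 0
    for k in range(units):
        if (screening or sampled[k]) and defect[k]:
            caught += 1
            screening, run = True, 0
        elif screening:
            run += 1
            if run >= i:
                screening, run = False, 0
    return 1 - caught / units

for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p={p:<5}: single sampling ≈ {pi_single(p):.3f}   CSP-1 ≈ {pi_csp1(p):.4f}")
```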

12.
Research on Continuous Sampling Surveys Based on Time Series Analysis Methods
To address the problem of how to use survey information from past waves to improve the precision of current-wave estimates in continuous sample surveys, time series analysis methods are introduced. Time series models are established for the different situations arising in continuous sample surveys, such as repeated samples and overlapping samples, and well-established time series techniques are used to derive linear composite estimators of the population characteristics. Because time series methods can make full use of the survey information from all previous waves, they yield estimators of higher precision.
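A minimal sketch of one way to pool all past waves with the current one: treat the true series as AR(1) and the direct wave estimates as noisy observations, then combine them with a scalar Kalman filter. The AR coefficient and the variances are illustrative, and this is a generic stand-in rather than the paper's specific estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi = 40, 0.9                 # number of waves; AR(1) coefficient (illustrative)
q, r = 0.3**2, 1.0**2            # state innovation and survey sampling variances (illustrative)

# simulate a true series and direct survey estimates of it
theta = np.zeros(T)
for t in range(1, T):
    theta[t] = phi * theta[t-1] + rng.normal(0, np.sqrt(q))
y = theta + rng.normal(0, np.sqrt(r), T)   # direct estimates, one per wave

# scalar Kalman filter: pool all past waves with the current direct estimate
est, P, out = 0.0, 1.0, []
for t in range(T):
    pred, Pp = phi * est, phi**2 * P + q   # predict from the previous waves
    K = Pp / (Pp + r)                      # weight given to the current direct estimate
    est, P = pred + K * (y[t] - pred), (1 - K) * Pp
    out.append(est)

print("direct RMSE   :", round(float(np.sqrt(np.mean((y - theta)**2))), 3))
print("filtered RMSE :", round(float(np.sqrt(np.mean((np.array(out) - theta)**2))), 3))
```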

13.
Poisson and collocated sampling are methods of selecting samples that allow simple control over which units are in the sample and which are not. They are particularly suitable when selecting more than one sample from the same frame. Sections 2 and 3 deal with Poisson sampling. Section 4 deals with modified Poisson sampling, a device to ensure that an empty sample is never selected. Sections 5, 6 and 7 deal with collocated sampling, another device for reducing the variance of the sample size. In Section 8 a comparative study of variances and mean square errors is presented for a number of unequal probability sampling strategies.
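A minimal sketch of Poisson sampling with a Horvitz-Thompson total estimate, using inclusion probabilities proportional to a hypothetical size measure. The random (possibly empty) sample size visible in the output is exactly what the modified and collocated variants are designed to control.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 1000
size = rng.lognormal(0, 0.5, N)              # hypothetical size measure
y = 3.0 * size + rng.normal(0, 1, N)         # study variable, roughly proportional to size
pi = np.clip(100 * size / size.sum(), 0, 1)  # inclusion probs, expected sample size ~100

def poisson_sample(pi):
    """Independent Bernoulli trials: unit k enters with probability pi[k]."""
    return rng.random(pi.size) < pi

totals, sizes = [], []
for _ in range(5000):
    s = poisson_sample(pi)
    totals.append(np.sum(y[s] / pi[s]))      # Horvitz-Thompson estimate of sum(y)
    sizes.append(s.sum())

print("true total:", round(y.sum(), 1), " HT mean:", round(np.mean(totals), 1))
print("sample size: mean", np.mean(sizes), " sd", round(np.std(sizes), 2))
```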

14.
Many sampling problems involving multiple populations can be considered under the semiparametric framework of the biased, or weighted, sampling model. Included under this framework is logistic regression under case–control sampling. For any model, atypical observations can greatly influence the maximum likelihood estimate of the parameters. Several robust alternatives have been proposed for the special case of logistic regression, but some current techniques can exhibit poor behavior in many common situations. In this paper a new family of procedures is constructed to estimate the parameters in the semiparametric biased sampling model. The procedures incorporate a minimum distance approach, but one based on characteristic functions. The estimators can also be represented as the minimizers of quadratic forms in simple residuals, thus yielding straightforward computation. For the case of logistic regression, the resulting estimators are shown to be competitive with the existing robust approaches in terms of both robustness and efficiency, while maintaining affine equivariance. The approach is developed under the case–control sampling scheme, yet is shown to be applicable under prospective-sampling logistic regression as well.

15.
Ranked set sampling (RSS) was first used to obtain a more efficient estimator of the population mean, as compared to the one based on simple random sampling. This technique is useful when judgment ordering of a simple random sample (SRS) of small size can be done easily and fairly accurately, but exact measurement of an observation is difficult and expensive. It is noted that, due to the complicated likelihood, parametric estimation with RSS is difficult. In this article, the notion of steady-state RSS is introduced, its relation to stratified sampling is established, and its possible use in parametric estimation is explored and put forward for further investigations.
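A minimal sketch of the basic RSS estimator of a mean, assuming perfect judgment ranking, compared with SRS using the same number of measured units.

```python
import numpy as np

rng = np.random.default_rng(4)
m, cycles = 4, 5          # set size and number of cycles -> n = m*cycles measured units

def rss_mean():
    """Ranked set sampling: from each set of m units, measure only the
    order statistic assigned to that set (perfect ranking assumed)."""
    meas = []
    for _ in range(cycles):
        for i in range(m):
            s = np.sort(rng.normal(0, 1, m))   # rank a fresh set of m units
            meas.append(s[i])                  # measure only the (i+1)-th smallest
    return np.mean(meas)

def srs_mean():
    return rng.normal(0, 1, m * cycles).mean()

rss = [rss_mean() for _ in range(5000)]
srs = [srs_mean() for _ in range(5000)]
print("var SRS:", round(np.var(srs), 5), " var RSS:", round(np.var(rss), 5))
```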

16.
When the sampling units can be ranked more easily than they can be quantified, ranked set sampling (RSS) is a viable alternative to traditional simple random sampling (SRS). Much effort has been devoted to modifying the basic RSS protocol with the aim of deriving more efficient estimators of population attributes. Entropy has been seminal in developing measures of distributional disparity as tools for statistical inference. This article is concerned with testing exponentiality based on sample entropy under some RSS-based designs. A simulation study shows that the proposed tests possess good power properties against several alternatives as compared with the ordinary test based on SRS.
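The sketch below illustrates the entropy idea under plain SRS rather than the article's RSS designs: Vasicek's spacing estimator of entropy, and the statistic exp(H)/(e·mean), which should be near 1 under exponentiality because the exponential maximizes entropy (H = 1 + log μ) among nonnegative distributions with a given mean, and smaller otherwise. Critical values would be obtained by simulation.

```python
import numpy as np

def vasicek_entropy(x, m=None):
    """Vasicek's spacing estimator of differential entropy."""
    x = np.sort(x)
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))     # a common window choice
    lo = np.clip(np.arange(n) - m, 0, n - 1)   # boundary convention: clamp to extremes
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))

def exp_test_stat(x):
    """exp(H-hat)/(e * mean): near 1 under exponentiality, smaller otherwise."""
    return np.exp(vasicek_entropy(x)) / (np.e * np.mean(x))

rng = np.random.default_rng(6)
print("exponential:", round(exp_test_stat(rng.exponential(2.0, 100)), 3))
print("uniform    :", round(exp_test_stat(rng.uniform(0, 1, 100)), 3))
```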

17.
Although many methods are available for performing multiple comparisons based on some measure of location, most can be unsatisfactory in at least some situations when sample sizes are small, say less than or equal to twenty. That is, the actual Type I error probability can substantially exceed the nominal level, and for some methods it can be well below the nominal level, suggesting that power might be relatively poor. In addition, all methods based on means can have relatively low power under arbitrarily small departures from normality. Currently, a method based on 20% trimmed means and a percentile bootstrap performs relatively well (Wilcox, in press). However, symmetric trimming was used, even when sampling from a highly skewed distribution, and rigid adherence to 20% trimming can result in low efficiency when a distribution is sufficiently heavy-tailed. Robust M-estimators are more flexible, but they can be unsatisfactory in terms of Type I errors when sample sizes are small. This paper describes an alternative approach based on a modified one-step M-estimator that introduces more flexibility than a trimmed mean but provides better control over Type I error probabilities than a one-step M-estimator.
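A minimal sketch of a modified one-step M-estimator in the spirit described: points flagged by the MAD-median rule are discarded and the rest averaged. The cutoff 2.24 follows Wilcox's usual recommendation for this estimator, and the data are hypothetical.

```python
import numpy as np

def mom(x, k=2.24):
    """Modified one-step M-estimator: discard points flagged as outliers by
    the MAD-median rule, then average what remains (no empirical correction
    term, unlike a full one-step M-estimator)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    madn = np.median(np.abs(x - med)) / 0.6745   # MAD rescaled for normal data
    keep = np.abs(x - med) <= k * madn
    return x[keep].mean()

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0, 1, 18), [15.0, -12.0]])  # small sample + gross errors
print("mean:", round(x.mean(), 3), " MOM:", round(mom(x), 3))
```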

18.
This paper shows how the average run length for a one-sided CUSUM chart varies as a function of the length of the sampling interval between consecutive observations, the decision limit for the CUSUM statistic, and the amount of autocorrelation between successive observations. It is shown that the rate of false alarms can be decreased considerably, without modifying the rate of valid alarms, by decreasing the sampling interval and appropriately increasing the decision limit. It is also shown that this can be done even when the shorter sampling interval induces moderate autocorrelation between successive observations.
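A minimal simulation sketch of the in-control ARL of a one-sided CUSUM when successive observations follow an AR(1) process with unit marginal variance, so the effect of autocorrelation (such as that induced by a shorter sampling interval) on the false-alarm rate can be seen directly. The reference value k, decision limit h, and AR coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

def cusum_rl(k=0.5, h=5.0, phi=0.0, max_n=10**6):
    """Run length of a one-sided CUSUM S_t = max(0, S_{t-1} + x_t - k)
    on AR(1) observations with coefficient phi and unit marginal variance."""
    s, x = 0.0, 0.0
    sd = np.sqrt(1 - phi**2)        # innovation sd giving marginal variance 1
    for n in range(1, max_n + 1):
        x = phi * x + rng.normal(0, sd)
        s = max(0.0, s + x - k)
        if s > h:
            return n
    return max_n

for phi in (0.0, 0.25, 0.5):
    arl = np.mean([cusum_rl(phi=phi) for _ in range(400)])
    print(f"phi={phi}: in-control ARL ≈ {arl:.0f}")
```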

19.
Conventional Phase II statistical process control (SPC) charts are designed using control limits; a chart signals a process distributional shift when its charting statistic exceeds a properly chosen control limit. Designed this way, we know only whether the chart is out of control at a given time, which is not informative enough about the likelihood of a potential distributional shift. In this paper, we suggest designing SPC charts using p values. In this approach, at each time point of Phase II process monitoring, the p value of the observed charting statistic is computed under the assumption that the process is in control. If the p value is less than a pre-specified significance level, then a signal of distributional shift is delivered. This p value approach has several benefits compared to the conventional design using control limits. First, after a signal of distributional shift is delivered, we know how strong the signal is. Second, even when the p value at a given time point is larger than the significance level, it still provides useful information about how stably the process is performing at that time point. The second benefit is especially useful when we adopt a variable sampling scheme, by which the sampling interval can be longer when we have more evidence that the process runs stably, supported by a larger p value. To demonstrate the p value approach, we consider univariate process monitoring by cumulative sum control charts in various cases.
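A minimal sketch of the p value design for a one-sided CUSUM: the in-control distribution of the charting statistic at each time point is approximated by simulated reference paths, and a Monte Carlo p value is reported at each monitoring point. The significance level, shift point, and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(12)
k, alpha, T = 0.5, 0.005, 60

def cusum_path(x):
    """One-sided CUSUM path S_t = max(0, S_{t-1} + x_t - k)."""
    s, out = 0.0, []
    for xt in x:
        s = max(0.0, s + xt - k)
        out.append(s)
    return np.array(out)

# reference distribution of S_t at each t, simulated under the in-control model N(0,1)
ref = np.array([cusum_path(rng.normal(0, 1, T)) for _ in range(5000)])

# monitor one stream whose mean shifts to 1 at t = 30
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(1, 1, 30)])
s = cusum_path(x)
for t in range(T):
    p = (1 + np.sum(ref[:, t] >= s[t])) / (1 + ref.shape[0])   # Monte Carlo p value
    if p < alpha:
        print(f"signal at t={t}, p={p:.4f}")
        break
```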

20.
Simulated annealing—moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions—has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously studied tempered transitions, and can be seen as a generalization of a recently proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers.
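A minimal sketch of annealed importance sampling on a toy one-dimensional problem: a broad normal start, a bimodal target, a linear tempering schedule, and a few Metropolis moves per temperature. The averaged weights estimate the ratio of normalizing constants, which is exactly 0.2 in this construction; all tuning choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(13)

def log_f0(x):   # tractable start: N(0, 5^2), unnormalized (Z0 = 5*sqrt(2*pi))
    return -x**2 / 50.0

def log_f1(x):   # bimodal target kernel: N(-3, 0.5^2) + N(3, 0.5^2) (Z1 = sqrt(2*pi))
    return np.logaddexp(-(x - 3)**2 / 0.5, -(x + 3)**2 / 0.5)

betas = np.linspace(0, 1, 201)   # linear tempering schedule

def ais_run(n_mh=3, step=0.7):
    x, logw = rng.normal(0, 5), 0.0          # exact draw from f0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * (log_f1(x) - log_f0(x))   # incremental weight
        for _ in range(n_mh):                 # Metropolis moves leaving f_b invariant
            prop = x + rng.normal(0, step)
            log_a = ((1 - b) * (log_f0(prop) - log_f0(x))
                     + b * (log_f1(prop) - log_f1(x)))
            if np.log(rng.random()) < log_a:
                x = prop
    return logw

logw = np.array([ais_run() for _ in range(500)])
# the mean of the weights estimates Z1/Z0; the exact ratio here is 0.2
est = np.exp(np.logaddexp.reduce(logw) - np.log(len(logw)))
print("estimated Z1/Z0:", round(float(est), 4))
```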
