Similar Documents
20 similar documents found.
1.
This paper demonstrates the use of the maxima nomination sampling (MNS) technique in the design and evaluation of single AQL, LTPD, and EQL acceptance sampling plans for attributes. We explore the effect of sample size and acceptance number on the performance of our proposed MNS plans using the operating characteristic (OC) curve. Among other results, we show that MNS acceptance sampling plans with smaller sample sizes and larger acceptance numbers perform better than commonly used acceptance sampling plans for attributes based on the simple random sampling (SRS) technique. Indeed, MNS acceptance sampling plans result in OC curves which, compared to their SRS counterparts, are much closer to the ideal OC curve. A computer program is designed which can be used to specify the optimum MNS acceptance sampling plan and to show, visually, how the shape of the OC curve changes when parameters of the acceptance sampling plan vary. Theoretical results and numerical evaluations are given.
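As a point of reference for the OC-curve comparison above, the sketch below computes the OC curve of a conventional single attributes sampling plan under SRS, where the acceptance probability is Pa(p) = P(D <= c) with D ~ Binomial(n, p). The plan parameters are illustrative, not taken from the paper, and the MNS variant itself is not implemented here.

```python
# A minimal sketch of the SRS baseline: the OC curve of a single
# attributes sampling plan (n, c). Plan parameters are illustrative.
import numpy as np
from scipy.stats import binom

def oc_curve(n, c, p):
    """Probability of accepting a lot with fraction nonconforming p."""
    return binom.cdf(c, n, p)

p = np.linspace(0.0, 0.15, 31)
for n, c in [(50, 1), (100, 2)]:          # two candidate plans
    print(n, c, np.round(oc_curve(n, c, p[::10]), 3))
```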

2.
In this paper, a new mixed sampling plan based on the process capability index (PCI) Cpk is proposed; the resultant plan is called a mixed variable lot-size chain sampling plan (ChSP). The proposed mixed plan comprises both attributes and variables inspections. The variable lot-size sampling plan can be used for the inspection of attribute quality characteristics, while for the inspection of measurable quality characteristics, the variables ChSP based on the PCI is used. We consider both symmetric and asymmetric fraction nonconforming cases for the variables ChSP. Tables are developed for determining the optimal parameters of the proposed mixed plan based on two points on the operating characteristic (OC) curve. In order to construct the tables, the problem is formulated as a nonlinear program in which the average sample number function is the objective to be minimized and the lot acceptance probabilities at the acceptable quality level and the limiting quality level on the OC curve are the constraints. The practical implementation of the proposed mixed sampling plan is explained with an illustrative real example. Advantages of the proposed sampling plan are also discussed in terms of comparison with other existing sampling plans.
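To make the two-point OC design concrete, the following hedged sketch solves the simplest attributes analogue: find the smallest single sampling plan (n, c) whose OC curve passes above 1 - alpha at the AQL and below beta at the LQL. The paper's actual optimization is over the variables ChSP parameters with the average sample number as objective; the risk values here are illustrative.

```python
# A hedged illustration of the two-point OC design idea for the simplest
# attributes case, not the authors' variables ChSP.
from scipy.stats import binom

def two_point_plan(aql, lql, alpha=0.05, beta=0.10, n_max=2000):
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            if binom.cdf(c, n, aql) >= 1 - alpha and binom.cdf(c, n, lql) <= beta:
                return n, c                 # smallest n meeting both risk points
    return None

print(two_point_plan(aql=0.01, lql=0.05))
```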

3.
Progressive censoring is quite useful in many practical situations where budget constraints are in place or there is a demand for rapid testing. Balasooriya & Saw (1998) present reliability sampling plans for the two-parameter exponential distribution based on progressively censored samples. However, the operating characteristic (OC) curve derived in their paper does not depend on the sample size. This appears to be because, in their computations, they fail to take into account the proportion of uncensored data, which also has an important influence on the subsequent developments. In consequence, their OC curve is only valid when there is no censoring. In this paper, some modifications are proposed. These are needed to obtain a proper design of the above sampling plan. Whenever at least two uncensored observations are available, the OC curve is derived in closed form and a procedure for determining progressively censored reliability sampling plans is also presented. Finally, the example considered by Balasooriya & Saw is used to illustrate the results developed in this paper for several censoring levels.
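For readers who want to experiment with such plans numerically, the sketch below generates a progressively Type-II censored sample from an exponential distribution using the normalized-spacings property: each gap between successive observed failures, multiplied by the number of units still on test, is an independent exponential variable. The censoring scheme R is illustrative.

```python
# A minimal sketch: progressively Type-II censored exponential data
# via normalized spacings. The scheme R is illustrative.
import numpy as np

def progressive_exp_sample(theta, R, rng=None):
    rng = rng or np.random.default_rng(0)
    m = len(R)
    n = m + sum(R)                      # total units placed on test
    x, t = [], 0.0
    for i in range(m):
        at_risk = n - i - sum(R[:i])    # units on test before the i-th failure
        t += rng.exponential(theta) / at_risk
        x.append(t)                     # R[i] units withdrawn after failure i
    return np.array(x)

print(progressive_exp_sample(theta=2.0, R=[2, 0, 1, 0, 3]))
```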

4.
We develop quality control charts for attributes using the maxima nomination sampling (MNS) method and compare them with the usual control charts based on the simple random sampling (SRS) method, using average run length (ARL) performance, the required sample size in detecting quality improvement, and the non-existence region for control limits. We study the effect of the sample size, the set size, and the nonconformity proportion on the performance of MNS control charts using the ARL curve. We show that the MNS control chart can be used as a better benchmark for indicating quality improvement or quality deterioration relative to its SRS counterpart. We consider MNS charts from a cost perspective. We also develop MNS attribute control charts using randomized tests. A computer program is designed to determine the optimal control limits for an MNS p-chart such that, assuming known parameter values, the absolute deviation between the ARL and a specific nominal value is minimized. We provide good approximations for the optimal MNS control limits using regression analysis. Theoretical results are augmented with numerical evaluations, which show that MNS-based control charts can yield substantial improvement over the usual control charts based on SRS.
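As a reminder of the SRS benchmark involved in the ARL comparison, the sketch below computes the ARL of a standard p-chart as 1/P(signal), where a signal occurs when the plotted fraction falls outside the 3-sigma limits. The shift size and chart parameters are illustrative; the MNS chart itself is not implemented here.

```python
# A minimal sketch of the SRS p-chart ARL used as the comparison benchmark.
import numpy as np
from scipy.stats import binom

def p_chart_arl(n, p0, p):
    """ARL of a p-chart with 3-sigma limits designed for p0, at true level p."""
    sigma = np.sqrt(p0 * (1 - p0) / n)
    lcl, ucl = max(p0 - 3 * sigma, 0.0), p0 + 3 * sigma
    # Signal when the plotted fraction d/n falls outside [lcl, ucl].
    p_in = binom.cdf(np.floor(ucl * n), n, p) - binom.cdf(np.ceil(lcl * n) - 1, n, p)
    return 1.0 / (1.0 - p_in)

print(p_chart_arl(n=100, p0=0.05, p=0.05))  # in-control ARL
print(p_chart_arl(n=100, p0=0.05, p=0.10))  # out-of-control ARL
```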

5.
In drug development, bioequivalence studies are used to indirectly demonstrate clinical equivalence of a test formulation and a reference formulation of a specific drug by establishing their equivalence in bioavailability. These studies are typically run as crossover studies. In the planning phase of such trials, investigators and sponsors are often faced with a high variability in the coefficients of variation of the typical pharmacokinetic endpoints such as the area under the concentration curve or the maximum plasma concentration. Adaptive designs have recently been considered to deal with this uncertainty by adjusting the sample size based on the accumulating data. Because regulators generally favor sample size re-estimation procedures that maintain the blinding of the treatment allocations throughout the trial, we propose in this paper a blinded sample size re-estimation strategy and investigate its error rates. We show that the procedure, although blinded, can lead to some inflation of the type I error rate. In the context of an example, we demonstrate how this inflation of the significance level can be adjusted for to achieve control of the type I error rate at a pre-specified level. Furthermore, some refinements of the re-estimation procedure are proposed to improve the power properties, in particular in scenarios with small sample sizes. Copyright © 2014 John Wiley & Sons, Ltd.
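A minimal sketch of the blinded idea follows: the sample size is recomputed from the pooled variance of the interim responses, with treatment labels never used. The formula shown is a generic two-group difference-detection formula standing in for the TOST-based bioequivalence calculation, and all names and settings are illustrative assumptions.

```python
# A hedged sketch of blinded sample size re-estimation; the generic
# formula is a stand-in for the TOST bioequivalence calculation.
import numpy as np
from scipy.stats import norm

def blinded_n_per_group(interim, delta, alpha=0.05, beta=0.20):
    """Recompute n per group from the blinded variance of interim responses."""
    s2_blinded = np.var(interim, ddof=1)     # treatment labels not used
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return int(np.ceil(2 * s2_blinded * z**2 / delta**2))

rng = np.random.default_rng(1)
interim = rng.normal(0.0, 1.2, size=40)      # pooled interim log-AUC values
print(blinded_n_per_group(interim, delta=0.5))
```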

6.
A simulation study was conducted to assess how well the sample size necessary to achieve a stipulated margin of error can be estimated prior to sampling. Our concern was particularly focused on performance when sampling from a very skewed distribution, which is a common feature of many biological, economic, and other populations. We examined two approaches for estimating sample size: the commonly used strategy aimed at regulating the average magnitude of the stipulated margin of error, and a previously proposed strategy that controls the tolerance probability with which the stipulated margin of error is exceeded. Results of the simulation revealed that (1) skewness does not much affect the average estimated sample size but can greatly extend the range of estimated sample sizes; and (2) skewness does reduce the effectiveness of Kupper and Hafner's sample size estimator, though its effectiveness is impacted less by skewness directly than by the common practice of estimating the population variance via a pilot sample from the skewed population. Nonetheless, the simulations suggest that estimating sample size to control the probability with which the desired margin of error is achieved is a worthwhile alternative to the usual sample size formula, which controls the average width of the confidence interval only.
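The contrast between the two strategies can be checked directly by Monte Carlo. The hedged sketch below sizes a sample with the usual average-margin formula and then estimates the tolerance probability that the realized margin from a skewed (lognormal) population actually stays within the stipulated bound E; all settings are illustrative.

```python
# A minimal sketch: the usual formula controls the *average* margin, so the
# probability that a realized margin meets E can be well below one.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
E, alpha = 0.5, 0.05
sigma2 = np.exp(1) * (np.exp(1) - 1)          # variance of lognormal(0, 1)
n = int(np.ceil(norm.ppf(1 - alpha / 2) ** 2 * sigma2 / E ** 2))

hits, reps = 0, 10_000
for _ in range(reps):
    x = rng.lognormal(0.0, 1.0, size=n)
    margin = t.ppf(1 - alpha / 2, n - 1) * x.std(ddof=1) / np.sqrt(n)
    hits += margin <= E
print(n, hits / reps)  # tolerance probability that the margin meets E
```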

7.

Additional critical points are presented for the Steel–Dwass–Critchlow–Fligner distribution-free multiple comparison procedure for comparing all pairs of three population medians in the one-way layout. A computational technique developed by van de Wiel is used to find critical points yielding an experimentwise error rate of approximately 0.01, 0.05, and 0.10 for a total sample size of at most 30, with individual sample sizes from 4 to 10 and a maximum sample size of at least 8, and for equal sample sizes from 8 to 14. Additional discussion is given regarding step-down testing methods and the dangers of using the Steel–Dwass–Critchlow–Fligner procedure with unequal sample sizes if two of the sample sizes are very small.

8.
In this paper, we consider the problem of empirical choice of optimal block sizes for block bootstrap estimation of population parameters. We suggest a nonparametric plug-in principle that can be used for estimating ‘mean squared error’-optimal smoothing parameters in general curve estimation problems, and establish its validity for estimating optimal block sizes in various block bootstrap estimation problems. A key feature of the proposed plug-in rule is that it can be applied without explicit analytical expressions for the constants that appear in the leading terms of the optimal block lengths. Furthermore, we also discuss the computational efficacy of the method and explore its finite sample properties through a simulation study.
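To fix ideas, the sketch below implements the quantity whose block length the plug-in rule tunes: a moving-block bootstrap estimate of the variance of the sample mean for a candidate block length l. The rule itself (choosing l to minimize estimated MSE) is not reproduced here, and the series and settings are illustrative.

```python
# A hedged sketch of the moving-block bootstrap estimator being tuned.
import numpy as np

def mbb_var_of_mean(x, l, B=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(x)
    k = n // l                                 # blocks per bootstrap series
    starts = np.arange(n - l + 1)
    means = np.empty(B)
    for b in range(B):
        idx = rng.choice(starts, size=k)
        sample = np.concatenate([x[s:s + l] for s in idx])
        means[b] = sample.mean()
    return means.var(ddof=1)

rng = np.random.default_rng(42)
e = rng.normal(size=500)
x = np.convolve(e, [1.0, 0.6], mode="valid")   # a short-memory MA(1) series
for l in (1, 5, 10, 20):
    print(l, mbb_var_of_mean(x, l))
```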

9.
We consider in this article the problem of numerically approximating the quantiles of a sample statistic for a given population, a problem of interest in many applications, such as bootstrap confidence intervals. The proposed Monte Carlo method can be routinely applied to handle complex problems that lack analytical results. Furthermore, the method yields estimates of the quantiles of a sample statistic of any sample size even though Monte Carlo simulations are needed for only two optimally selected sample sizes. An analysis of the Monte Carlo design is performed to obtain the optimal choices of these two sample sizes and the number of simulated samples required for each sample size. Theoretical results are presented for the bias and variance of the proposed numerical method. The results developed are illustrated via simulation studies for the classical problem of estimating a bivariate linear structural relationship. It is seen that the size of the simulated samples used in the Monte Carlo method does not have to be very large and the method provides a better approximation to quantiles than those based on asymptotic normal theory for skewed sampling distributions.
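The basic Monte Carlo ingredient can be sketched as follows: simulate the statistic at two design sample sizes, estimate the desired quantile at each, and map to another sample size. The n^{-1/2} interpolation used below is an illustrative assumption standing in for the paper's optimal design, not its actual construction.

```python
# A minimal sketch: quantiles from two design sample sizes, interpolated
# in n**-0.5 (an illustrative assumption, not the paper's design).
import numpy as np

rng = np.random.default_rng(7)

def mc_quantile(n, q=0.95, reps=20000):
    stats = rng.exponential(size=(reps, n)).mean(axis=1)
    return np.quantile(stats, q)

n1, n2, n_target = 20, 80, 40
q1, q2 = mc_quantile(n1), mc_quantile(n2)
t1, t2, tt = n1 ** -0.5, n2 ** -0.5, n_target ** -0.5
q_target = q1 + (q2 - q1) * (tt - t1) / (t2 - t1)
print(q_target, mc_quantile(n_target))       # interpolated vs direct MC
```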

10.
One of the most important steps in the design of a pharmaceutical clinical trial is the estimation of the sample size. For a superiority trial the sample size formula (to achieve a stated power) would be based on a given clinically meaningful difference and a value for the population variance. The formula is typically used as though this population variance is known, whereas in reality it is unknown and is replaced by an estimate with its associated uncertainty. The variance estimate would be derived from an earlier similarly designed study (or an overall estimate from several previous studies) and its precision would depend on its degrees of freedom. This paper provides a solution for the calculation of sample sizes that allows for the imprecision in the estimate of the sample variance, and shows how traditional formulae give sample sizes that are too small since they do not allow for this uncertainty, with the deficiency being more acute with fewer degrees of freedom. It is recommended that the methodology described in this paper should be used when the sample variance has less than 200 degrees of freedom.
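Why the traditional formula falls short can be seen in a few lines: treating a pilot variance s2 with d degrees of freedom as the true sigma^2 and simulating s2 ~ sigma^2 * chi2_d / d shows the average achieved power dropping below the nominal level, more so for small d. All settings in this sketch are illustrative.

```python
# A hedged sketch of the undershoot from plugging in an imprecise variance.
import numpy as np
from scipy.stats import norm

def n_per_group(var, delta, alpha=0.05, power=0.90):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return np.ceil(2 * var * z ** 2 / delta ** 2)

rng = np.random.default_rng(3)
sigma2, delta, d = 4.0, 1.0, 10            # few df in the pilot estimate
s2 = sigma2 * rng.chisquare(d, size=20000) / d
n = n_per_group(s2, delta)                 # traditional formula, s2 plugged in
z_power = np.sqrt(n * delta ** 2 / (2 * sigma2)) - norm.ppf(0.975)
print(norm.cdf(z_power).mean())            # average achieved power < 0.90
```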

11.
In this paper, Anbar's (1983) approach for estimating the difference between two binomial proportions is discussed with respect to a hypothesis testing problem. The approach results in two possible testing strategies. While the results of the tests are expected to agree for large sample sizes when the two proportions are equal, the tests are shown to perform quite differently in terms of their probabilities of a Type I error for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example, and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity, a modified test is proposed. The performance of this test and a conventional test based on the normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.
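A sketch of the kind of Type I error evaluation described is given below: the conventional pooled-z test for H0: p1 = p2 is applied to simulated data at selected small sample sizes and its empirical rejection rate is compared with the nominal 5% level. Settings are illustrative, and Anbar's tests themselves are not implemented.

```python
# A minimal sketch: Monte Carlo type I error of the pooled z-test.
import numpy as np
from scipy.stats import norm

def z_test_reject(x1, n1, x2, n2, alpha=0.05):
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (x1 / n1 - x2 / n2) / se       # se = 0 yields nan -> no rejection
    return np.abs(z) > norm.ppf(1 - alpha / 2)

rng = np.random.default_rng(11)
for n in (10, 25, 50):
    x1 = rng.binomial(n, 0.2, size=100_000)
    x2 = rng.binomial(n, 0.2, size=100_000)
    print(n, z_test_reject(x1, n, x2, n).mean())  # compare with 0.05
```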

12.
We consider the estimation of wildlife population density based on line transect data. A nonparametric kernel method is employed, without the usual assumption that the detection curve has a shoulder at distance zero, with the help of a special class of kernels called boundary kernels. Asymptotic distribution results are included. It is pointed out that the boundary kernel of Zhang and Karunamuni (1998) (see also Müller and Wang (1994)) performs better, in terms of asymptotic mean square error, than the boundary kernel of Müller (1991). But both of these kernels are clearly superior to the half-normal and one-sided Epanechnikov kernels when the shoulder condition fails to hold. In practice, however, for small to moderate sample sizes, caution should be exercised in using boundary kernels in that the density estimate might become negative. A Monte Carlo study is also presented, comparing the performance of four kernels applied to detection data, with and without the shoulder condition. Two boundary kernels for derivatives are also included for the point transect case.
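The boundary problem is easy to exhibit numerically. In the hedged sketch below, an ordinary kernel estimate is biased at distance zero because probability mass leaks across the boundary, and a simple reflection correction (used here as a stand-in, not the Zhang and Karunamuni boundary kernel) removes much of that bias; the half-normal detection distances are illustrative.

```python
# A hedged sketch of the boundary bias at distance zero; reflection is a
# stand-in for the boundary kernels discussed in the paper.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
d = np.abs(rng.normal(0.0, 1.0, size=300))    # detection distances >= 0

kde = gaussian_kde(d)
f0_naive = kde(0.0)[0]                        # biased: mass leaks below 0
kde_ref = gaussian_kde(np.concatenate([d, -d]))
f0_reflect = 2 * kde_ref(0.0)[0]              # reflect data about the boundary

true_f0 = np.sqrt(2 / np.pi)                  # half-normal density at 0
print(true_f0, f0_naive, f0_reflect)
```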

13.
Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
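A minimal sketch of the importance sampling ingredient follows: an E-step expectation E[h(b) | y] is approximated with i.i.d. draws from a heavy-tailed multivariate t proposal, self-normalized weights, and an effective sample size that signals Monte Carlo error. The toy target below is a bivariate normal, not the GLMM conditional itself; all names are illustrative.

```python
# A hedged sketch of multivariate t importance sampling for an E-step-like
# expectation; the target here is a toy, not the GLMM conditional.
import numpy as np
from scipy.stats import multivariate_t

def is_estimate(log_target, h, proposal, m=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    b = proposal.rvs(size=m, random_state=rng)
    logw = log_target(b) - proposal.logpdf(b)
    w = np.exp(logw - logw.max())
    w /= w.sum()                        # self-normalized weights
    est = (w * h(b)).sum()
    ess = 1.0 / (w ** 2).sum()          # effective sample size -> MC error
    return est, ess

cov = np.array([[1.0, 0.6], [0.6, 1.0]])
prec = np.linalg.inv(cov)
log_target = lambda b: -0.5 * np.einsum("ij,jk,ik->i", b, prec, b)
prop = multivariate_t(loc=[0, 0], shape=cov, df=4)   # heavy-tailed proposal
print(is_estimate(log_target, lambda b: b[:, 0] ** 2, prop))  # E[b1^2] ~ 1
```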

14.
A consistent test for difference in locations between two bivariate populations is proposed. The test is similar to the Mann-Whitney test and depends on the exceedances of slopes of the two samples, where the slope for each sample observation is computed by taking the ratio of the observed values. In terms of the slopes, the problem reduces to a univariate one. The power of the test has been compared with those of various existing tests by simulation. The proposed test statistic is compared with Mardia's (1967) test statistic, the Peters-Randles (1991) test statistic, Wilcoxon's rank sum test statistic and Hotelling's T2 test statistic using the Monte Carlo technique. It performs better than the other statistics compared for small differences in locations between the two populations when the underlying population is population 7 (a light-tailed population) and the sample sizes are 15 and 18, respectively. When the underlying population is population 6 (a heavy-tailed population) and the sample sizes are 15 and 18, it performs better than the other statistics compared, except Wilcoxon's rank sum test statistic, for small differences in location between the two populations. It performs better than Mardia's (1967) test statistic for large differences in location between the two populations when the underlying population is a bivariate normal mixture with probability p=0.5, population 6, a Pearson type II population or a Pearson type VII population, for sample sizes 15 and 18. Under a bivariate normal population it performs as well as Mardia's (1967) test statistic for small differences in locations between the two populations and sample sizes 15 and 18. For sample sizes 25 and 28, respectively, it performs better than Mardia's (1967) test statistic when the underlying population is population 6, a Pearson type II population or a Pearson type VII population.
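The construction described reduces to a few lines of code: each bivariate observation (x, y) is replaced by its slope y/x and a Mann-Whitney rank test is applied to the two slope samples. The sample sizes 15 and 18 follow the abstract; the generating distributions in this sketch are illustrative.

```python
# A minimal sketch of the slope-based reduction to a univariate rank test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(9)
a = rng.multivariate_normal([2.0, 2.0], [[1, 0.3], [0.3, 1]], size=15)
b = rng.multivariate_normal([2.0, 2.4], [[1, 0.3], [0.3, 1]], size=18)

slopes_a = a[:, 1] / a[:, 0]          # one slope per bivariate observation
slopes_b = b[:, 1] / b[:, 0]
print(mannwhitneyu(slopes_a, slopes_b, alternative="two-sided"))
```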

15.
In this paper, the Gompertz model is extended to incorporate time-dependent covariates in the presence of interval-censored, right-censored, left-censored and uncensored data. Its performance at different sample sizes, study periods and attendance probabilities is then studied. Following that, the model is compared to a fixed-covariate model. Finally, two confidence interval estimation methods, Wald and likelihood ratio (LR), are explored and conclusions are drawn based on the results of a coverage probability study. The results indicate that the bias, standard error and root mean square error values of the parameter estimates decrease with increases in the study period, attendance probability and sample size. Also, the LR method was found to work slightly better than the Wald method for the parameters of the model.

16.
The estimation or prediction of population characteristics based on sample information is the key issue in survey sampling. If the sample sizes in subpopulations (domains) are large enough, methods similar to those used for the whole population can be used to estimate or predict subpopulation characteristics as well. To estimate or predict characteristics of domains with small or even zero sample sizes, small area estimation methods "borrowing strength" from other subpopulations or time periods are widely used. We extend this problem and study methods of prediction of future population and subpopulation characteristics based on longitudinal data.

17.
Sample size determination for testing the hypothesis of equality of two proportions against an alternative, with specified type I and type II error probabilities, is considered for two finite populations. When the two finite populations involved are quite different in size, the equal-size assumption may not be appropriate. In this paper, we impose a balanced sampling condition to determine the necessary samples taken without replacement from the finite populations. It is found that our solution requires smaller samples than those based on binomial distributions. Furthermore, our solution agrees with that for sampling with replacement when the population size is large. Finally, three examples are given to show the application of the derived sample size formula.
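The finite-population effect can be sketched by starting from the usual infinite-population (binomial) two-proportion sample size and applying the finite population correction to each population separately; this simple correction is an illustrative stand-in for the paper's balanced sampling condition, not its exact solution.

```python
# A hedged sketch: binomial sample size plus the finite population
# correction, as a stand-in for the paper's without-replacement solution.
import numpy as np
from scipy.stats import norm

def n_binomial(p1, p2, alpha=0.05, beta=0.20):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def n_fpc(n0, N):
    return np.ceil(n0 / (1 + n0 / N))  # sampling without replacement from N

n0 = n_binomial(0.10, 0.20)
for N in (500, 2000, 10 ** 6):
    print(N, n_fpc(n0, N))             # approaches the binomial n as N grows
```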

18.
In this paper, a variables repetitive group sampling plan based on one-sided process capability indices is proposed to deal with lot sentencing for one-sided specifications. The parameters of the proposed plan are tabulated for some combinations of acceptance quality levels with commonly used producer's risk and consumer's risk. The efficiency of the proposed plan is compared with the Pearn and Wu [Critical acceptance values and sample sizes of a variables sampling plan for very low fraction of defectives. Omega – Int J Manag Sci. 2006;34(1):90–101] plan in terms of sample size and the power curve. An example is given to illustrate the proposed methodology.
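The repetitive-group mechanism itself is simple to express: lots are resampled until a clear accept or reject occurs, giving P(accept) = pa/(pa + pr) and ASN = n/(pa + pr), where pa and pr are the single-sample accept and reject probabilities. The attributes-style criteria (n, ca, cr) below are illustrative stand-ins for the paper's capability-index criteria.

```python
# A hedged sketch of repetitive group sampling OC and ASN; attributes-style
# criteria stand in for the capability-index criteria of the paper.
from scipy.stats import binom

def rgs_oc_asn(n, ca, cr, p):
    pa = binom.cdf(ca, n, p)               # accept: d <= ca
    pr = 1 - binom.cdf(cr, n, p)           # reject: d > cr; else resample
    return pa / (pa + pr), n / (pa + pr)   # OC value and average sample number

for p in (0.01, 0.03, 0.05):
    print(p, rgs_oc_asn(n=50, ca=0, cr=2, p=p))
```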

19.
In this study, we aimed to determine the accuracy of generalized estimating equations versus logistic regression at different correlation levels and sample sizes. To this end, the two methods were compared with sample sizes of 10, 25, 50 and 100 and correlation levels of 0.0, 0.3, 0.5 and 0.8. The results of this study showed that generalized estimating equations may be preferred over logistic regression when the sample size is greater than 25 and the correlation level is higher than 0.3 for data taken from studies with repeated measurements, whereas logistic regression may be better when autocorrelation is absent.
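The comparison can be reproduced in outline with statsmodels: a GEE with an exchangeable working correlation and an ordinary logistic regression are fitted to the same simulated repeated-measures binary data, and their estimates and standard errors compared. The data-generating settings are illustrative.

```python
# A hedged sketch of the GEE vs logistic regression comparison.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_subj, n_rep = 50, 4
subj = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)
u = np.repeat(rng.normal(scale=0.8, size=n_subj), n_rep)  # within-subject correlation
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x + u))))

X = sm.add_constant(x)
gee = sm.GEE(y, X, groups=subj, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
logit = sm.Logit(y, X).fit(disp=0)
print(gee.params, gee.bse)
print(logit.params, logit.bse)            # ignores the correlation
```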

20.
K correlated 2×2 tables with structural zero are commonly encountered in infectious disease studies. A hypothesis test for the risk difference in K independent 2×2 tables with structural zero is considered in this paper. A score statistic, a likelihood ratio statistic and a Wald-type statistic are proposed to test the hypothesis on the basis of stratified data and pooled data. Sample size formulae are derived for controlling a pre-specified power or a pre-determined confidence interval width. Our empirical results show that the score statistic and the likelihood ratio statistic behave better than the Wald-type statistic in terms of type I error rate and coverage probability, and that sample sizes based on the stratified test are smaller than those based on the pooled test for the same design. A real example is used to illustrate the proposed methodologies. Copyright © 2009 John Wiley & Sons, Ltd.

