Similar Articles
20 similar articles retrieved (search time: 453 ms)
1.
Until now, a variety of acceptance reliability sampling plans have been developed in the literature based on different life test plans. In most reliability sampling plans, the decision to accept or reject the corresponding lot is based on the lifetimes of the items observed on test, or on the number of failures observed during a pre-specified testing time. Frequently, however, the items are subject to degradation phenomena, and in these cases the observed degradation level of an item can be used as a decision statistic. In this paper, we develop a variables acceptance sampling plan based on information about the degradation process of the items, assuming that the degradation follows an inverse Gaussian process. It is shown that the developed sampling plan improves the reliability performance of the items conditional on acceptance in the test, and that the lifetimes of items after the reliability sampling test are stochastically larger than those before the test. A study comparing the proposed degradation-based sampling plan with a conventional sampling plan based on a life test is also performed. KEYWORDS: Variables sampling plan, degradation test, inverse Gaussian process, mixture distribution, stochastic ordering
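As a rough illustration of the kind of decision rule such a degradation-based plan uses, the sketch below simulates inverse-Gaussian-process degradation paths and applies a threshold-type accept/reject rule. The rate, shape, test time, and acceptance limit are hypothetical values for illustration, not the plan derived in the paper.

```python
# Minimal sketch (not the authors' procedure): simulate inverse-Gaussian-process
# degradation paths and apply a threshold-type accept/reject rule.
# All parameter values (rate, shape, test time, threshold) are hypothetical.
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(0)

def ig_increment(mean, shape, size, rng):
    """Draw IG(mean, shape) increments; scipy's invgauss(mu, scale) is IG(mu*scale, scale)."""
    return invgauss.rvs(mu=mean / shape, scale=shape, size=size, random_state=rng)

def simulate_paths(n_items, n_steps, dt, rate, shape_per_time, rng):
    # Stationary IG process: the increment over dt has mean rate*dt and shape shape_per_time*dt**2.
    inc = ig_increment(rate * dt, shape_per_time * dt**2, (n_items, n_steps), rng)
    return inc.cumsum(axis=1)

paths = simulate_paths(n_items=20, n_steps=50, dt=1.0, rate=0.10, shape_per_time=2.0, rng=rng)
degradation_at_test_time = paths[:, 9]          # degradation observed at the test time (t = 10)
threshold = 1.2                                 # hypothetical acceptance limit on mean degradation
accept = degradation_at_test_time.mean() < threshold
print(f"observed mean degradation = {degradation_at_test_time.mean():.3f}, accept = {accept}")
```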

2.
This paper presents reliability sampling plans for the Weibull distribution under Type II progressive censoring with random removals (PCR), where the number of units removed at each failure time follows a binomial distribution. To construct the sampling plans, the sample size n and the acceptance constant k are determined based on asymptotic distribution theory. The resulting sampling plans are tabulated for selected specifications under the proposed censoring scheme. Furthermore, a Monte Carlo simulation is conducted to validate the true probability of acceptance for the designed sampling plans.
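The sketch below illustrates how a progressively Type II censored Weibull sample with binomial random removals can be generated for such a Monte Carlo check. The sample size, number of observed failures, removal probability, and Weibull parameters are arbitrary choices, not the tabulated plans.

```python
# Minimal sketch: generate one progressively Type-II censored Weibull sample
# in which the removals at each failure are Binomial(n - m - previous removals, p).
import numpy as np

rng = np.random.default_rng(1)

def pcr_weibull_sample(n, m, p, shape, scale, rng):
    alive = list(scale * rng.weibull(shape, size=n))    # latent lifetimes of all n units
    failures, removals = [], []
    for i in range(m):
        t = min(alive)                                  # next observed failure
        alive.remove(t)
        failures.append(t)
        if i < m - 1:
            max_removable = len(alive) - (m - 1 - i)    # keep enough units for the remaining failures
            r = rng.binomial(max_removable, p)
            removals.append(r)
            for idx in sorted(rng.choice(len(alive), size=r, replace=False), reverse=True):
                alive.pop(idx)                          # withdraw r surviving units at random
        else:
            removals.append(len(alive))                 # all survivors removed at the last failure
    return np.array(failures), removals

fail_times, rem = pcr_weibull_sample(n=30, m=10, p=0.2, shape=2.0, scale=100.0, rng=rng)
print(fail_times.round(1), rem)
```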

3.
Testing the equality of two survival distributions can be difficult in a prevalent cohort study when non-random sampling of subjects is involved. Due to the biased sampling scheme, the independent censoring assumption is often violated. Although the biased inference caused by length-biased sampling has been widely recognized in the statistical, epidemiological and economics literature, there is no satisfactory solution for efficient two-sample testing. We propose an asymptotically most efficient nonparametric test that properly adjusts for length-biased sampling. The test statistic is derived from a full likelihood function and can be generalized from a two-sample test to a k-sample test. The asymptotic properties of the test statistic under the null hypothesis are derived using its asymptotically independent and identically distributed representation. We conduct extensive Monte Carlo simulations to evaluate the performance of the proposed test statistics and compare them with the conditional test and the standard logrank test for different biased sampling schemes and right-censoring mechanisms. For length-biased data, empirical studies demonstrate that the proposed test is substantially more powerful than existing methods. For general left-truncated data, the proposed test is robust, maintains accurate control of the type I error rate, and is also more powerful than existing methods when the truncation patterns and right-censoring patterns are the same between the groups. We illustrate the methods using two real data examples.

4.
The Cochran-Armitage test is the most frequently used test for trend among binomial proportions. This test can be performed based on the asymptotic normality of its test statistic or based on an exact null distribution. As an alternative, a recently introduced modification of the Baumgartner-Weiß-Schindler statistic, a novel nonparametric statistic, can be used. Simulation results indicate that the exact test based on this modification is preferable to the Cochran-Armitage test. This exact test is less conservative and more powerful than the exact Cochran-Armitage test. The power comparison to the asymptotic Cochran-Armitage test does not show a clear winner, but the difference in power is usually small. The exact test based on the modification is recommended here because, in contrast to the asymptotic Cochran-Armitage test, it guarantees a type I error rate less than or equal to the significance level. Moreover, an exact test is often more appropriate than an asymptotic test because randomization rather than random sampling is the norm, for example in biomedical research. The methods are illustrated with an example data set.
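For reference, the classical asymptotic Cochran-Armitage trend statistic can be computed directly from the group counts. The sketch below implements that baseline (not the exact test or the Baumgartner-Weiß-Schindler modification), with made-up counts.

```python
# Minimal sketch of the asymptotic Cochran-Armitage trend test. Example counts are invented.
import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    x = np.asarray(successes, float)
    n = np.asarray(totals, float)
    t = np.arange(len(x), dtype=float) if scores is None else np.asarray(scores, float)
    p_bar = x.sum() / n.sum()
    num = np.sum(t * (x - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t)**2 / n.sum())
    z = num / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))               # two-sided asymptotic p-value

z, p = cochran_armitage(successes=[2, 6, 11], totals=[20, 20, 20], scores=[0, 1, 2])
print(f"z = {z:.3f}, p = {p:.4f}")
```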

5.
In this paper we examine failure-censored sampling plans for the two-parameter exponential distribution based on m random samples, each of size n. The suggested procedure is based on exact results, and only the first failure time of each sample is needed. The values of the acceptability constant are also tabulated for selected values of the quality levels pα1 and pβ1 and the risks α and β. Further, a comparison of the proposed sampling plans with ordinary sampling plans using a sample of size mn is made. Compared to ordinary sampling plans, the proposed plan has the advantage of a shorter test time and a saving of resources.

6.
The importance of individual inputs of a computer model is sometimes assessed using indices that reflect the amount of output variation that can be attributed to random variation in each input. We review two such indices and consider input sampling plans that support estimation of one of them, the variance of conditional expectation or VCE (McKay, 1995. Los Alamos National Laboratory Report NUREG/CR-6311, LA-12915-MS). Sampling plans suggested by Sobol’, Saltelli, and McKay are examined and compared to a new sampling plan based on balanced incomplete block designs. The new design offers better sampling efficiency for the VCE than those of Sobol’ and Saltelli, and supports unbiased estimation of the index associated with each input.
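A brute-force Monte Carlo estimate of the VCE for one input of a toy model conveys what the index measures. The sketch below is only that baseline estimator, not the BIBD-based sampling plan proposed here, and the model and sample sizes are arbitrary.

```python
# Minimal sketch: Monte Carlo estimate of Var_x1( E[Y | x1] ) for a toy computer model.
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2, x3):
    return x1 + 2 * x2**2 + 0.5 * x1 * x3         # toy computer model (illustrative)

def vce_for_x1(n_outer=200, n_inner=200):
    x1 = rng.uniform(size=n_outer)
    cond_means = np.empty(n_outer)
    for i, v in enumerate(x1):
        # average the output over the other inputs with x1 held fixed
        x2 = rng.uniform(size=n_inner)
        x3 = rng.uniform(size=n_inner)
        cond_means[i] = model(v, x2, x3).mean()
    return cond_means.var(ddof=1)                  # variance of the conditional expectation

print(f"estimated VCE for x1: {vce_for_x1():.4f}")
```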

7.
We consider fixed size sampling plans for which the second order inclusion probabilities are zero for pairs of contiguous units and constant for pairs of non-contiguous units. A practical motivation for the use of such plans is pointed out, and a statistical condition is identified under which these plans are more efficient than the corresponding simple random sampling plans. Results on the existence and construction of these plans are obtained.

8.
This paper investigates the design of accelerated life test (ALT) plans under progressive Type II interval censoring with random removals. Units’ lifetimes are assumed to follow a Weibull distribution, and the number of random removals at each inspection is assumed to follow a binomial distribution. The optimal ALT plans, which minimize the asymptotic variance of an estimated quantile at the use condition, are determined. The expected duration of the test and the expected number of inspections at each stress level are calculated. A numerical study is conducted to investigate the properties of the derived ALT plans under different parameter values. For illustration purposes, a numerical example is also given.

9.
In this paper, we consider fixed size sampling plans for which the first order inclusion probabilities are identical for all units and the second order inclusion probabilities are constant for every pair of units. Statistical conditions are identified under which these plans are equivalent to the usual simple random sampling plan. These sampling plans are constructed to reduce the inclusion of undesirable units.

10.
This paper presents the results of a small-sample simulation study designed to evaluate the performance of a recently proposed test statistic for the analysis of correlated binary data. The new statistic is an adjusted Mantel-Haenszel test, which may be used in testing for association between a binary exposure and a binary outcome of interest across several fourfold tables when the data have been collected under a cluster sampling design. Although originally developed for the analysis of periodontal data, the proposed method may be applied to clustered binary data arising in a variety of settings, including longitudinal studies, family studies, and school-based research. The features of the simulation are intended to mimic those of a research study of periodontal health, in which a large number of observations is made on each of a relatively small number of patients. The simulation reveals that the adjusted test statistic performs well in finite samples, having empirical type I error rates close to nominal and empirical power similar to that of more complicated marginal regression methods. Software for computing the adjusted statistic is also provided.
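For context, the ordinary (unadjusted) Mantel-Haenszel statistic across several 2x2 tables can be computed as below. The cluster adjustment that is the subject of the paper is not reproduced, and the strata counts are invented.

```python
# Minimal sketch of the unadjusted Mantel-Haenszel chi-squared statistic across 2x2 tables,
# each given as [[a, b], [c, d]]; no continuity correction, no cluster adjustment.
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_chi2(tables):
    T = E = V = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        T += a
        E += (a + b) * (a + c) / n
        V += (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
    stat = (T - E) ** 2 / V
    return stat, chi2.sf(stat, df=1)

tables = [([12, 8], [5, 15]), ([20, 10], [14, 16]), ([7, 13], [4, 16])]   # invented counts
stat, p = mantel_haenszel_chi2(tables)
print(f"MH chi-square = {stat:.3f}, p = {p:.4f}")
```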

11.
This paper demonstrates the use of the maxima nomination sampling (MNS) technique in the design and evaluation of single AQL, LTPD, and EQL acceptance sampling plans for attributes. We examine the effect of sample size and acceptance number on the performance of the proposed MNS plans using the operating characteristic (OC) curve. Among other results, we show that MNS acceptance sampling plans with smaller sample size and larger acceptance number perform better than commonly used acceptance sampling plans for attributes based on the simple random sampling (SRS) technique. Indeed, MNS acceptance sampling plans result in OC curves which, compared to their SRS counterparts, are much closer to the ideal OC curve. A computer program is designed which can be used to specify the optimum MNS acceptance sampling plan and to show, visually, how the shape of the OC curve changes when parameters of the acceptance sampling plan vary. Theoretical results and numerical evaluations are given.
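The OC curve of a single attributes plan under SRS, Pa(p) = P(D ≤ c) with D ~ Binomial(n, p), is the baseline against which the MNS plans are compared. A minimal sketch with arbitrary n and c follows.

```python
# Minimal sketch: OC-curve values of an attributes plan (n, c) under simple random sampling.
from scipy.stats import binom

def prob_accept(n, c, p):
    return binom.cdf(c, n, p)                  # Pa(p) = P(D <= c), D ~ Binomial(n, p)

for n, c in [(50, 1), (80, 2)]:                # arbitrary plans for illustration
    print(f"n={n}, c={c}: Pa(0.01)={prob_accept(n, c, 0.01):.3f}, Pa(0.10)={prob_accept(n, c, 0.10):.3f}")
```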

12.
A semiparametric logistic regression model is proposed in which the nonparametric component is approximated with fixed-knot cubic B-splines. To assess the linearity of the nonparametric component, we construct a penalized likelihood ratio test statistic. When the number of knots is fixed, the null distribution of the test statistic is shown to be asymptotically the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. The smoothing parameter is chosen by setting the asymptotic null expectation of the test statistic equal to a specified value. Monte Carlo experiments are conducted to investigate the performance of the proposed test. Its practical use is illustrated with a real-life example.
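The model form, a logistic regression whose nonparametric component is approximated by fixed-knot cubic B-splines, can be illustrated with a spline basis term in a GLM formula, as sketched below on simulated data. This shows only the fit, not the penalized likelihood ratio test, and the knot count is arbitrary.

```python
# Minimal sketch: semiparametric logistic fit with a cubic B-spline term for one covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)                        # covariate entering linearly
x2 = rng.uniform(-2, 2, size=n)                # covariate modeled nonparametrically
eta = 0.8 * x1 + np.sin(1.5 * x2)              # true semiparametric linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
fit = smf.glm("y ~ x1 + bs(x2, df=6, degree=3)", data=df,
              family=sm.families.Binomial()).fit()
print(fit.summary().tables[1])
```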

13.
Given the random walk model, we show, for the traditional unrestricted regression used in testing stationarity, that the sampling distributions of certain statistics remain unchanged regardless of the random walk's initial value, drift, or error standard deviation. Using Monte Carlo simulations, we estimate, for different finite samples, the sampling distributions of these statistics. After smoothing the percentiles of the empirical sampling distributions, we obtain a new set of critical values for testing the existence of a random walk when each statistic is used on an individual basis. Combining the new sets of critical values, we finally suggest a general methodology for testing for a random walk model.
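The general recipe (simulate random walks, compute a statistic from the unrestricted regression, and take empirical percentiles as finite-sample critical values) is sketched below for a Dickey-Fuller-type t-ratio. The specific statistics and the smoothing step used in the paper are not reproduced.

```python
# Minimal sketch: Monte Carlo critical values for the t-ratio of b-1 in y_t = a + b*y_{t-1} + e_t
# under a driftless random walk; settings are illustrative only.
import numpy as np

rng = np.random.default_rng(4)

def t_stat_random_walk(T, rng):
    y = np.cumsum(rng.normal(size=T))              # driftless random walk starting at 0
    y_lag, y_cur = y[:-1], y[1:]
    X = np.column_stack([np.ones(T - 1), y_lag])   # unrestricted regression with intercept
    beta, _, _, _ = np.linalg.lstsq(X, y_cur, rcond=None)
    resid = y_cur - X @ beta
    s2 = resid @ resid / (T - 1 - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return (beta[1] - 1.0) / np.sqrt(cov[1, 1])    # t-ratio for the unit-root value b = 1

stats = np.array([t_stat_random_walk(T=100, rng=rng) for _ in range(5000)])
print("empirical 1%, 5%, 10% critical values:", np.percentile(stats, [1, 5, 10]).round(2))
```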

14.
We consider a Bayesian approach to the study of independence in a two-way contingency table which has been obtained from a two-stage cluster sampling design. If a procedure based on single-stage simple random sampling (rather than the appropriate cluster sampling) is used to test for independence, the p-value may be too small, resulting in a conclusion that the null hypothesis is false when it is, in fact, true. For many large complex surveys the Rao–Scott corrections to the standard chi-squared (or likelihood ratio) statistic provide appropriate inference. For smaller surveys, though, the Rao–Scott corrections may not be accurate, partly because the chi-squared test is inaccurate. In this paper, we use a hierarchical Bayesian model to convert the observed cluster samples to simple random samples. This provides surrogate samples which can be used to derive the distribution of the Bayes factor. We demonstrate the utility of our procedure using an example and also provide a simulation study which establishes our methodology as a viable alternative to the Rao–Scott approximations for relatively small two-stage cluster samples. We also show the additional insight gained by displaying the distribution of the Bayes factor rather than simply relying on a summary of the distribution.
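As a point of reference, the first-order Rao-Scott idea amounts to deflating the ordinary Pearson chi-squared statistic by a generalized design effect. The sketch below uses an assumed design effect and invented counts; it does not implement the hierarchical Bayesian surrogate-sample approach proposed here.

```python
# Minimal sketch of a first-order Rao-Scott style adjustment: divide the Pearson chi-squared
# statistic by an (assumed) generalized design effect before referring it to chi-squared(dof).
import numpy as np
from scipy.stats import chi2, chi2_contingency

table = np.array([[30, 20],
                  [25, 35]])                    # invented two-way cluster-sample counts
chi2_stat, _, dof, _ = chi2_contingency(table, correction=False)

deff = 1.8                                      # assumed generalized design effect (hypothetical)
chi2_rs = chi2_stat / deff
print(f"naive X2 = {chi2_stat:.2f}, adjusted X2 = {chi2_rs:.2f}, "
      f"adjusted p = {chi2.sf(chi2_rs, dof):.4f}")
```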

15.
This paper presents a new random weighting-based adaptive importance resampling method to estimate the sampling distribution of a statistic. A random weighting-based cross-entropy procedure is developed to iteratively calculate the optimal resampling probability weights by minimizing the Kullback-Leibler distance between the optimal importance resampling distribution and a family of parameterized distributions. Subsequently, the random weighting estimation of the sampling distribution is constructed from the obtained optimal importance resampling distribution. The convergence of the proposed method is rigorously proved. Simulation and experimental results demonstrate that the proposed method can effectively estimate the sampling distribution of a statistic.
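Plain random weighting (Dirichlet weights on the observed data) is the basic ingredient behind such methods. The sketch below approximates the sampling distribution of a sample mean this way, without the adaptive importance-resampling and cross-entropy steps that the paper adds; the data are simulated.

```python
# Minimal sketch: random-weighting approximation to the sampling distribution of the mean.
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=2.0, size=50)          # simulated "observed" sample

B = 2000
weighted_means = np.empty(B)
for b in range(B):
    w = rng.dirichlet(np.ones(len(x)))           # random weights summing to one
    weighted_means[b] = np.sum(w * x)            # randomly weighted version of the mean

print("approx. 95% interval for the mean:", np.percentile(weighted_means, [2.5, 97.5]).round(3))
```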

16.
We consider a likelihood ratio test of independence for large two-way contingency tables having both structural (non-random) and sampling (random) zeros in many cells. The solution of this problem is not available using standard likelihood ratio tests. One way to bypass this problem is to remove the structural zeros from the table and implement a test on the remaining cells that incorporates the randomness of the sampling zeros; the resulting test is a test of quasi-independence of the two categorical variables. This test is based only on the positive counts in the contingency table and is valid when there is at least one sampling (random) zero. The proposed (likelihood ratio) test is an alternative to the commonly used ad hoc procedures of converting the zero cells to positive ones by adding a small constant. One practical advantage of our procedure is that there is no need to know whether a zero cell is a structural zero or a sampling zero. We model the positive counts using a truncated multinomial distribution. In fact, we have two truncated multinomial distributions: one for the null hypothesis of independence and the other for the unrestricted parameter space. We use Monte Carlo methods to obtain the maximum likelihood estimators of the parameters and also the p-value of our proposed test. To obtain the sampling distribution of the likelihood ratio test statistic, we use bootstrap methods. We discuss many examples, and also empirically compare the power function of the likelihood ratio test relative to those of some well-known test statistics.

17.
This paper considers Bayesian sampling plans for the exponential distribution with random censoring. The efficient Bayesian sampling plan for a general loss function is derived. This sampling plan possesses the property that it may make decisions prior to the end of the life test experiment, and its decision function is the same as the Bayes decision function, which makes decisions based on data collected at the end of the life test experiment. Compared with the optimal Bayesian sampling plan of Chen et al. (2004), the efficient Bayesian sampling plan has a smaller Bayes risk owing to the shorter duration of the life test experiment. Computations of the efficient Bayes risks for the conjugate prior are given. Numerical comparisons between the proposed efficient Bayesian sampling plan and the optimal Bayesian sampling plan of Chen et al. (2004) under two special decision losses, including the quadratic decision loss, are provided. Numerical results also demonstrate that the performance of the proposed efficient sampling plan is superior to that of the optimal sampling plan of Chen et al. (2004).

18.
This paper investigates the design of life test plans under progressive interval censoring. Based on the likelihood ratio, the proposed life test plans are established so that the required producer and consumer risks can be satisfied simultaneously. The advantage of the proposed method is that the developed sampling procedure depends only on the likelihood ratio, so the method can be applied to any lifetime distribution with a single unknown parameter. A numerical study is conducted, and some of the sampling plans for the Weibull lifetime distribution with different shape parameters are tabulated for illustration. Moreover, the influence of the removal schemes on the proposed sampling plans is discussed.

19.
The nonparametric component in a partially linear model is approximated via cubic B-splines with a second-order difference penalty on the adjacent B-spline coefficients to avoid undersmoothing. A Wald-type spline-based test statistic is constructed for the null hypothesis of no effect of a continuous covariate. When the number of knots is fixed, the limiting null distribution of the test statistic is the distribution of a linear combination of independent chi-squared random variables, each with one degree of freedom. A real-life dataset is provided to illustrate the practical use of the test statistic.
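The penalized-spline ingredients described here (a cubic B-spline basis plus a second-order difference penalty on adjacent coefficients) can be assembled in a few lines, as sketched below for a toy covariate. This is not the Wald-type test itself, and the basis size and penalty weight are arbitrary.

```python
# Minimal sketch: cubic B-spline basis with a second-order difference penalty,
# fitted by penalized least squares to simulated data (nonparametric part only).
import numpy as np
from patsy import dmatrix

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 1, size=200))

B = np.asarray(dmatrix("bs(x, df=10, degree=3) - 1", {"x": x}))   # cubic B-spline basis (10 columns)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)                      # second-order difference operator
P = D.T @ D                                                       # penalty on adjacent coefficients

lam = 1.0                                                         # smoothing parameter (illustrative)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
coef = np.linalg.solve(B.T @ B + lam * P, B.T @ y)                # penalized least-squares fit
print("fitted spline coefficients:", coef.round(2))
```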

20.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of products follows a Weibull distribution. Two levels of constant stress higher than the use condition are used. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
