Similar Literature (20 results found)
1.
Process capability analysis is applied to monitor process quality, and process capability can be quantified by a process capability index. These indices have wide application in quality control methods and acceptance sampling plans. In this paper, we introduce a double-sampling plan based on a process capability index. In this type of scheme, under a decision rule with specified rejection and acceptance numbers, a second sample is selected and the decision to reject or accept is made based on the information obtained from the two samples. The purpose of this scheme is to reduce the average sample number, and thereby the time and cost of sampling. A comparison study has been conducted to evaluate the performance of the proposed method against classical single sampling plans.
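A minimal sketch of such a two-stage rule, assuming the estimated C_pk as the capability index; the critical values c_r < c_a and the pooled-stage cutoff are hypothetical placeholders, not the constants derived in the paper:

```python
import numpy as np

def cpk_hat(x, lsl, usl):
    """Estimated C_pk from a sample, given lower/upper specification limits."""
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def double_sampling_decision(x1, draw_second, lsl, usl, c_r, c_a):
    """Two-stage rule: accept/reject on the first sample when the estimated
    index is decisive; otherwise pool a second sample and decide once more.
    c_r < c_a and the pooled-stage cutoff are illustrative constants."""
    c1 = cpk_hat(x1, lsl, usl)
    if c1 >= c_a:
        return "accept", len(x1)
    if c1 < c_r:
        return "reject", len(x1)
    x2 = draw_second()                      # second sample only when needed
    pooled = np.concatenate([x1, x2])
    c2 = cpk_hat(pooled, lsl, usl)
    return ("accept" if c2 >= (c_r + c_a) / 2 else "reject"), len(pooled)

rng = np.random.default_rng(0)
print(double_sampling_decision(rng.normal(10, 1, 50),
                               lambda: rng.normal(10, 1, 50), 6, 14, 1.0, 1.5))
```

Because the second sample is drawn only when the first is inconclusive, the average sample number sits between the two single-plan sizes, which is the cost saving the abstract describes.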

2.
Given realizations of two Poisson processes with unknown intensities λ(·) and ξ(·) observed over the interval (t1, t2), we suppose that it is desired to distinguish between H0: ξ(·)/λ(·) is constant on (t1, t2) versus H+: ξ(·)/λ(·) increases on (t1, t2). We propose a decision rule which uses the percentage points of the Mann-Whitney U-distribution. We show that the decision rule is unbiased and that the set of alternatives in H+ can be weakly ordered; specifically, if ξ(·)/λ(·), β(·)/λ(·) and ξ(·)/β(·) are increasing on (t1, t2), then P{H0 is rejected | ξ(·)} ≥ P{H0 is rejected | β(·)} ≥ P{H0 is rejected | H0}.
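A hedged illustration of the underlying idea: conditional on the event counts, the event times of a Poisson process are i.i.d. with density proportional to its intensity, so a one-sided Mann-Whitney U comparison of the two sets of event times can detect an increasing intensity ratio. The snippet uses scipy's generic test, not the paper's exact percentage-point decision rule, and simulates an intensity that increases linearly in t:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
t1, t2 = 0.0, 10.0
times_a = rng.uniform(t1, t2, size=40)         # constant intensity on (t1, t2)
times_b = t2 * np.sqrt(rng.uniform(size=40))   # intensity proportional to t

# One-sided test: under H+ process B's event times drift toward t2 and are
# stochastically larger than process A's.
u_stat, p_val = mannwhitneyu(times_b, times_a, alternative="greater")
print(f"U = {u_stat:.0f}, p = {p_val:.4f}")
```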

3.
On-line process control consists of inspecting a single item for every m (integer, m ≥ 2) produced items. Based on the results of the inspection, it is decided whether the process is in control (the fraction of conforming items is p1; State I) or out of control (the fraction of conforming items is p2 < p1; State II). If the inspected item is nonconforming, the process is declared out of control and production is stopped for an adjustment; otherwise, production continues. As most designs of on-line process control assume long-run production, this study can be viewed as an extension because it is concerned with short-run production, and the decision regarding the process is subject to misclassification errors. The probabilistic model of the control system employs properties of an ergodic Markov chain to obtain an expression for the average cost of the system per unit produced, which can be minimised as a function of the sampling interval m. The procedure is illustrated by a numerical example.
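The trade-off being minimised can be sketched with a stylized cost function (illustrative only, not the paper's ergodic-chain expression): inspection cost is amortized over the m items between inspections, while the expected cost of undetected nonconforming items grows with m because a shift to State II goes unnoticed longer.

```python
import numpy as np

def avg_cost(m, c_insp=1.0, c_def=5.0, q_out=0.02):
    """Stylized cost per produced item as a function of the sampling
    interval m; all parameter values are assumptions for illustration."""
    return c_insp / m + c_def * q_out * (m + 1) / 2

m_grid = np.arange(2, 101)
costs = avg_cost(m_grid.astype(float))
print("cost-minimizing m:", m_grid[costs.argmin()])
```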

4.
ABSTRACT

Background: Instrumental variables (IVs) have become much easier to find in the "Big Data era", which has increased the number of applications of the two-stage least squares (TSLS) model. With the increased availability of IVs, the possibility that these IVs are weak has also increased. Prior work has suggested a 'rule of thumb' that IVs with a first-stage F statistic of at least ten will avoid a relative bias in point estimates greater than 10%. We investigated whether this threshold also guarantees a low false rejection rate for the null hypothesis test in TSLS applications with many IVs.

Objective: To test how the ‘rule of thumb’ for weak instruments performs in predicting low false rejection rates in the TSLS model when the number of IVs is large.

Method: We used a Monte Carlo approach to create 28 original data sets for different models, with the number of IVs varying from 3 to 30. For each model, we generated 2000 observations per iteration and conducted 50,000 iterations to reach convergence in rejection rates. The true parameter value was set to 0, and the probability of rejecting this null hypothesis was recorded for each model as a measure of the false rejection rate. The relationship between the endogenous variable and the IVs was carefully adjusted so that the first-stage F statistic equaled ten, thus simulating the 'rule of thumb.'

Results: We found that the false rejection rates (type I errors) increased as the number of IVs in the TSLS model increased while the first-stage F statistic was held equal to 10. The false rejection rate exceeds 10% when the TSLS model has 24 IVs and exceeds 15% when it has 30 IVs.

Conclusion: When more instrumental variables were applied in the model, the 'rule of thumb' was no longer an efficient guarantee of good performance in hypothesis testing. A more restrictive margin for the F statistic is recommended to replace the 'rule of thumb,' especially when the number of instrumental variables is large.
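A condensed sketch of one such Monte Carlo iteration, assuming equal instrument strength gamma across the k IVs; gamma is a placeholder (the paper tunes it so the first-stage F equals 10), and far fewer iterations are run than the paper's 50,000:

```python
import numpy as np

rng = np.random.default_rng(1)

def tsls_false_reject(n=2000, k=30, gamma=0.05, rho=0.5):
    """One iteration: TSLS with k instruments when the true effect is 0.
    Returns True when H0: beta = 0 is falsely rejected at the 5% level."""
    z = rng.standard_normal((n, k))
    u = rng.standard_normal(n)                        # structural error
    v = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    x = z @ np.full(k, gamma) + v                     # endogenous regressor
    y = u                                             # true beta is zero
    x_hat = z @ np.linalg.lstsq(z, x, rcond=None)[0]  # first-stage fit
    beta = (x_hat @ y) / (x_hat @ x)                  # 2SLS estimate
    resid = y - beta * x
    se = np.sqrt((resid @ resid) / (n - 1) / (x_hat @ x_hat))
    return abs(beta / se) > 1.96

# a small number of iterations, just to show the pattern
print(np.mean([tsls_false_reject() for _ in range(500)]))
```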

5.
Generalized Lorenz dominance can be used to take account of differences in mean income as well as income inequality when two income distributions have unequal means. Asymptotically distribution-free and consistent tests are proposed for comparing two generalized Lorenz curves over the whole interval [p1, p2], where 0 < p1 < p2 < 1. The size and power of the tests are derived.

6.
Sequential estimation of parameters in a continuous-time Markov branching process with immigration, with split rate λ1, immigration rate λ2, offspring distribution {p1j, j ≥ 0} and immigration distribution {p2j, j ≥ 1}, is considered. A sequential version of the Cramér-Rao type information inequality is derived, which gives a lower bound on the variances of unbiased estimators for any function of these parameters. Attaining the lower bound depends on whether the sampling plan or stopping rule S, the estimator f, and the parametric function g = E(f) are efficient. All efficient triples (S, f, g) are characterized; it is shown that for i = 1, 2, only linear combinations of the λi pij's, or their ratios, are efficiently estimable. Applications to a Yule process, a linear birth and death process with immigration, and an M/M/∞ queue are also considered.

7.
Vannman has earlier studied a class of capability indices, containing the indices Cp, Cpk, Cpm and Cpmk, when the tolerances are symmetric. We study the properties of this class when the tolerances are asymmetric and suggest a new, enlarged class of indices. Under the assumption of normality, an explicit form of the distribution of the estimated indices in the new class is provided. Numerical investigations are made to explore the behavior of the estimators of the indices for different values of the parameters. Based on the estimator, a decision rule that can be used to determine whether the process can be considered capable is provided, and suitable criteria for choosing an index from the family are suggested.
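For reference, the symmetric-tolerance family this abstract builds on is commonly written C_p(u, v) = (d − u|μ − M|) / (3·sqrt(σ² + v(μ − T)²)), with d the half specification width, M its midpoint and T the target; a plug-in estimator sketch of that baseline family (the paper's asymmetric extension is not reproduced here):

```python
import numpy as np

def c_p_uv(x, lsl, usl, target, u, v):
    """Plug-in estimate of Vannman's C_p(u, v) for symmetric tolerances:
    (u, v) = (0, 0) gives C_p, (1, 0) C_pk, (0, 1) C_pm, (1, 1) C_pmk."""
    d = (usl - lsl) / 2              # half the specification width
    mid = (usl + lsl) / 2            # midpoint of the specification interval
    mu, s2 = x.mean(), x.var(ddof=1)
    return (d - u * abs(mu - mid)) / (3 * np.sqrt(s2 + v * (mu - target) ** 2))

x = np.random.default_rng(0).normal(10.2, 1.0, 200)
for u, v in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print((u, v), round(c_p_uv(x, 6, 14, 10, u, v), 3))
```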

8.
Abstract

Repeated measurement designs are widely used in medicine, pharmacology, animal sciences and psychology. If there is a restriction on the total number of treatments some experimental units can receive, or on the total length of time some experimental units can remain in the trial, then repeated measurement designs with unequal period sizes should be used. In this article, some infinite series are developed to generate the minimal balanced repeated measurement designs in periods of three different sizes p1, p2 and p3, where 2 ≤ p3 < p2 ≤ 10 and p2 < p1.

9.
In this paper, the design of reliability sampling plans for the Pareto lifetime model under progressive Type-II right censoring is considered. Sampling plans are derived using the decision-theoretic approach with a suitable loss or cost function that consists of sampling cost, rejection cost, and acceptance cost. The decision rule is based on the estimated reliability function. Plans are constructed within the Bayesian context using the natural conjugate prior. Simulations for evaluating the Bayes risk are carried out, and the optimal sampling plans are reported for various sample sizes, observed numbers of failures, and removal probabilities.

10.
Let X1, …, X7 be i.i.d. random variables with a common continuous distribution F. Two parameters, μ(F) = P(X1 < X5 and X1 + X4 < X2 + X3) and λ(F) = P(X1 + X4 < X2 + X3 and X1 + X7 < X5 + X6), which appear in the moments of some rank statistics, have been studied by several authors. It is shown that the existing lower bound 3/10 ≤ μ(F) can be improved to 3/10 < μ(F), and that no further improvement is possible. It is also shown that the existing upper bounds μ(F) ≤ (√2 + 6)/24 ≈ 0.30893 and λ(F) ≤ 7/24 ≈ 0.29167 can be improved to [14 + √(2/3)]/48 ≈ 0.30868 and {7 − [1 − √(2/3)]²/4}/24 ≈ 0.29132, respectively.

11.
The statistical inference drawn from the difference between two independent Poisson parameters is often discussed in the medical literature. However, such discussions are usually based on the frequentist viewpoint rather than the Bayesian viewpoint. Here, we propose an index θ = P(λ1,post < λ2,post), where λ1,post and λ2,post denote the Poisson parameters under their posterior densities. We provide an exact and an approximate expression for calculating θ using the conjugate gamma prior and compare the probabilities obtained using the approximate and the exact expressions. Moreover, we show a relation between θ and the p-value. We also highlight the significance of θ by applying it to the results of actual clinical trials. Our findings suggest that θ may provide useful information in a clinical trial.
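Under the conjugate gamma prior, each posterior is Gamma(a + x_i, b + t_i), so θ is easy to approximate by simulation; a minimal Monte Carlo sketch, where the hyperparameters a and b are assumed values and the paper's exact and approximate closed-form expressions are not reproduced:

```python
import numpy as np

def theta_mc(x1, t1, x2, t2, a=0.5, b=1e-6, n=200_000, seed=0):
    """Monte Carlo approximation of theta = P(lambda1,post < lambda2,post)
    under independent conjugate Gamma(a + x_i, b + t_i) posteriors."""
    rng = np.random.default_rng(seed)
    lam1 = rng.gamma(a + x1, 1.0 / (b + t1), size=n)   # scale = 1 / rate
    lam2 = rng.gamma(a + x2, 1.0 / (b + t2), size=n)
    return float(np.mean(lam1 < lam2))

# e.g. 12 events over 100 person-years vs 25 events over 110 person-years
print(theta_mc(12, 100.0, 25, 110.0))
```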

12.
Locally best invariant tests for the null hypothesis of I(p) against the alternative hypothesis of I(q), p < q, are developed for models with independent normal errors. The tests are extended semiparametrically to models with autocorrelated errors. The method is illustrated by two real data sets in terms of double unit roots. The proposed tests can be used for determining the integration orders of nonstationary time series.

13.
One-sided two-stage sign tests that allow for both first-stage acceptance and first-stage rejection of the null hypothesis are presented. The decision constants are selected so that the power curve of the two-stage test matches the power curve of the usual sign test at three specified points. These tests are compared with the two-stage sign tests recently presented by Colton and McPherson (1976), which allow only first-stage rejection of the null hypothesis.

14.
This article considers the problem of choosing between two treatments that have binary outcomes with unknown success probabilities p1 and p2. The choice is based upon the information provided by two observations X1 ~ B(n1, p1) and X2 ~ B(n2, p2) from independent binomial distributions. Standard approaches to this problem utilize basic statistical inference methodologies such as hypothesis tests and confidence intervals for the difference p1 − p2 of the success probabilities. However, in this article the analysis of win-probabilities is considered. If X*1 represents a potential future observation from Treatment 1 while X*2 represents a potential future observation from Treatment 2, win-probabilities are defined in terms of comparisons of X*1 and X*2. These win-probabilities provide a direct assessment of the relative advantages and disadvantages of choosing either treatment for one future application, and their interpretation can be combined with other factors such as costs, side-effects, and the availabilities of the two treatments. In this article, it is shown how confidence intervals for the win-probabilities can be constructed, and examples of their use are provided. Computer code for the implementation of this new methodology is available from the authors.
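A plug-in sketch of one natural win-probability, P(X*1 > X*2), for future observations of assumed sizes m1 and m2; the paper's contribution is confidence intervals for such quantities, which this snippet does not construct:

```python
import numpy as np
from scipy.stats import binom

def win_probability(p1, p2, m1, m2):
    """P(X*1 > X*2) with X*1 ~ B(m1, p1) and X*2 ~ B(m2, p2), computed by
    summing P(X*1 = k) * P(X*2 < k) over k = 0, ..., m1."""
    k = np.arange(m1 + 1)
    pmf1 = binom.pmf(k, m1, p1)
    cdf2 = binom.cdf(k - 1, m2, p2)   # P(X*2 < k); cdf(-1) is 0
    return float(pmf1 @ cdf2)

print(win_probability(0.6, 0.5, 20, 20))
```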

15.
The Schilling and Dodge (1969) formulation of mixed dependent acceptance sampling plans has a number of favorable properties, so we build on their results and extend them. One important weakness of their procedure is that it is not safeguarded against departure of the parent distribution from normality when a batch is accepted on the basis of the first sample by 'average testing'. Under the setup of their plans, rejection occurs only after re-sampling, and then on a 'quantity-based attribute-testing' basis, whereas acceptance can occur on the average-testing basis from the first sample alone. As a consequence of the variables plan's reliance on the normality assumption (especially for the σ-plans) and its lack of distributional robustness, the acceptance decision may not be very solidly based. With the developments suggested in this paper, we propose to modify the plan in a way that strengthens this weak area.

16.
We deal with single sampling by variables with two-way protection in the case of normally distributed characteristics with unknown variance. Given p1 (AQL), p2 (LQ) and α, β (risks of errors of the first and the second kind), there are two well-known methods of determining the corresponding sampling plans. Both methods are based on an approximation of the OC curve; therefore these plans are only approximations, and the true risks α and β are not known exactly. In Section II we present a new sampling scheme based on an estimator p̂ for the percent defective p. We give an exact formula for the OC curve and are thus able to determine these plans exactly, without any approximations.

17.
Let {X, Xn; n ≥ 1} be a sequence of real-valued i.i.d. random variables, 0 < r < 2 and p > 0. Let D = {A = (ank; 1 ≤ k ≤ n, n ≥ 1): ank ∈ R and sup_{n,k} |ank| < ∞}. Set Sn(A) = ∑_{k=1}^{n} ank Xk for A ∈ D and n ≥ 1. This paper is devoted to determining conditions whereby E{sup_{n≥1} |Sn(A)|/n^{1/r}}^p < ∞ or E{sup_{n≥2} |Sn(A)|/(2n log n)^{1/2}}^p < ∞ for every A ∈ D. This generalizes some earlier results, including those of Burkholder (1962), Choi and Sung (1987), Davis (1971), Gut (1979), Klass (1974), Siegmund (1969) and Teicher (1971).

18.
ABSTRACT

When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussion, both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, which is the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and other model assumptions. Although this is conceptually different from the probability of the null hypothesis being true given the sample, p-values nonetheless can provide evidential information toward making an inference about a parameter. Applying a 10,000-case simulation described in this article, the authors found that the p-values' inferential signals to either reject or not reject a null hypothesis about the mean (α = 0.05) were consistent with the parameter's true location in the sampled-from population for almost 70% of the cases. Success increases if a hybrid decision criterion, minimum effect size plus p-value (MESP), is used: rejecting the null also requires the difference of the observed statistic from the exact null to be meaningfully large or practically significant, in the researcher's judgment and experience. The simulation compares the performance of several methods, from p-value- and/or effect-size-based to confidence-interval-based, under various conditions of the true location of the mean, test power, and the comparative sizes of the meaningful distance and the population variability. For any inference procedure that outputs a binary indicator, like flagging whether a p-value is significant, the output of one single experiment is not sufficient evidence for a definitive conclusion. Yet, if a tool like MESP generates a relatively reliable signal and is used knowledgeably as part of a research process, it can provide useful information.
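A minimal sketch of the MESP rule as described above, assuming a one-sample t-test; the function name and the min_effect threshold are illustrative, the latter standing in for the researcher's judgment of a practically meaningful distance:

```python
import numpy as np
from scipy.stats import ttest_1samp

def mesp_reject(x, mu0, min_effect, alpha=0.05):
    """Hybrid MESP rule: reject H0: mu = mu0 only if the p-value is below
    alpha AND the observed difference from mu0 is at least min_effect."""
    t_stat, p_value = ttest_1samp(x, mu0)
    return (p_value < alpha) and (abs(np.mean(x) - mu0) >= min_effect)

# With a large n, a tiny true shift can be statistically significant yet
# fall below the meaningful threshold, so MESP does not reject.
x = np.random.default_rng(0).normal(0.1, 1.0, 500)
print(mesp_reject(x, mu0=0.0, min_effect=0.2))
```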

19.
Various criteria have been proposed for determining the reliability of noncompartmental pharmacokinetic estimates of the terminal disposition phase half-life (t1/2) and the extrapolated area under the curve (AUCextrap). This simulation study assessed the performance of two frequently used reportability rules: the terminal disposition phase regression adjusted-r² classification rule and the regression data point time span classification rule. Using simulated data, these rules were assessed in relation to the magnitude of the variability in the terminal disposition phase slope, the length of the terminal disposition phase captured in the concentration-time profile (data span), the number of data points present in the terminal disposition phase, and the type and level of variability in concentration measurement. The accuracy of estimating t1/2 was satisfactory for data spans of 1.5 and longer, given low measurement variability; and for spans of 2.5 and longer, given high measurement variability. Satisfactory accuracy in estimating AUCextrap was only achieved with low measurement variability and spans of 2.5 and longer. Neither of the classification rules improved the identification of accurate t1/2 and AUCextrap estimates. Based on the findings of this study, a strategy is proposed for determining the reportability of estimates of t1/2 and area under the curve extrapolated to infinity.
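For context, the t1/2 estimate that these reportability rules sit on top of comes from a log-linear fit of the terminal-phase concentrations; a minimal sketch with made-up sampling times and concentrations:

```python
import numpy as np

def terminal_half_life(t, conc):
    """Noncompartmental t1/2: fit log(conc) = a - lambda_z * t over the
    terminal phase by least squares and return ln(2) / lambda_z."""
    slope, _ = np.polyfit(t, np.log(conc), 1)
    return np.log(2) / -slope

t = np.array([8.0, 12.0, 24.0, 48.0])     # h, terminal-phase sampling times
conc = np.array([3.2, 2.4, 1.1, 0.26])    # assumed concentrations
print(f"t1/2 ~ {terminal_half_life(t, conc):.1f} h")
```

The adjusted-r² of this regression and the span of t relative to the resulting t1/2 are exactly the quantities the two classification rules threshold.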

20.
Two independent samples, from a control with N(μ1, σ²) and a treatment with pN(μ1, σ²) + (1 − p)N(μ2, σ²), are considered. A locally most powerful invariant test for testing H0: μ1 = μ2 against H1: μ2 > μ1, where σ² > 0 and 0 < p < 1 are unknown, is obtained. The robustness of the test statistic along the lines of Kariya and Sinha (Robustness of Statistical Tests (1989), Academic Press, New York) is also studied.
