Similar articles
20 similar articles found (search time: 78 ms)
1.
In this article, we construct an improved procedure for estimating the process capability index Cpmk. We propose a new Cpmk lower-bound approach based on the GCI concept and compare it with other existing methods. Based on the comparison results, we conclude with a recommendation and construct a step-by-step procedure for the recommended approach to estimate the actual process capability Cpmk for various sample sizes. The lower bound attained by our recommended approach indeed improves on existing lower-bound methods. We also investigate a real-world application to illustrate how the recommended approach can be applied to actual manufacturing processes.
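The index discussed above has the closed form Cpmk = min(USL − μ, μ − LSL) / (3·√(σ² + (μ − T)²)). As a rough illustration, here is a plug-in estimator together with a naive percentile-bootstrap lower bound; this is not the article's GCI-based bound, and the specification limits and target used below are invented:

```python
import math
import random

def cpmk_hat(data, lsl, usl, target):
    """Plug-in estimate of Cpmk = min(USL - mu, mu - LSL) / (3*sqrt(sigma^2 + (mu - T)^2))."""
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / n  # ML variance estimate
    return min(usl - xbar, xbar - lsl) / (3.0 * math.sqrt(s2 + (xbar - target) ** 2))

def cpmk_lower_bound(data, lsl, usl, target, alpha=0.05, b=2000, seed=1):
    """Naive percentile-bootstrap 100(1 - alpha)% lower bound (illustration only;
    the article's recommended GCI-based bound is a different construction)."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(cpmk_hat([rng.choice(data) for _ in range(n)], lsl, usl, target)
                   for _ in range(b))
    return boots[int(alpha * b)]
```

For a process centered at the target with σ = 1 and specification limits at target ± 6σ, the point estimate is close to 2.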

2.

In this article, we introduce a new distribution-free Shewhart-type control chart implementing a modified Wilcoxon-type rank-sum statistic based on progressively Type-II censored reference data. The proposed chart also serves as a tool for monitoring incomplete data, because the applied censoring scheme protects experimental units at an early stage of the testing procedure. The setup of the new nonparametric control chart is presented in detail, and its operating characteristic function is studied. Explicit formulae for evaluating the Alarm Rate and Average Run Length for both in-control and out-of-control situations are established. A numerical study depicts the performance and robustness of the proposed control chart. For illustration purposes, a practical example is also discussed.
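For background, the uncensored Wilcoxon rank-sum statistic on which the chart's monitoring statistic is modeled can be sketched as below (midranks for ties); the modified, progressively Type-II censored version of the article is not reproduced:

```python
def rank_sum(reference, test_sample):
    """Plain Wilcoxon rank-sum statistic: sum of the (mid)ranks of test_sample
    within the pooled sample."""
    pooled = sorted(reference + test_sample)

    def midrank(v):
        below = sum(1 for x in pooled if x < v)
        equal = sum(1 for x in pooled if x == v)
        return below + (equal + 1) / 2

    return sum(midrank(v) for v in test_sample)

def null_mean_var(n_ref, m_test):
    """Null mean and variance of the rank sum (no ties): m(N+1)/2 and nm(N+1)/12."""
    total = n_ref + m_test
    return m_test * (total + 1) / 2, n_ref * m_test * (total + 1) / 12
```

With reference sample 1..10 and test sample 11..15, the statistic attains its maximum value 65.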

3.
The problem of constructing confidence limits for a scalar parameter is considered. Under weak conditions, Efron's accelerated bias-corrected bootstrap confidence limits are correct to second order in parametric families. In this article, a new method, called the automatic percentile method, for setting approximate confidence limits is proposed as an attempt to alleviate two problems inherent in Efron's method: the accelerated bias-corrected method is not fully automatic, since it requires the calculation of an analytical adjustment, and it is typically not exact, even though for many situations, particularly scalar-parameter families, exact answers are available. In broad generality, the proposed method is exact when exact answers exist, and it is second-order accurate otherwise. The automatic percentile method is automatic, and for scalar-parameter models it can be iterated to achieve higher accuracy, with the number of computations being linear in the number of iterations. However, when nuisance parameters are present, only second-order accuracy seems obtainable.
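As a point of comparison, Efron's simple percentile method, the baseline that both the accelerated bias-corrected and the proposed automatic percentile methods refine, can be sketched as follows; neither refinement is reproduced here:

```python
import random

def percentile_limits(data, stat, alpha=0.05, b=4000, seed=0):
    """Efron's basic percentile bootstrap limits: resample with replacement,
    recompute the statistic, and read off the empirical alpha/2 and 1 - alpha/2
    quantiles. Illustration only; not the automatic percentile method above."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(b))
    lo = boots[int((alpha / 2) * b)]
    hi = boots[int((1 - alpha / 2) * b) - 1]
    return lo, hi
```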

4.
The procedure of on-line process control for variables proposed by Taguchi consists of inspecting the mth item (a single item) of every m items produced and deciding, at each inspection, whether the mean has increased or not. If the value of the monitored statistic falls outside the control limits, the process is declared out-of-control and production is stopped for adjustment; otherwise, it continues. In this article, a variable-sampling-interval chart (with a longer interval L and a shorter interval m ≤ L) with two sets of limits is used. These are the warning limits (±W) and the control limits (±C), where W ≤ C. The process is stopped for adjustment when an observation falls outside the control limits or a sequence of h observations falls between the warning and control limits. The longer sampling interval is used after an adjustment or when an observation falls inside the warning limits; otherwise, the shorter sampling interval is used. The properties of an ergodic Markov chain are used to evaluate the time (in units) that the process remains in-control and out-of-control, with the aim of building an economic–statistical model. The parameters (the sampling intervals m and L, the limits W and C, and the run length h) are optimized by minimizing the cost function subject to constraints on the average run lengths (ARLs) and the conforming fraction. The proposal is more economical than the decision rule based on h = 1, L = m, and W = C, which is the model employed in earlier studies. A numerical example illustrates the proposed procedure.
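The sampling and stopping rule described above can be sketched as follows, for standardized observations monitored against symmetric limits. Variable names and the return convention (time to signal, number of inspections) are illustrative, and the economic-design optimization of the parameters is not reproduced:

```python
def vsi_monitor(obs, W, C, h, long_int, short_int):
    """Sketch of the variable-sampling-interval rule: signal when an observation
    falls beyond +/-C, or when h consecutive observations fall in the warning
    zone (between +/-W and +/-C). The long interval is used after an adjustment
    or after an in-warning-limits point; the short interval otherwise."""
    run = 0
    interval = long_int          # long interval right after an adjustment
    t = 0
    for i, x in enumerate(obs, 1):
        t += interval
        a = abs(x)
        if a > C:
            return t, i          # signal: beyond the control limits
        if a > W:
            run += 1             # point in the warning zone
            if run >= h:
                return t, i      # signal: run of h warning-zone points
            interval = short_int
        else:
            run = 0
            interval = long_int
    return None, len(obs)
```

A point beyond ±C signals immediately, while h consecutive warning-zone points also signal, after sampling at the shorter interval.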

5.
Multivariate control charts are used to monitor stochastic processes for changes and unusual observations. Hotelling's T2 statistic is calculated for each new observation and an out‐of‐control signal is issued if it goes beyond the control limits. However, this classical approach becomes unreliable as the number of variables p approaches the number of observations n, and impossible when p exceeds n. In this paper, we devise an improvement to the monitoring procedure in high‐dimensional settings. We regularise the covariance matrix to estimate the baseline parameter and incorporate a leave‐one‐out re‐sampling approach to estimate the empirical distribution of future observations. An extensive simulation study demonstrates that the new method outperforms the classical Hotelling T2 approach in power, and maintains appropriate false positive rates. We demonstrate the utility of the method using a set of quality control samples collected to monitor a gas chromatography–mass spectrometry apparatus over a period of 67 days.
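The classical statistic underlying the chart above is Hotelling's T² = (x − μ)ᵀ S⁻¹ (x − μ), which signals when it exceeds its control limit. A minimal two-variable sketch (the article's regularized, leave-one-out high-dimensional version is not reproduced):

```python
def t2_statistic(x, mean, cov_inv):
    """Hotelling's T^2 = (x - mean)' S^{-1} (x - mean) for one new observation."""
    d = [xi - mi for xi, mi in zip(x, mean)]
    p = len(d)
    return sum(d[i] * cov_inv[i][j] * d[j] for i in range(p) for j in range(p))

def inv2(m):
    """Inverse of a 2x2 covariance matrix (enough for this sketch)."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]
```

With identity covariance and zero mean, the observation (3, 4) gives T² = 25.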

6.
In this article, we present the problem of selecting a good stochastic system with high probability and minimum total simulation cost when the number of alternatives is very large. We propose a sequential approach that starts with the Ordinal Optimization procedure to select a subset that, with high probability, overlaps with the set of the actual best m% of systems. We then use Optimal Computing Budget Allocation to allocate the available computing budget so as to maximize the Probability of Correct Selection. This is followed by a Subset Selection procedure to obtain a smaller subset containing the best system among those previously selected. Finally, the Indifference-Zone procedure is used to select the best system among the survivors of the previous stage. Numerical experiments involving all these procedures show that a good stochastic system is selected with high probability and a minimum number of simulation samples when the number of alternatives is large, and that the proposed approach identifies a good system in a very short simulation time.

7.
A generalization of step-up and step-down multiple test procedures is proposed. This step-up-down procedure is useful when the objective is to reject a specified minimum number, q, out of a family of k hypotheses. If this basic objective is met at the first step, the procedure continues in a step-down manner to see if more than q hypotheses can be rejected; otherwise it proceeds in a step-up manner to see whether some number fewer than q can be rejected. The usual step-down procedure is the special case q = 1, and the usual step-up procedure is the special case q = k. Analytical and numerical comparisons between the powers of step-up-down procedures with different choices of q are made to see how these powers depend on the actual number of false hypotheses. Examples of application include comparing the efficacy of a treatment to a control for multiple endpoints and testing the sensitivity of a clinical trial for comparing the efficacy of a new treatment with a set of standard treatments.
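The two familiar special cases named above, the usual step-down (q = 1) and step-up (q = k) procedures, can be sketched with Holm- and Hochberg-type critical constants α/(k − i + 1); the general step-up-down procedure with 1 < q < k is not reproduced:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm step-down (the q = 1 special case): reject smallest p-values until
    the first failure. Returns the number of rejections."""
    p = sorted(pvals)
    k = len(p)
    rejected = 0
    for i, pi in enumerate(p):               # i = 0 .. k-1
        if pi <= alpha / (k - i):
            rejected += 1
        else:
            break
    return rejected

def hochberg_stepup(pvals, alpha=0.05):
    """Hochberg step-up (the q = k special case): scan from the largest p-value
    and reject all i smallest once one passes. Returns the number of rejections."""
    p = sorted(pvals)
    k = len(p)
    for i in range(k, 0, -1):                # p[i-1] is the i-th smallest
        if p[i - 1] <= alpha / (k - i + 1):
            return i
    return 0
```

On p-values (0.01, 0.04, 0.04) at α = 0.05, step-down rejects one hypothesis while step-up rejects all three, illustrating how the starting end matters.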

8.
This article introduces a new method, named the two-sided M-Bayesian credible limits method, to estimate reliability parameters. It is especially suitable for situations with high reliability or zero-failure data. The definition, properties, and related formulas of the two-sided M-Bayesian credible limit are proposed, and a real data set is also discussed. By means of an example we see that the two-sided M-Bayesian credible limits method is efficient and easy to perform.

9.
In this article, the Food and Drug Administration's (FDA) medical device substantial equivalence provision is briefly introduced, and some statistical methods useful for evaluating device equivalence are discussed. A sample size formula is derived for limits of agreement, which may be used to assess statistical equivalence in certain medical device settings. Simulation findings on the formula are presented, which can guide sample size selection in common practical situations. Examples of the sample size procedure are given.
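The limits of agreement mentioned above are the Bland–Altman limits d̄ ± 1.96·s_d of the paired differences between two devices; the article's sample-size formula for these limits is not reproduced:

```python
import math

def limits_of_agreement(x, y, z=1.96):
    """Bland-Altman 95% limits of agreement for paired readings from two devices:
    mean difference +/- z times the standard deviation of the differences."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    dbar = sum(d) / n
    sd = math.sqrt(sum((di - dbar) ** 2 for di in d) / (n - 1))
    return dbar - z * sd, dbar + z * sd
```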

10.
This article makes the following contributions. First, it develops a new criterion for identifying whether or not a particular time series variable is a common factor in the conventional approximate factor model. Second, by modeling observed factors as a set of potential factors to be identified, it shows how to easily pin down the factor without performing a large number of estimations. This allows the researcher to check whether or not each individual in the panel is the underlying common factor and, from there, to identify which individuals best represent the factor space by using a new clustering mechanism. Asymptotically, the developed procedure correctly identifies the factor as N and T jointly approach infinity. The procedure is shown to be quite effective in finite samples by means of Monte Carlo simulation. It is then applied to an empirical example, demonstrating that the newly developed method identifies the unknown common factors accurately.

11.
In this article we consider a test procedure that is useful in situations where data are given by n independent blocks and the experimental conditions differ between blocks. The basic idea is simple: the significance of the sample for each block is calculated and then standardized by its null mean and variance, and the sum of the standardized significances is proposed as the test statistic. The normal approximation for large n and the exact method for small n are applied in the continuous case. For the discrete case, some devices are also proposed. Several examples explain how to apply the procedure.
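Under the large-n normal approximation described above, the statistic is the sum of standardized per-block significances, re-standardized to be approximately N(0, 1). A minimal sketch, assuming the null means and variances of the per-block statistics are known:

```python
import math

def combined_z(stats, null_means, null_vars):
    """Sum of standardized per-block statistics, divided by sqrt(n) so that the
    result is approximately N(0, 1) under the null for large n."""
    z = [(s - m) / math.sqrt(v) for s, m, v in zip(stats, null_means, null_vars)]
    return sum(z) / math.sqrt(len(z))
```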

12.
This article compares eight estimators, some of which have not been compared previously, in terms of efficiency relative to the univariate mean. Four of the estimators are also compared, when testing hypotheses, in terms of actual Type I error rates. For point estimation, the modified one-step M-estimator, the one-step M-estimator, and the rfch estimator are found to be the three best choices, depending on the proportion of outliers. For actual Type I errors, the modified one-step M-estimator's and the rfch estimator's levels were between .045 and .055 in 5 out of 7 situations when real data were used in simulations.
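One common form of one-step M-estimator takes a single Newton step from the median using Huber's ψ with tuning constant 1.28 and the normalized MAD as scale; the "modified" variant compared in the article differs in detail, so treat this as a generic sketch:

```python
import statistics

def one_step_huber(data, k=1.28):
    """One Newton step from the median with Huber's psi (clipped identity).
    Scale is the MAD divided by 0.6745, making it consistent for the normal
    standard deviation. Generic sketch, not the article's modified estimator."""
    m0 = statistics.median(data)
    madn = statistics.median(abs(x - m0) for x in data) / 0.6745
    u = [(x - m0) / madn for x in data]
    psi = [max(-k, min(k, ui)) for ui in u]
    dpsi = [1.0 if abs(ui) <= k else 0.0 for ui in u]
    return m0 + madn * sum(psi) / sum(dpsi)
```

For symmetric data the correction term vanishes and the estimate stays at the median.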

13.
The Shewhart, Bonferroni-adjustment, and analysis of means (ANOM) control charts are typically applied to monitor the mean of a quality characteristic. The Shewhart and Bonferroni procedures are used to recognize special causes in a production process, where the control limits are constructed by assuming a normal distribution with known parameters (mean and standard deviation), or an approximately normal distribution when the parameters are unknown. The ANOM method is an alternative to the analysis of variance method; it can be used to establish mean control charts by applying the equicorrelated multivariate noncentral t distribution. In this article, we establish new control charts for Phase I and Phase II monitoring, based on the normal and t distributions, for the cases of known and unknown standard deviation. Our proposed methods are at least as effective as the classical Shewhart methods and have some advantages.

14.
This article develops a new distribution-free multivariate procedure for statistical process control based on the minimal spanning tree (MST), which integrates a multivariate two-sample goodness-of-fit (GOF) test based on the MST with a change-point model. Simulation results show that the proposed procedure is quite robust to nonnormally distributed data and is efficient in detecting process shifts, especially moderate to large shifts, overcoming one of the main drawbacks of most distribution-free procedures in the literature. The proposed procedure is particularly useful in start-up situations. Comparison results and a real-data example show that it has great potential for application.
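The two-sample GOF ingredient above is in the spirit of the Friedman–Rafsky MST test: build the MST of the pooled sample and count edges joining the two samples (few cross edges suggest a distributional difference). A small pure-Python sketch, with Prim's algorithm standing in for a production MST routine:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph; returns the MST edges."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge from the tree to a new vertex
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

def cross_count(points, labels):
    """Friedman-Rafsky-type statistic: number of MST edges whose endpoints carry
    different sample labels."""
    return sum(labels[u] != labels[v] for u, v in mst_edges(points))
```

Two well-separated clusters are joined by a single bridge edge, so the cross count is 1.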

15.
Consider testing multiple hypotheses using tests that can only be evaluated by simulation, such as permutation tests or bootstrap tests. This article introduces MMCTest, a sequential algorithm that gives, with arbitrarily high probability, the same classification as a specific multiple testing procedure applied to ideal p-values. The method can be used with a class of multiple testing procedures that includes the Benjamini–Hochberg false discovery rate procedure and the Bonferroni correction controlling the familywise error rate. One of the key features of the algorithm is that it stops sampling for all the hypotheses that can already be decided as rejected or non-rejected. MMCTest can be interrupted at any stage and then returns three sets of hypotheses: the rejected, the non-rejected, and the undecided. A simulation study motivated by actual biological data shows that MMCTest is usable in practice and that, despite the additional guarantee, it can be computationally more efficient than other methods.

16.

The identification of the out-of-control variable, or variables, after a multivariate control chart signals has been an appealing subject for many researchers in recent years. In this paper we propose a new method for approaching this problem based on principal components analysis. Theoretical control limits are derived and a detailed investigation of the properties and the limitations of the new method is given. A graphical technique which can be applied in some of these limiting situations is also provided.

17.

In this article, a procedure for comparisons between k (k ≥ 3) successive populations with respect to the variance is proposed, for situations where it is reasonable to assume that the variances satisfy a simple ordering. Critical constants required for the implementation of the proposed procedure are computed numerically, and selected values of the computed critical constants are tabulated. The proposed procedure for the normal distribution is extended for making comparisons between successive exponential populations with respect to the scale parameter. A comparison between the proposed procedure and its existing competitor procedures is carried out using Monte Carlo simulation. Finally, a numerical example is given to illustrate the proposed procedure.

18.
In this article, we investigate techniques for constructing a tolerance limit such that, with probability γ, at least a proportion p of the population exceeds that limit. We consider the unbalanced case and study the behavior of the limit as a function of the ni (where ni is the number of observations in the ith batch) as well as of the variance ratio. To construct the tolerance limits we use the approximation given in Thomas and Hultquist (1978). We also discuss the procedure for constructing the tolerance limits when the variance ratio is unknown. An example illustrates the results.
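For the balanced single-sample normal special case, a one-sided (p, γ) lower tolerance limit can be computed with the standard k-factor approximation; the unbalanced variance-components setting of the abstract requires the Thomas–Hultquist approximation instead:

```python
from statistics import NormalDist

def lower_tolerance_limit(data, p=0.90, gamma=0.95):
    """One-sided (p, gamma) lower tolerance limit xbar - k*s for a single normal
    sample, using the classical noncentral-t k-factor approximation
    k = (z_p + sqrt(z_p^2 - a*b)) / a with a = 1 - z_g^2 / (2(n-1)) and
    b = z_p^2 - z_g^2 / n. Sketch for the balanced single-sample case only."""
    n = len(data)
    xbar = sum(data) / n
    s = (sum((x - xbar) ** 2 for x in data) / (n - 1)) ** 0.5
    zp = NormalDist().inv_cdf(p)
    zg = NormalDist().inv_cdf(gamma)
    a = 1 - zg ** 2 / (2 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    k = (zp + (zp ** 2 - a * b) ** 0.5) / a
    return xbar - k * s
```

For n = 50, p = 0.90, γ = 0.95 the implied k-factor is about 1.64, close to tabled values.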

19.
Applied statisticians and pharmaceutical researchers are frequently involved in the design and analysis of clinical trials where at least one of the outcomes is binary. Treatments are judged by the probability of a positive binary response. A typical example is the noninferiority trial, where it is tested whether a new experimental treatment is practically not inferior to an active comparator with a prespecified margin δ. Except for the special case of δ = 0, no exact conditional test is available although approximate conditional methods (also called second‐order methods) can be applied. However, in some situations, the approximation can be poor and the logical argument for approximate conditioning is not compelling. The alternative is to consider an unconditional approach. Standard methods like the pooled z‐test are already unconditional although approximate. In this article, we review and illustrate unconditional methods with a heavy emphasis on modern methods that can deliver exact, or near exact, results. For noninferiority trials based on either rate difference or rate ratio, our recommendation is to use the so‐called E‐procedure, based on either the score or likelihood ratio statistic. This test is effectively exact, computationally efficient, and respects monotonicity constraints in practice. We support our assertions with a numerical study, and we illustrate the concepts developed in theory with a clinical example in pulmonary oncology; R code to conduct all these analyses is available from the authors.
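The standard pooled z-test mentioned above can be sketched for the rate-difference noninferiority hypothesis H0: p1 − p2 ≤ −δ. The pooled-variance form below is one common choice (restricted Farrington–Manning-type variance estimates are also used), and the exact E-procedure is not reproduced:

```python
import math

def pooled_z_noninferiority(x1, n1, x2, n2, delta):
    """Approximate unconditional pooled z statistic for noninferiority of
    treatment 1 versus comparator 2 with margin delta: large values favor
    rejecting H0: p1 - p2 <= -delta. Sketch with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2 + delta) / se
```

With 80/100 responders in each arm and δ = 0.1, the statistic is about 1.77, just short of the one-sided 5% critical value 1.645 being comfortably exceeded.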

20.
The traditional non-parametric bootstrap (referred to as the n-out-of-n bootstrap) is a widely applicable and powerful tool for statistical inference, but in important situations it can fail. It is well known that by using a bootstrap sample of size m, different from n, the resulting m-out-of-n bootstrap provides a method for rectifying the traditional bootstrap inconsistency. Moreover, recent studies have shown that interesting cases exist where it is better to use the m-out-of-n bootstrap in spite of the fact that the n-out-of-n bootstrap works. In this paper, we discuss another case by considering its application to hypothesis testing. Two new data-based choices of m are proposed in this set-up. The results of simulation studies are presented to provide empirical comparisons between the performance of the traditional bootstrap and the m-out-of-n bootstrap, based on the two data-dependent choices of m, as well as on an existing method in the literature for choosing m. These results show that the m-out-of-n bootstrap, based on our choice of m, generally outperforms the traditional bootstrap procedure as well as the procedure based on the choice of m proposed in the literature.
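The classic inconsistency motivating the m-out-of-n bootstrap is resampling the sample maximum: with m = n, a fraction ≈ 1 − (1 − 1/n)ⁿ ≈ 0.63 of bootstrap replicates equal the observed maximum, whereas a smaller m shrinks that atom. A minimal sketch (the article's data-based choices of m are not reproduced):

```python
import random

def boot_dist(data, m, stat, b=2000, seed=0):
    """m-out-of-n bootstrap distribution of stat; m = len(data) recovers the
    traditional n-out-of-n bootstrap."""
    rng = random.Random(seed)
    return [stat([rng.choice(data) for _ in range(m)]) for _ in range(b)]
```

For data 0..99, roughly 63% of n-out-of-n replicates of the maximum equal 99, but only about 10% of 10-out-of-100 replicates do.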
