Similar Articles

20 similar articles found.
1.
In this paper, we consider the supplier selection problem, which deals with comparing two one-sided processes and selecting the one with the higher capability. We first review two existing approximation approaches and an exact approach, which we refer to as the division method. We then develop a new exact approach called the subtraction method and compare the two exact methods in terms of selection power. The results show that the proposed subtraction method is indeed more powerful than the division method. A two-phase selection procedure based on the subtraction method is then developed for practical applications. Some computational results are tabulated for practitioners' convenience.

2.
In a two-treatment trial, a two-sided test is often used to reach a conclusion. A two-sided test is usually of interest when there is no prior preference between the two treatments and a three-decision framework is wanted. When the standard control is just as good as the new experimental treatment (with the same toxicity and cost), we accept both treatments; only when the standard control is clearly worse or better than the new experimental treatment do we choose a single treatment. In this paper, we extend the concept of a two-sided test to the multiple-treatment trial, where three or more treatments are involved. The procedure turns out to be a subset selection procedure; however, the theoretical framework and performance requirement differ from those of existing subset selection procedures. Two procedures (exclusion and inclusion) are developed here for the case of normal data with equal known variance. If the sample size is large, they can also be applied with unknown variance, and to binomial data or survival data with random censoring.

3.
Evaluating and comparing process capabilities are important tasks of production management. Manufacturers should apply the process with the highest capability among competing processes. A process group selection method is developed to solve the process selection problem based on overall yields. The goal is to select the processes with the highest overall yield among I processes under multiple quality characteristics, I > 2. The proposed method uses Bonferroni adjustment to control the overall error rate of comparing multiple processes. The critical values and the required sample sizes for designated powers are provided for practical use.

4.
A conformance proportion is an important and useful index to assess industrial quality improvement. Statistical confidence limits for a conformance proportion are usually required not only to perform statistical significance tests, but also to provide useful information for determining practical significance. In this article, we propose approaches for constructing statistical confidence limits for a conformance proportion of multiple quality characteristics. Under the assumption that the variables of interest are distributed with a multivariate normal distribution, we develop an approach based on the concept of a fiducial generalized pivotal quantity (FGPQ). Without any distribution assumption on the variables, we apply some confidence interval construction methods for the conformance proportion by treating it as the probability of a success in a binomial distribution. The performance of the proposed methods is evaluated through detailed simulation studies. The results reveal that the simulated coverage probability (cp) for the FGPQ-based method is generally larger than the claimed value. On the other hand, one of the binomial distribution-based methods, that is, the standard method suggested in classical textbooks, appears to have smaller simulated cps than the nominal level. Two alternatives to the standard method are found to maintain their simulated cps sufficiently close to the claimed level, and hence their performances are judged to be satisfactory. In addition, three examples are given to illustrate the application of the proposed methods.
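Binomial-based intervals like those discussed in the abstract above can be illustrated with the standard (Wald) interval from classical textbooks and one common alternative, the Wilson score interval. This is a minimal sketch; whether the Wilson interval matches either of the paper's two alternatives is an assumption, and the function names are illustrative.

```python
import math

def wald_ci(x, n, z=1.96):
    """Standard textbook (Wald) interval for a binomial proportion,
    clamped to [0, 1]; known to undercover for p near 0 or 1."""
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_ci(x, n, z=1.96):
    """Wilson score interval, a common alternative with coverage
    closer to the nominal level than the Wald interval."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

For a high conformance proportion such as 95/100, the Wilson interval pulls both limits toward 1/2, avoiding the Wald interval's overshoot near the boundary.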

5.
In drug development, non-inferiority tests are often employed to determine the difference between two independent binomial proportions. Many test statistics for non-inferiority are based on the frequentist framework; research on non-inferiority in the Bayesian framework is limited. In this paper, we suggest a new Bayesian index τ = P(π1 > π2 − Δ0 | X1, X2), where X1 and X2 denote binomial random variables with numbers of trials n1 and n2 and success probabilities π1 and π2, respectively, and Δ0 > 0 is the non-inferiority margin. We present two methods for calculating τ: an approximate method that uses the normal approximation and an exact method that uses the exact posterior PDF. We compare the approximate probability with the exact probability for τ. Finally, we present results from actual clinical trials to show the utility of the index τ.
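An index of the form τ = P(π1 > π2 − Δ0 | X1, X2) can also be estimated by simple posterior simulation. The sketch below assumes independent uniform Beta(1, 1) priors (so each posterior is Beta(x + 1, n − x + 1)); this prior choice and the Monte Carlo route are assumptions of the sketch, not the paper's approximate or exact method.

```python
import random

def bayes_noninferiority_index(x1, n1, x2, n2, delta0, draws=100_000, seed=0):
    """Monte Carlo estimate of tau = P(pi1 > pi2 - delta0 | X1, X2)
    under independent Beta(1, 1) priors, so the posteriors are
    Beta(x + 1, n - x + 1). Returns a value in [0, 1]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(draws):
        p1 = rng.betavariate(x1 + 1, n1 - x1 + 1)
        p2 = rng.betavariate(x2 + 1, n2 - x2 + 1)
        if p1 > p2 - delta0:
            hits += 1
    return hits / draws
```

With equal observed proportions (40/50 in both arms) and a margin Δ0 = 0.1, τ comes out near 0.9, reflecting that the new treatment is plausibly non-inferior.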

6.
7.
The squared error loss function applied to Bayesian predictive distributions is investigated as a variable selection criterion in linear regression equations. It is illustrated that “cost-free” variables may be eliminated if they are poor predictors. Regression models where the predictors are fixed and where they are stochastic are both considered. An empirical examination of the criterion and a comparison with other techniques are presented.

8.
In this paper, a new test method for analyzing unreplicated factorial designs is proposed and illustrated with examples. An extensive simulation with the standard 16-run designs was carried out to compare the proposed method with three other existing methods. Besides the usual power criterion, three additional versions of power, Power I–III, were also used to evaluate the performance of the compared methods. The simulation study shows that the proposed method has a higher ability than the three compared methods to identify all active effects without misidentifying any inactive effects as active.

9.
The problem of selection of the best multivariate population is given a new formulation which does not involve reducing the populations to univariate quantities. This formulation's solution is developed for known, and (using the Heteroscedastic Method) also for unknown, variance-covariance matrices. Preference reversals and arbitrary nonlinear preference functions are explicitly allowed in this new theory.

10.
Variable selection in multiple linear regression models is considered. It is shown that for the special case of orthogonal predictor variables, an adaptive pre-test-type procedure proposed by Venter and Steel [Simultaneous selection and estimation for the some zeros family of normal models, J. Statist. Comput. Simul. 45 (1993), pp. 129–146] is almost equivalent to least angle regression, proposed by Efron et al. [Least angle regression, Ann. Stat. 32 (2004), pp. 407–499]. A new adaptive pre-test-type procedure is proposed, which extends the procedure of Venter and Steel to the general non-orthogonal case in a multiple linear regression analysis. This new procedure is based on a likelihood ratio test where the critical value is determined data-dependently. A practical illustration and results from a simulation study are presented.

11.
Statistical methods for variable selection and prediction can be challenging when missing covariates exist. Although multiple imputation (MI) is a universally accepted technique for solving the missing data problem, how to combine the MI results for variable selection is not quite clear, because different imputations may result in different selections. Widely applied variable selection methods include the sparse partial least-squares (SPLS) method and penalized least-squares methods, e.g. the elastic net (ENet) method. In this paper, we propose an MI-based weighted elastic net (MI-WENet) method that is based on stacked MI data and a weighting scheme for each observation in the stacked data set. In the MI-WENet method, MI accounts for sampling and imputation uncertainty for missing values, and the weight accounts for the observed information. Extensive numerical simulations are carried out to compare the proposed MI-WENet method with competing alternatives such as SPLS and ENet. In addition, we apply the MI-WENet method to examine the predictor variables for endothelial function, characterized by the median effective dose (ED50) and maximum effect (Emax) in an ex-vivo phenylephrine-induced extension and acetylcholine-induced relaxation experiment.

12.
The use of indices as an estimation tool of process capability is long established among statistical quality professionals, and numerous capability indices have been proposed in recent years. Cpm is one of the most widely used capability indices, and its estimation has attracted much interest. In this paper, we propose a new method for constructing an approximate confidence interval for the index Cpm. The proposed method is based on the asymptotic distribution of the index Cpm obtained by the delta method: under some regularity conditions, the distribution of an estimator of the process capability index Cpm is asymptotically normal.
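The index itself is straightforward to estimate from data: Cpm = (USL − LSL) / (6·sqrt(σ² + (μ − T)²)), where T is the target. The sketch below computes only the point estimate; the paper's delta-method confidence interval additionally requires the asymptotic variance of this estimator, which is not reproduced here.

```python
import math

def cpm(data, lsl, usl, target):
    """Point estimate of Cpm = (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2)).
    Uses the maximum-likelihood (divide-by-n) variance estimate; the
    denominator penalizes both spread and deviation from target."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    tau2 = var + (mean - target) ** 2
    return (usl - lsl) / (6 * math.sqrt(tau2))
```

Shifting the target away from the process mean inflates the denominator and so reduces Cpm, which is exactly the penalty that distinguishes Cpm from Cp.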

13.
14.
A sample size selection procedure for paired comparisons of means is presented that controls the half-width of the confidence intervals while allowing for unequal variances of treatment means.

15.
The process capability index Cpk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpk > 4/3. This inequality can be expressed graphically using a process capability plot, a plot in the plane defined by the process mean and the process standard deviation showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule for assessing process capability at a given significance level. We consider safety regions to be used for the index Cpk. Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.

16.
Franklin and Wasserman (1991) introduced the use of bootstrap sampling procedures for deriving nonparametric confidence intervals for the process capability index, Cpk, which are applicable when at least twenty data points are available. This represents a significant reduction in the usually recommended sample requirement of 100 observations (see Gunther 1989). To facilitate and encourage the use of these procedures, a FORTRAN program is provided for computing confidence intervals for Cpk. Three methods are provided for this calculation: the standard method, the percentile confidence interval, and the bias-corrected percentile confidence interval.
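Of the three intervals mentioned above, the percentile bootstrap is the simplest to sketch: resample the data with replacement, recompute Cpk on each resample, and take empirical quantiles. This Python sketch covers only the percentile method, not the standard or bias-corrected variants, and its defaults (2000 resamples, 90% level) are illustrative assumptions.

```python
import math
import random

def cpk(sample, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * s), s the sample sd."""
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    return min(usl - m, m - lsl) / (3 * s)

def bootstrap_ci_cpk(sample, lsl, usl, b=2000, alpha=0.1, seed=1):
    """Percentile bootstrap interval for Cpk: sort b resampled Cpk
    values and read off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        cpk([rng.choice(sample) for _ in sample], lsl, usl) for _ in range(b)
    )
    lo = stats[int(b * alpha / 2)]
    hi = stats[int(b * (1 - alpha / 2)) - 1]
    return lo, hi
```

Because Cpk involves both the mean and the standard deviation, the bootstrap sidesteps the awkward sampling distribution that an analytical interval would have to handle.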

17.
We propose a nonparametric procedure to test for changes in correlation matrices at an unknown point in time. The new test requires constant expectations and variances, but only mild assumptions on the serial dependence structure, and has considerable power in finite samples. We derive the asymptotic distribution under the null hypothesis of no change as well as local power results and apply the test to stock returns.

18.
In sampling inspection by variables, an item is considered defective if its quality characteristic Y falls below some specification limit L0. We consider switching to a new supplier if we can be sure that the proportion of defective items for the new supplier is smaller than the proportion defective for the present supplier.

Assume that Y has a normal distribution. A test for comparing these proportions is developed. A simulation study of the performance of the test is presented.

19.
Tests that combine p-values, such as Fisher's product test, are popular for testing the global null hypothesis H0 that each of n component null hypotheses, H1,…,Hn, is true versus the alternative that at least one of H1,…,Hn is false, since they are more powerful than classical multiple tests such as the Bonferroni test and the Simes test. Recent modifications of Fisher's product test, popular in the analysis of large-scale genetic studies, include the truncated product method (TPM) of Zaykin et al. (2002), the rank truncated product (RTP) test of Dudbridge and Koeleman (2003) and, more recently, a permutation-based test, the adaptive rank truncated product (ARTP) method of Yu et al. (2009). The TPM and RTP methods require users to specify a truncation point. The ARTP method improves the performance of the RTP method by optimizing selection of the truncation point over a set of pre-specified candidate points. In this paper we extend the ARTP by proposing to use all the possible truncation points {1,…,n} as the candidate truncation points. Furthermore, we derive the theoretical probability distribution of the test statistic under the global null hypothesis H0. Simulations are conducted to compare the performance of the proposed test with the Bonferroni test, the Simes test, the RTP test, and Fisher's product test. The simulation results show that the proposed test has higher power than the Bonferroni test and the Simes test, as well as the RTP method. It is also significantly more powerful than Fisher's product test when the number of truly false hypotheses is small relative to the total number of hypotheses, and has comparable power to Fisher's product test otherwise.
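As a minimal illustration of the classical baseline these methods build on, Fisher's product test has a closed form: under H0 the statistic T = −2·Σ ln p_i is chi-square with 2n degrees of freedom, and for even degrees of freedom the tail probability is a finite sum. The truncated and rank-truncated extensions themselves are not reproduced in this sketch.

```python
import math

def fisher_product_test(pvals):
    """Fisher's product test for combining n independent p-values.
    Returns (T, p) where T = -2 * sum(log p_i) and p is the exact
    chi-square tail probability with 2n degrees of freedom:
    P(chi2_{2n} > t) = exp(-t/2) * sum_{k=0}^{n-1} (t/2)^k / k!."""
    n = len(pvals)
    t = -2.0 * sum(math.log(p) for p in pvals)
    half = t / 2.0
    tail = math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(n))
    return t, tail
```

Two uninformative p-values of 0.5 combine to a non-significant global p of about 0.60, while three p-values of 0.01 combine to roughly 0.0001, showing how evidence accumulates multiplicatively.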

20.
We consider a Bayesian nonignorable model to accommodate a nonignorable selection mechanism for predicting small area proportions. Our main objective is to extend a model on selection bias in a previously published paper, coauthored by four authors, to accommodate small areas. These authors assume that the survey weights (or their reciprocals that we also call selection probabilities) are available, but there is no simple relation between the binary responses and the selection probabilities. To capture the nonignorable selection bias within each area, they assume that the binary responses and the selection probabilities are correlated. To accommodate the small areas, we extend their model to a hierarchical Bayesian nonignorable model and we use Markov chain Monte Carlo methods to fit it. We illustrate our methodology using a numerical example obtained from data on activity limitation in the U.S. National Health Interview Survey. We also perform a simulation study to assess the effect of the correlation between the binary responses and the selection probabilities.
