1.
The existing process capability indices (PCIs) assume that the distribution of the process being investigated is normal. For non-normal distributions, PCIs become unreliable in that they may indicate the process is capable when in fact it is not. In this paper, we propose a new index which can be applied to any distribution. The proposed index, Cf, is directly related to the probability of non-conformance of the process. For a given random sample, the estimation of Cf boils down to estimating non-parametrically the tail probabilities of an unknown distribution. The approach discussed in this paper is based on the works of Pickands (1975) and Smith (1987). We also discuss the construction of bootstrap confidence intervals for Cf based on the so-called accelerated bias-correction method (BCa). Several simulations are carried out to demonstrate the flexibility and applicability of Cf. Two real-life data sets are analyzed using the proposed index.
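The abstract's non-parametric tail-probability estimation, following Pickands (1975) and Smith (1987), is in the spirit of the peaks-over-threshold method: fit a generalized Pareto distribution to exceedances over a high threshold and combine it with the empirical exceedance rate. A minimal sketch of that idea (not the authors' Cf index itself; the data, threshold choice, and specification limit below are illustrative assumptions):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=500)  # hypothetical skewed process data
usl = 5.0                                            # hypothetical upper specification limit

# Peaks-over-threshold: fit a generalized Pareto distribution (GPD) to
# exceedances over a high threshold u (here the empirical 90th percentile).
u = np.quantile(data, 0.90)
excesses = data[data > u] - u
shape, loc, scale = genpareto.fit(excesses, floc=0.0)

# Estimated non-conformance probability P(X > USL) decomposes as
# P(X > u) * P(X - u > USL - u | X > u).
p_exceed_u = np.mean(data > u)
p_nc = p_exceed_u * genpareto.sf(usl - u, shape, loc=0.0, scale=scale)
print(p_nc)
```

The same decomposition applies to a lower specification limit with the sign reversed; a capability index like Cf would then be some monotone function of the total non-conformance probability.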

2.
ABSTRACT

The concordance statistic (C-statistic) is commonly used to assess the predictive performance (discriminatory ability) of a logistic regression model. Although there are several approaches to the C-statistic, their performance in quantifying the improvement in predictive accuracy due to the inclusion of novel risk factors or biomarkers has been heavily criticized in the literature. This paper proposes a model-based concordance-type index, CK, for use with the logistic regression model. CK and its asymptotic sampling distribution are derived following Gonen and Heller's approach for the Cox PH model for survival data, with the modifications necessary for binary data. Unlike the existing C-statistics for the logistic model, it quantifies the concordance probability by taking the difference in the predicted risks between the two subjects in a pair rather than ranking them, and hence is able to quantify the incremental value of a new risk factor or marker. A simulation study revealed that CK performs well when the model parameters are correctly estimated for large samples, and shows greater improvement in quantifying the additional predictive value of a new risk factor or marker than the existing C-statistics. Furthermore, an illustration using three datasets supports the findings of the simulation study.
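For contrast with the model-based CK described above, the standard rank-based C-statistic counts, among all (event, non-event) pairs, the fraction in which the event subject receives the higher predicted risk. A small sketch under assumed simulated data (the data-generating slope and sample size are illustrative, not from the paper):

```python
import numpy as np

def c_statistic(y, p):
    """Rank-based concordance: fraction of (event, non-event) pairs in
    which the event subject has the higher predicted risk (ties count 1/2)."""
    p1, p0 = p[y == 1], p[y == 0]
    diff = p1[:, None] - p0[None, :]          # all pairwise risk differences
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Simulated logistic data with a single predictor (assumed values).
rng = np.random.default_rng(1)
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * x)))   # true predicted risks
y = rng.binomial(1, p)

c = c_statistic(y, p)
print(c)
```

A CK-style index would instead aggregate the risk differences themselves rather than only their signs, which is what lets it register the incremental value of a new marker.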

3.
Real-time polymerase chain reaction (PCR) is a reliable quantitative technique in gene expression studies, and the statistical analysis of real-time PCR data is crucial for interpreting the results. Statistical procedures for analyzing real-time PCR data determine the slope of the regression line and calculate the reaction efficiency, and mathematical functions are applied to quantify the target gene relative to the reference gene(s). Moreover, these techniques compare Ct (threshold cycle) numbers between the control and treatment groups. There are many different procedures in SAS for evaluating real-time PCR data. In this study, the efficiency-calibrated model and the delta-delta Ct model are statistically tested and explained. Several methods were tested to compare control and treatment means of Ct: the t-test (parametric), the Wilcoxon test (non-parametric), and multiple regression. Results showed that the applied methods led to similar conclusions, and no significant difference was observed between the gene expression measurements obtained by the relative method.
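The delta-delta Ct model mentioned above has a simple closed form (the Livak method): normalize the target gene's Ct to the reference gene within each group, take the difference between groups, and express the result as a fold change of 2^(−ΔΔCt), assuming roughly 100% amplification efficiency. A worked example with hypothetical mean Ct values:

```python
# Delta-delta Ct (Livak) relative quantification, assuming ~100% efficiency.
# All Ct values below are hypothetical illustrative means.
ct_target_treated, ct_ref_treated = 22.1, 16.0
ct_target_control, ct_ref_control = 24.5, 16.2

delta_ct_treated = ct_target_treated - ct_ref_treated    # 6.1
delta_ct_control = ct_target_control - ct_ref_control    # 8.3
delta_delta_ct = delta_ct_treated - delta_ct_control     # -2.2

fold_change = 2 ** (-delta_delta_ct)                     # 2^2.2, about 4.59-fold up
print(round(fold_change, 2))
```

The efficiency-calibrated model replaces the base 2 with the measured amplification efficiencies of the target and reference assays.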

4.
ABSTRACT

Considerable effort has been spent on the development of confidence intervals for process capability indices (PCIs) based on the sampling distribution of the PCI or the transformed PCI. However, there is still no definitive way to construct a closed interval for a PCI. The aim of this study is to develop closed intervals for the PCIs Cpu, Cpl, and Spk based on Boole's inequality and De Morgan's laws. The relationships between sample size, significance level, and the confidence intervals of Cpu, Cpl, and Spk are investigated. A testing model for interval estimation of Cpu, Cpl, and Spk is then built as a powerful tool for measuring the quality performance of a product. Finally, an applied example demonstrates the effectiveness and applicability of the proposed method and the testing model.
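The three indices named above have standard normal-theory definitions: Cpu and Cpl measure the distance from the mean to the upper and lower specification limits in units of 3σ, and Boyles' Spk maps the overall conforming probability back to the index scale. A sketch of those point estimators (the process parameters and limits below are illustrative assumptions; the paper's contribution is the closed confidence intervals, not these formulas):

```python
from scipy.stats import norm

def pci(mu, sigma, lsl, usl):
    """One-sided indices Cpu, Cpl and Boyles' yield-based Spk."""
    cpu = (usl - mu) / (3 * sigma)
    cpl = (mu - lsl) / (3 * sigma)
    # Spk: invert the total conforming probability back to an index value.
    p_conform = 0.5 * norm.cdf(3 * cpu) + 0.5 * norm.cdf(3 * cpl)
    spk = norm.ppf(p_conform) / 3
    return cpu, cpl, spk

# Hypothetical process: mean 10.2, sigma 0.5, specs [8, 12].
cpu, cpl, spk = pci(mu=10.2, sigma=0.5, lsl=8.0, usl=12.0)
print(cpu, cpl, spk)
```

Interval procedures such as the one in the abstract then bound these quantities jointly, e.g. via Boole's inequality on the separate Cpu and Cpl events.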

5.
Stuart's (1953) measure of association in contingency tables, tC, based on Kendall's (1962) t, is compared with Goodman and Kruskal's (1954, 1959, 1963, 1972) measure G. First, it is proved that |G| ≥ |tC|; it is then shown that the upper bound for the asymptotic variance of G is not necessarily smaller than the upper bound for the asymptotic variance of tC. It is proved, however, that the upper bound for the coefficient of variation of G cannot be larger in absolute value than that for tC. The asymptotic variance of tC is also derived, yielding an upper bound for this variance that is sharper than Stuart's (1953).
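Both measures are built from the concordant and discordant pair counts of an ordered contingency table, so the inequality |G| ≥ |tC| is easy to check numerically. A sketch using the textbook formulas (the 3×3 table is an illustrative assumption):

```python
import numpy as np

def concordant_discordant(table):
    """Count concordant (C) and discordant (D) unordered pairs in an r x c table."""
    t = np.asarray(table, dtype=float)
    r, c = t.shape
    C = D = 0.0
    for i in range(r):
        for j in range(c):
            C += t[i, j] * t[i + 1:, j + 1:].sum()   # partners below and to the right
            D += t[i, j] * t[i + 1:, :j].sum()       # partners below and to the left
    return C, D

table = [[20, 5, 5], [5, 20, 5], [5, 5, 20]]         # hypothetical ordered table
C, D = concordant_discordant(table)
n = float(np.sum(table))
m = min(np.shape(table))

G = (C - D) / (C + D)                                # Goodman-Kruskal gamma
tau_c = 2 * m * (C - D) / (n ** 2 * (m - 1))         # Stuart's tau-c
assert abs(G) >= abs(tau_c)                          # the inequality proved in the paper
print(G, tau_c)
```

Gamma conditions on the untied pairs only, while tau-c divides by a maximum attainable value, which is why gamma can never be the smaller of the two in absolute value.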

6.
In this article, we investigate bootstrap-calibrated generalized confidence limits for the process capability index Cpk under the one-way random effect model. We also derive Bissell's approximation formula for the lower confidence limit using Satterthwaite's method, calculate its coverage probabilities and expected values, and compare it with the standard bootstrap (SB) method and the generalized confidence interval method. The simulation results indicate that the confidence limit obtained offers satisfactory coverage probabilities. The proposed method is illustrated with simulation studies and data sets.

7.
In the context of the general linear model E(Y) = Xβ, possibly subject to restrictions Rβ = r, two secondary parameters may be defined by Θi = Gi E(Y) − θoi = Ci β − θoi, i = 1, 2, with corresponding nonconstant hypotheses Hi: Θi = 0. The following possible relations are defined: Θ1 is dependent upon / equivalent to / identical to Θ2; H1 is a subhypothesis of / is identical to H2. Necessary and sufficient conditions, involving straightforward matrix computations, are presented for each relation. Comparisons of secondary parameters and hypotheses are illustrated with an incomplete, unbalanced 3 × 4 factorial design from Searle, in which, using a constrained version of Searle's model, the parameters and hypotheses in the incomplete, unbalanced design are shown to be identical to the parameters one would define if complete balanced data were available. Techniques for computing simplified definitions are illustrated.

8.
We consider the problem of deciding which of a set of p independent variables x1, x2, …, xp we are to regard as being functionally involved in the mean of a dependent normal random variable Y, and of estimating E(Y) in terms of the chosen x's. This mean is an unknown function (assumed to be twice differentiable) of some or all of the x's, so the problem is of wide relevance. We approximate the hypersurface in two different ways, and select within each approximation:

(a) For the situation where the mean of Y is assumed to be a linear function of the x's, we use one of the optimum methods of selection.

(b) More generally, in the space of the x's the function will be approximately linear in a relatively small region. Accordingly, this p-dimensional space is subdivided into smaller regions by a clustering procedure, and a hyperplane is fitted within each region to approximate the unknown response surface. An adaptation of an optimum-regressor-selection procedure is then used to assist in the selection of the regressors.

Approximate F tests are given to choose between models, including deciding how many x's to retain. Alternatively, applying Akaike's extended maximum likelihood principle provides another way of choosing between the models and of selecting regressor variables. The methods are applied to data on glass manufacture.

9.
The Akaike information criterion (AIC) is developed for selecting the variables of the nested error regression model, in which an unobservable random effect is present. Using the idea of decomposing the likelihood into the "within" and "between" parts of the analysis of variance, we derive the AIC when the number of groups is large and the ratio of the variances of the random effects and the random errors is an unknown parameter. The proposed AIC is compared, via simulation, with Mallows' Cp, Akaike's AIC, and Sugiura's exact AIC. Based on the rates of selecting the true model, the proposed AIC is shown to perform better.

10.
We employ quantile regression fixed effects models to estimate the income-pollution relationship for NOx (nitrogen oxide) and SO2 (sulfur dioxide) using U.S. data. Conditional median results suggest that conditional mean methods provide overly optimistic estimates of emissions reduction for NOx, while the opposite is found for SO2. Deleting outlier states reverses the absence of a turning point for SO2 in the conditional mean model, while the conditional median model is robust to them. We also document the relationship's sensitivity to the inclusion of additional covariates for NOx, and undertake simulations to shed light on some estimation issues of the methods employed.

11.
In this work, we study Ds-optimal designs for Kozak's tree taper model. The approximate Ds-optimal designs are found to be invariant to tree size and hence provide a basis for constructing a general replication-free Ds-optimal design. Although the designs do not depend on the value of the parameter p of Kozak's model, they are sensitive to the values of the s × 1 subset parameter vector of the model. The 12-point replication-free design (with 91% efficiency) suggested in this study is believed to reduce the cost and time of data collection and, more importantly, to precisely estimate the subset parameters of interest.

12.
Various procedures exist for making inferences about a linear combination of two independent binomial proportions (Wald's classic method; the exact, approximate, and maximized score methods; and the Newcombe-Zou method). This article defines and evaluates 25 different methods of inference and selects those with the best behavior. In general terms, the optimal method is the classic Wald method applied to the data to which z²α/2/4 successes and z²α/2/4 failures are added (≈ 1 of each if α = 5%), provided no sample proportion has a value of 0 or 1 (otherwise the added increment may be different).
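The recommended adjustment is easy to state concretely: add z²α/2/4 pseudo-successes and pseudo-failures to each sample, then apply the ordinary Wald interval to the linear combination. A sketch for the common case of a difference of proportions (the counts below are illustrative assumptions, and this is the generic adjusted-Wald recipe, not the article's full 25-method comparison):

```python
from math import sqrt
from scipy.stats import norm

def adj_wald_ci(x1, n1, x2, n2, l1=1.0, l2=-1.0, alpha=0.05):
    """Wald CI for l1*p1 + l2*p2 after adding z^2/4 successes and
    z^2/4 failures to each sample (Agresti-style adjustment)."""
    z = norm.ppf(1 - alpha / 2)
    h = z ** 2 / 4                       # about 0.96, i.e. roughly 1 when alpha = 5%
    p1 = (x1 + h) / (n1 + 2 * h)
    p2 = (x2 + h) / (n2 + 2 * h)
    est = l1 * p1 + l2 * p2
    se = sqrt(l1 ** 2 * p1 * (1 - p1) / (n1 + 2 * h)
              + l2 ** 2 * p2 * (1 - p2) / (n2 + 2 * h))
    return est - z * se, est + z * se

lo, hi = adj_wald_ci(15, 40, 8, 35)      # hypothetical counts: 15/40 vs 8/35
print(lo, hi)
```

When a sample proportion is 0 or 1, the abstract notes the added increment may need to differ, so this default should not be used blindly at the boundary.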

Supplemental materials are available for this article. Go to the publisher's online edition of Communications in Statistics - Simulation and Computation to view the free supplemental file.

13.
This paper derives the Akaike information criterion (AIC), the corrected AIC, the Bayesian information criterion (BIC), and Hannan and Quinn's information criterion for approximate factor models, assuming a large number of cross-sectional observations, and studies the consistency properties of these information criteria. It also reports extensive simulation results comparing the performance of the extant and new procedures for selecting the number of factors. The simulation results show the difficulty of determining which criterion performs best. In practice, it is advisable to consider several criteria at the same time, especially Hannan and Quinn's information criterion, Bai and Ng's ICp2 and BIC3, and Onatski's and Ahn and Horenstein's eigenvalue-based criteria. The model-selection criteria considered in this paper are also applied to Stock and Watson's two macroeconomic data sets. The results differ considerably depending on the criterion in use, but there is evidence suggesting five factors for the first data set and five to seven factors for the second.

14.
In a recent paper (J. Statist. Comput. Simul., 1995, Vol. 53, pp. 195-203), P. A. Wright proposed a new process capability index Cs which generalizes the Pearn-Kotz-Johnson index Cpmk by taking into account the skewness (in addition to the deviation of the mean from the target already incorporated in Cpmk). The purpose of this article is to study the consistency and asymptotics of an estimator Ĉs of Cs. The asymptotic distribution provides insight into some desirable properties of the estimator which are not apparent from its original definition.

15.
We study a system of two non-identical and separate M/M/1-type queues with capacities (buffers) C1 < ∞ and C2 = ∞, respectively, served by a single server that alternates between the queues. The server's switching policy is threshold-based and, in contrast to other threshold models, is determined by the state of the queue that is not being served. That is, when neither queue is empty while the server attends Qi (i = 1, 2), the server switches to the other queue as soon as the latter reaches its threshold. For the case when a served queue becomes empty, we consider two switching scenarios: (i) work-conserving and (ii) non-work-conserving. We analyze the two scenarios using matrix geometric methods and obtain explicitly the rate matrix R, whose entries are given in terms of the roots of the determinants of two underlying matrices. Numerical examples are presented and extreme cases are investigated.

16.
The Dirichlet-multinomial model is considered as a model for cluster sampling. The model assumes that the design's covariance matrix is a constant times the covariance under multinomial sampling, and its use requires estimating a parameter C that measures the clustering effect. In this paper, a regression estimate of C is obtained, and an approximate distribution of this estimator is derived using asymptotic techniques. A goodness-of-fit statistic for testing the fit of the Dirichlet-multinomial model, based on the same asymptotic techniques, is also obtained. These statistics provide a means of knowing when the data satisfy the model assumption. The results are used to analyze data concerning the authorship of Greek prose.

17.
By considering separately B and C, the frequencies of individuals who consistently gave positive or negative answers in the before and after responses, a new revised version of McNemar's test is derived. It improves upon Lu's revised formula, which considers B and C together. When both B and C are 0, the new revised version produces the same results as McNemar's test; when one of B and C is 0, it produces the same results as Lu's version. Compared to Lu's version, the new revised test is a more complete revision of McNemar's test.

18.
We derive two C(α) statistics and the likelihood-ratio statistic for testing the equality of several correlation coefficients from k ≥ 2 independent random samples from bivariate normal populations. The asymptotic relationship of the C(α) tests, the likelihood-ratio test, and a statistic based on the asymptotic normality of Fisher's Z-transform of the sample correlation coefficient is established. A comparative performance study, in terms of size and power, is then conducted by Monte Carlo simulation. The likelihood-ratio statistic is often too liberal, while the statistic based on Fisher's Z-transform is conservative. The performance of the two C(α) statistics is identical: they maintain the significance level well and have almost the same power as the other statistics when empirically calculated critical values of the same size are used. The C(α) statistic based on a noniterative estimate of the common correlation coefficient (based on Fisher's Z-transform) is recommended.
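The Fisher-Z competitor mentioned in the abstract is the classical chi-square test of homogeneity of correlations: transform each sample correlation with z = arctanh(r), weight by n − 3, and sum the squared deviations from the weighted mean. A sketch with hypothetical sample correlations and sizes (this is the standard Z-transform test, not the paper's C(α) statistics):

```python
import numpy as np
from scipy.stats import chi2

def fisher_z_equality_test(rs, ns):
    """Chi-square test of H0: rho_1 = ... = rho_k via Fisher's Z-transform.
    Statistic: sum_i (n_i - 3) * (z_i - z_bar)^2, df = k - 1."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                    # Fisher's Z-transform
    w = ns - 3                            # asymptotic precision weights
    z_bar = np.sum(w * z) / np.sum(w)
    stat = np.sum(w * (z - z_bar) ** 2)
    pval = chi2.sf(stat, df=len(rs) - 1)
    return stat, pval

# Hypothetical correlations from k = 3 independent bivariate normal samples.
stat, pval = fisher_z_equality_test([0.42, 0.51, 0.38], [50, 60, 45])
print(stat, pval)
```

The common correlation implied by z_bar (via tanh) is the kind of noniterative pooled estimate that the recommended C(α) statistic is built around.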

19.
When the distribution of one of the characteristics of a process is non-normal, methods based on empirical percentiles suggest the use of several process capability indices (PCIs) similar to the usual Cp, Cpk, Cpm, and Cpmk indices. Most of these PCIs, however, apply only to the case of symmetric tolerances. To take into account the asymmetry of the tolerances as well as the asymmetry of the process distribution, new PCIs which improve upon the previous ones are proposed. Finally, to validate the proposed method, we apply it to a real production case.
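The percentile-based indices alluded to above follow Clements' idea: replace the 6σ spread in Cp with the 0.135th-to-99.865th percentile range of the (possibly skewed) empirical distribution, so that the index retains its normal-theory interpretation. A minimal sketch of that Cp analogue (the data and specification limits are illustrative assumptions; the paper's new asymmetric-tolerance indices refine this further):

```python
import numpy as np

def percentile_cp(data, lsl, usl):
    """Clements-style Cp analogue: the 6-sigma spread is replaced by the
    empirical 99.865th - 0.135th percentile range of the process data."""
    p_hi, p_lo = np.percentile(data, [99.865, 0.135])
    return (usl - lsl) / (p_hi - p_lo)

rng = np.random.default_rng(2)
data = rng.gamma(shape=4.0, scale=1.0, size=5000)   # hypothetical skewed process
cp = percentile_cp(data, lsl=0.0, usl=15.0)
print(cp)
```

For a normal process the percentile range equals 6σ and the index reduces to the ordinary Cp, which is what makes this family a natural generalization.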

20.
In this article, we implement the regression method for estimating (d1, d2) of the FISSAR(1, 1) model; it is also possible to estimate d1 and d2 by Whittle's method. We compute the estimated bias, standard error, and root mean square error in a simulation study, and compare the regression method of estimating d1 and d2 with Whittle's method. In this simulation study, the regression method was found to be better than Whittle's estimator, in the sense that it had smaller root mean square error (RMSE) values.
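Regression methods for long-memory parameters are usually log-periodogram (GPH-style) regressions: regress the log periodogram at low frequencies on log(4 sin²(ω/2)) and read d off the slope. The FISSAR(1, 1) model is spatial (two-dimensional), but the idea is easiest to see in the univariate analogue sketched below (the series, bandwidth choice, and seed are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def gph_estimate(x, frac=0.5):
    """Log-periodogram (GPH-style) regression estimate of the long-memory
    parameter d for a univariate series, using the lowest n**frac frequencies."""
    n = len(x)
    m = int(n ** frac)                               # bandwidth: low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - np.mean(x))
    periodogram = np.abs(fft[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope                                    # d is minus the slope

rng = np.random.default_rng(3)
x = rng.normal(size=2048)                            # white noise: true d = 0
d_hat = gph_estimate(x)
print(d_hat)
```

In the spatial FISSAR setting the same regression is run on the two-dimensional periodogram along each frequency axis to obtain d1 and d2 separately.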
