Similar Documents
20 similar documents found.
1.
The Weibull distribution is one of the most popular distributions for lifetime modeling. However, there has not been much research on control charts for the Weibull distribution. The Shewhart control chart is known to be inefficient at detecting small shifts in a process, whereas the exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) charts are able to detect small changes. To enhance the performance of a control chart for the Weibull distribution, we introduce a new control chart based on a hybrid EWMA-CUSUM statistic, called the HEWMA-CUSUM chart. The performance of the proposed chart is compared with that of the existing chart in terms of the average run length (ARL), and the proposed chart is found to be more sensitive. A simulation study is provided for illustration, and a real data set is analysed to demonstrate practical use.
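The abstract does not give the exact form of the hybrid statistic, so the following Python sketch is only illustrative: it assumes the hybrid chart applies EWMA smoothing to a one-sided CUSUM of Weibull observations, and the reference value `k`, smoothing constant `lam`, and control limit `h` are arbitrary demonstration choices, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(1)

def hewma_cusum(x, target, k=0.2, lam=0.2, h=1.5):
    """Illustrative hybrid chart: EWMA smoothing applied to a one-sided
    CUSUM of deviations from `target`; signals when the smoothed statistic
    exceeds `h`. Parameter choices are for demonstration only."""
    c, z = 0.0, 0.0
    for t, xt in enumerate(x, start=1):
        c = max(0.0, c + (xt - target) - k)   # upper CUSUM accumulation
        z = lam * c + (1.0 - lam) * z         # EWMA smoothing of the CUSUM
        if z > h:
            return t                          # index of the first signal
    return None                               # no signal raised

# In-control Weibull(shape=1.5, scale=1) data followed by an upward scale shift.
incontrol = rng.weibull(1.5, 100)
shifted = 1.5 * rng.weibull(1.5, 100)
print(hewma_cusum(np.concatenate([incontrol, shifted]), target=np.mean(incontrol)))
```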

2.
Distribution-free control charts have gained momentum in recent years because they are more efficient at detecting a shift when information about the underlying process distribution is lacking. However, a distribution-free control chart for monitoring the process location often requires knowledge of the in-control process median. This is challenging because, in practice, information on the location parameter may not be available in advance, so the parameter must be estimated. In view of this, a time-weighted control chart, labelled the Generally Weighted Moving Average exceedance chart (in short, the GWMA-EX chart), is proposed for detecting a shift in the unknown process location; the chart is based on an exceedance statistic and requires no information about the process distribution. An extensive performance analysis shows that the proposed GWMA-EX control chart is, in many cases, better than its competitors.
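A minimal sketch of an exceedance-based GWMA statistic, assuming the standard GWMA weighting q^((j-1)^alpha) - q^(j^alpha); the design parameters `q`, `alpha`, and `r` are illustrative assumptions rather than the values studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def gwma_exceedance(reference, samples, r, q=0.9, alpha=0.7):
    """Illustrative GWMA-EX statistic: at each time t the exceedance count
    (number of test observations above the r-th order statistic of the
    Phase I reference sample) is combined using generally weighted moving
    average weights q^((j-1)^alpha) - q^(j^alpha)."""
    x_r = np.sort(reference)[r - 1]               # r-th order statistic of the reference sample
    n = samples.shape[1]
    e = (samples > x_r).sum(axis=1)               # exceedance counts E_1, E_2, ...
    m = len(reference)
    e0 = n * (m - r + 1) / (m + 1)                # in-control mean of E_t
    stats = []
    for t in range(1, len(e) + 1):
        j = np.arange(1, t + 1)
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)
        g = np.sum(w * e[t - 1::-1][:t]) + q ** (t ** alpha) * e0
        stats.append(g)
    return np.array(stats)

reference = rng.normal(0, 1, 200)                 # Phase I data; the distribution is unknown in practice
samples = rng.normal(0.5, 1, (30, 5))             # Phase II subgroups with a location shift
print(np.round(gwma_exceedance(reference, samples, r=100), 2))
```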

3.
The exponential family structure of the joint distribution of generalized order statistics is utilized to establish multivariate tests on the model parameters. For simple and composite null hypotheses, the likelihood ratio (LR) test, Wald's test, and Rao's score test are derived and turn out to have simple representations. The asymptotic distribution of the corresponding test statistics under the null hypothesis is stated and, in the case of a simple null hypothesis, the asymptotic optimality of the LR test is addressed. Applications of the tests are presented; in particular, we discuss their use in reliability and in deciding whether a Poisson process is homogeneous. Finally, a power study is performed to measure and compare the quality of the tests for both simple and composite null hypotheses.
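The paper's tests are specific to generalized order statistics and are not reproduced here; as a hedged illustration of the same likelihood-ratio recipe applied to the Poisson-process homogeneity question mentioned in the abstract, the sketch below tests whether exponential inter-arrival times in several segments share a common rate.

```python
import numpy as np
from scipy.stats import chi2

def lr_test_common_rate(segments):
    """Likelihood-ratio test that exponential inter-arrival times in all
    segments share one rate (a piecewise-constant surrogate for testing
    homogeneity of a Poisson process). Returns (statistic, p-value)."""
    segments = [np.asarray(s, dtype=float) for s in segments]
    n_i = np.array([len(s) for s in segments])
    s_i = np.array([s.sum() for s in segments])
    n, s = n_i.sum(), s_i.sum()
    lr = 2.0 * (np.sum(n_i * np.log(n_i / s_i)) - n * np.log(n / s))
    df = len(segments) - 1                        # parameters lost under the null
    return lr, chi2.sf(lr, df)

rng = np.random.default_rng(3)
# Early inter-arrival times are fast, later ones slow: the null should be rejected.
print(lr_test_common_rate([rng.exponential(1.0, 80), rng.exponential(2.0, 80)]))
```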

4.
In this article, a new nonparametric control chart based on a modified (controlled) exponentially weighted moving average (EWMA) statistic is developed to monitor process deviation from the target value. The proposed control chart is evaluated for different values of the design parameters, using the average run length as the performance criterion, under various sample sizes. The proposed chart is compared with the existing nonparametric EWMA sign control chart and is found to be better in terms of run-length characteristics. An empirical example is provided to illustrate the practical implementation of the proposed chart.
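For reference, a sketch of the baseline nonparametric EWMA sign chart that the proposed modified/controlled chart is compared against; the extra "controlling" term of the proposed chart is not specified in the abstract and is not reproduced. `lam` and `L` are illustrative design parameters.

```python
import numpy as np

def ewma_sign_chart(subgroups, target, lam=0.1, L=2.7):
    """Baseline nonparametric EWMA sign chart: the per-subgroup sign
    statistic is smoothed with an EWMA and compared against asymptotic
    binomial-based limits."""
    n = subgroups.shape[1]
    s = (subgroups > target).sum(axis=1)          # sign statistic, Bin(n, 0.5) in control
    z, mu0 = n / 2.0, n / 2.0
    sigma = np.sqrt(lam / (2.0 - lam) * n / 4.0)  # asymptotic EWMA standard deviation
    for t, st in enumerate(s, start=1):
        z = lam * st + (1.0 - lam) * z
        if abs(z - mu0) > L * sigma:
            return t                              # first out-of-control signal
    return None

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(0.0, 1, (50, 10)),   # on-target subgroups
                  rng.normal(0.4, 1, (50, 10))])  # shifted subgroups
print(ewma_sign_chart(data, target=0.0))
```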

5.
In this paper, we investigate the Andrews–Pregibon (AP), COVRATIO, and Cook–Weisberg (CW) statistics for determining observations that are influential on the confidence ellipsoids in a linear regression model with correlated errors and correlated regressors. A real example and a Monte Carlo simulation study are given to assess the effects of the autocorrelation coefficient and the ridge parameter on the AP, COVRATIO, and CW statistics.
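As background, a sketch of the classical COVRATIO diagnostic for ordinary least squares; the paper's versions for correlated errors, correlated regressors, and ridge estimation are not reproduced, and the simulated data below are purely illustrative.

```python
import numpy as np

def covratio(X, y):
    """Classical COVRATIO diagnostics for OLS: the ratio of the determinant
    of the coefficient covariance estimate with case i deleted to that
    computed from all cases. Values far from 1 flag influential points."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    h = np.einsum("ij,jk,ik->i", X, xtx_inv, X)                 # leverages h_ii
    s2_i = ((n - p) * s2 - resid ** 2 / (1 - h)) / (n - p - 1)  # deleted residual variances
    return (s2_i / s2) ** p / (1 - h)

rng = np.random.default_rng(13)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=50)
y[0] += 8.0                                                     # plant one influential outlier
print(covratio(X, y)[:3])
```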

6.
A control procedure is presented in this article that jointly uses two separate control statistics to detect and interpret signals in a multivariate normal process. The procedure detects three situations: (i) a shift in the mean vector without a shift in the covariance matrix; (ii) a shift in process variation (the covariance matrix) without a shift in the mean vector; and (iii) a simultaneous shift in both the mean vector and the covariance matrix resulting from a change in the parameters of some key process variables. It is shown that, following a signal on either of the separate control charts, the values of both signaling statistics can be decomposed into interpretable elements. Viewing the two decompositions together helps one identify the specific components and associated variables that are affected. These components may include individual means or variances of the process variables as well as the correlations between or among variables. An industrial data set is used to illustrate the procedure.
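A hedged sketch of the kind of statistics involved: Hotelling's T-squared for the mean vector and a likelihood-ratio-type statistic for the covariance matrix, with known in-control parameters. The paper's specific joint procedure and its signal decomposition are not reproduced.

```python
import numpy as np
from scipy.stats import chi2

def mean_and_cov_stats(subgroup, mu0, sigma0):
    """Per-subgroup monitoring statistics: Hotelling's T^2 for the mean
    vector and a -2 log likelihood-ratio statistic for Sigma = Sigma0."""
    x = np.asarray(subgroup, dtype=float)
    n, p = x.shape
    xbar = x.mean(axis=0)
    s = np.cov(x, rowvar=False, bias=True)         # MLE of the covariance matrix
    inv0 = np.linalg.inv(sigma0)
    t2 = n * (xbar - mu0) @ inv0 @ (xbar - mu0)    # chi-square_p in control (known parameters)
    m = inv0 @ s
    w = n * (np.trace(m) - np.log(np.linalg.det(m)) - p)  # -2 log LR for the covariance
    return t2, w, chi2.sf(t2, p)

rng = np.random.default_rng(5)
mu0, sigma0 = np.zeros(3), np.eye(3)
print(mean_and_cov_stats(rng.multivariate_normal(mu0, 2.0 * sigma0, size=20), mu0, sigma0))
```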

7.
Likelihood ratio tests for a change in mean in a sequence of independent normal random variables are based on the maximum two-sample t-statistic, where the maximum is taken over all possible changepoints. The maximum t-statistic has the undesirable property that Type I errors are not uniformly distributed across possible changepoints: false positives occur more frequently near the ends of the sequence and less frequently near the middle. In this paper we describe an alternative statistic based on a minimum p-value, where the minimum is taken over all possible changepoints. The p-value at any particular changepoint is based on both the two-sample t-statistic at that changepoint and the probability that the maximum two-sample t-statistic is achieved there. The new statistic has a more uniform distribution of Type I errors across potential changepoints and compares favorably in terms of statistical power, false discovery rate, and the mean squared error of changepoint estimates.
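A sketch of the classical max-t scan that the abstract starts from: the two-sample t-statistic is computed at every candidate changepoint and the maximum is taken. The paper's minimum p-value statistic additionally weights each changepoint by the probability that the maximum occurs there, which is not reproduced here; `min_seg` is an illustrative constraint.

```python
import numpy as np
from scipy.stats import ttest_ind

def changepoint_t_profile(x, min_seg=3):
    """Two-sample t statistic at every candidate changepoint k (split into
    x[:k] and x[k:]); the classical scan estimates the changepoint by the
    location of the maximum |t|."""
    x = np.asarray(x, dtype=float)
    ks = np.arange(min_seg, len(x) - min_seg + 1)
    t = np.array([ttest_ind(x[:k], x[k:], equal_var=True).statistic for k in ks])
    k_hat = ks[np.argmax(np.abs(t))]
    return k_hat, ks, t

rng = np.random.default_rng(6)
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(0.8, 1, 40)])
print(changepoint_t_profile(series)[0])           # estimated changepoint, near 60
```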

8.
We consider the problem of detecting a 'bump' in the intensity of a Poisson process or in a density. We analyze two types of likelihood ratio-based statistics, which allow for exact finite-sample inference and asymptotically optimal detection: the maximum of the penalized square root of log likelihood ratios ('penalized scan') evaluated over a certain sparse set of intervals, and a certain average of log likelihood ratios ('condensed average likelihood ratio'). We show that penalizing the square root of the log likelihood ratio, rather than the log likelihood ratio itself, leads to a simple penalty term that yields optimal power. The penalty derived in this way may prove useful for other problems that involve a Brownian bridge in the limit. The second key tool is an approximating set of intervals that is rich enough to allow optimal detection, yet sparse enough that the validity of the penalization scheme can be justified simply via the union bound. This considerably simplifies the theoretical treatment compared with the usual approach to this type of penalization, which requires establishing an exponential inequality for the variation of the test statistic. Another advantage of the sparse approximating set is that it allows fast computation in nearly linear time. We present a simulation study that illustrates the superior performance of the penalized scan and of the condensed average likelihood ratio compared with the standard scan statistic.

9.
This paper investigates the sufficient statistic for the parameters of vector-valued (multivariate) ARMA models when a finite sample is available. In the simplest case, ARMA(1,1), using the factorization theorem we present a sufficient statistic whose dimension depends on the sample size and is even larger than the sample size. For this case, under some restrictions, we resolve this problem and present a sufficient statistic whose dimension does not depend on the sample size. In the general case, due to the complexity of the problem, we use modified versions of the likelihood function to find an approximate sufficient statistic in terms of the periodogram. The dimension of this sufficient statistic depends on the sample size, but it is much lower than the sample size.
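Since the approximate sufficient statistic is expressed in terms of the periodogram, a minimal sketch of the periodogram computation is shown below; the construction of the sufficient statistic itself is not reproduced.

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates at the Fourier frequencies for a univariate
    series; quantities of this type are the building blocks referred to in
    the abstract."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    freqs = np.fft.rfftfreq(n)                    # frequencies in cycles per observation
    i_vals = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n
    return freqs, i_vals

rng = np.random.default_rng(12)
print(periodogram(rng.normal(size=64))[1][:5])
```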

10.
Let X1, …, Xm be a random sample of m failure times under normal conditions with underlying distribution F(x), and Y1, …, Yn a random sample of n failure times under accelerated conditions with underlying distribution G(x) = 1 − [1 − F(x)]^θ, where θ is the unknown parameter under study. Define indicator statistics Uij that equal 1 when the corresponding ordering of an X- and a Y-observation holds and 0 otherwise. The joint distribution of the Uij does not involve the distribution F and can therefore be used to estimate the acceleration parameter θ. A second approach to estimating θ is to use the ranks of the Y-observations in the combined X- and Y-samples. In this paper we establish that the ranks of the Y-observations in the pooled sample form a sufficient statistic for the information contained in the Uij about the parameter θ, and that no unbiased estimator of θ exists. We also construct several estimators and confidence intervals for the parameter θ.

11.
This paper provides the theoretical explanation and Monte Carlo evidence for using a modified version of the Durbin–Watson (DW) statistic to test an I(1) process against I(d) alternatives, that is, integrated processes of order d, where d is a fractional number. We provide the exact order of magnitude of the modified DW test when the data-generating process is an I(d) process with d ∈ (0, 1.5). Moreover, the consistency of the modified DW statistic as a unit root test against I(d) alternatives with d ∈ (0, 1) ∪ (1, 1.5) is proved. In addition to the theoretical analysis, Monte Carlo experiments show that the modified DW statistic performs well enough to be used as a unit root test against I(d) alternatives.
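A sketch of the classical Durbin-Watson statistic computed from deviations about the mean; the fractional-integration modification studied in the paper is not specified in the abstract and is not reproduced. The example contrasts white noise (DW near 2) with an I(1) random walk (DW near 0).

```python
import numpy as np

def durbin_watson(x):
    """Classical Durbin-Watson statistic from the residuals of a regression
    of x on a constant, i.e. deviations from the sample mean."""
    e = np.asarray(x, dtype=float) - np.mean(x)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(7)
print(durbin_watson(rng.normal(size=500)))             # white noise: close to 2
print(durbin_watson(np.cumsum(rng.normal(size=500))))  # I(1) random walk: close to 0
```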

12.
A control procedure is presented for monitoring changes in variation for a multivariate normal process in a Phase II operation where the subgroup size, m, is less than p, the number of variates. The methodology is based on a form of Wilks' statistic, which can be expressed as a function of the ratio of the determinants of two separate estimates of the covariance matrix: one based on the historical data set from Phase I and the other based on an augmented data set that includes new data obtained in Phase II. The proposed statistic is shown to be distributed as a product of independent beta random variables and can be approximated using either a chi-square or an F-distribution. An ARL study of the statistic is presented for a range of conditions on the population covariance matrix, considering cases where a p-variate process is monitored using samples of m observations per subgroup with m < p. Data from an industrial multivariate process are used to illustrate the proposed technique.
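A hedged sketch of the ratio-of-determinants idea described in the abstract, comparing a covariance estimate from the Phase I history with one from the history augmented by a new subgroup of size m < p; the exact statistic, its scaling, and the beta/chi-square/F approximations in the paper are not reproduced.

```python
import numpy as np

def wilks_type_ratio(history, new_subgroup):
    """Illustrative ratio of determinants: covariance estimated from the
    Phase I history alone versus from the history augmented with a new
    Phase II subgroup of size m < p."""
    hist = np.asarray(history, dtype=float)
    aug = np.vstack([hist, np.asarray(new_subgroup, dtype=float)])
    s_hist = np.cov(hist, rowvar=False)
    s_aug = np.cov(aug, rowvar=False)
    return np.linalg.det(s_aug) / np.linalg.det(s_hist)

rng = np.random.default_rng(8)
p, m = 10, 4                                      # more variates than new observations
history = rng.multivariate_normal(np.zeros(p), np.eye(p), size=200)
stable = rng.multivariate_normal(np.zeros(p), np.eye(p), size=m)
inflated = rng.multivariate_normal(np.zeros(p), 3.0 * np.eye(p), size=m)
print(wilks_type_ratio(history, stable), wilks_type_ratio(history, inflated))
```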

13.
We consider a novel univariate nonparametric cumulative sum (CUSUM) control chart for detecting small shifts in the mean of a process when the nominal value of the mean is unknown but some historical data are available. The chart is built on the Mann–Whitney statistic together with a change-point model, so no assumption about the underlying distribution of the process is required. Performance comparisons based on simulations show that the proposed control chart is slightly more effective than some related nonparametric control charts.
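A sketch of the standardized Mann-Whitney change-point statistic underlying such charts: the statistic is computed for every candidate split of the observed sequence and the maximum is monitored. The actual control limit is design-dependent and not shown; `min_seg` is an illustrative constraint.

```python
import numpy as np

def max_mann_whitney(x, min_seg=5):
    """Maximum standardised Mann-Whitney statistic over all candidate change
    points in the observed sequence; returns (max statistic, split index)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best = (0.0, None)
    for k in range(min_seg, n - min_seg + 1):
        u = np.sum(x[:k, None] > x[None, k:])     # Mann-Whitney count for a split at k
        mean = k * (n - k) / 2.0
        var = k * (n - k) * (n + 1) / 12.0
        z = abs(u - mean) / np.sqrt(var)
        if z > best[0]:
            best = (z, k)
    return best

rng = np.random.default_rng(9)
seq = np.concatenate([rng.standard_t(3, 70), rng.standard_t(3, 50) + 0.7])
print(max_mann_whitney(seq))
```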

14.
Following Viraswami and Reid (1996), higher-order results under model misspecification are obtained for the likelihood-ratio statistic and the adjusted likelihood-ratio statistic, for the case of a scalar parameter. An improved version of the adjusted likelihood-ratio statistic is suggested.

15.
In applications of generalized order statistics, for instance in the reliability analysis of engineering systems, prior knowledge about the order of the underlying model parameters is often available and may therefore be incorporated in inferential procedures. Taking this information into account, we establish the likelihood ratio test, Rao's score test, and Wald's test for test problems arising from the question of appropriate model selection for ordered data, where simple order restrictions are placed on the parameters under the alternative hypothesis. For simple and composite null hypotheses, explicit representations of the corresponding test statistics are obtained, along with some of their properties and asymptotic distributions. A simulation study compares the order-restricted tests in terms of power. In the set-up considered, the adapted tests significantly improve the power of the associated omnibus versions for small sample sizes, especially when testing a composite null hypothesis.

16.
Ranked-set sampling is an alternative to random sampling for settings in which measurements are difficult or costly. It uses information gained without measurement to structure the eventual measured sample, and this additional information yields improved properties for ranked-set sample procedures relative to their simple random sample counterparts. We review the available nonparametric procedures for data from ranked-set samples. Estimation of the distribution function was the first nonparametric setting to which ranked-set sampling methodology was applied. Since the first paper on the ranked-set sample empirical distribution function, the two-sample location setting, the sign test, and the signed-rank test have all been examined for ranked-set samples, and estimation of the distribution function has been considered in a more general setting. We discuss the similarities and differences in the properties of the ranked-set sample procedures for the various settings.

17.
This article presents a synthetic control chart for detecting shifts in the process median. The synthetic chart combines a sign chart with a conforming run-length chart. The performance evaluation indicates that the synthetic chart has higher power for detecting shifts in the process median than Shewhart charts based on the sign statistic, as well as the classical Shewhart X-bar chart, for various symmetric distributions. The improvement is significant for moderate to large shifts in the median. Robustness studies indicate that the proposed synthetic control chart is robust against contamination by outliers.
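A minimal sketch of the synthetic idea, assuming the sign sub-chart flags a subgroup as nonconforming when its sign statistic is extreme and a signal is raised when the conforming run length between nonconforming subgroups is at most L; the limits `a` and `L` below are illustrative choices, not the paper's optimised design.

```python
import numpy as np

def synthetic_sign_chart(subgroups, target, a=8, L=4):
    """Illustrative synthetic chart: a subgroup is 'nonconforming' when its
    sign statistic S_t = #(X > target) falls outside [n - a, a]; a signal is
    raised when the conforming run length between two nonconforming
    subgroups is at most L."""
    n = subgroups.shape[1]
    last_nc = 0                                    # index of the last nonconforming subgroup
    for t, sub in enumerate(subgroups, start=1):
        s = np.sum(sub > target)
        if s > a or s < n - a:                     # sign sub-chart flags the subgroup
            if t - last_nc <= L:                   # conforming run length too short
                return t
            last_nc = t
    return None

rng = np.random.default_rng(10)
data = np.vstack([rng.normal(0.0, 1, (100, 10)),
                  rng.normal(0.7, 1, (100, 10))])  # median shift after subgroup 100
print(synthetic_sign_chart(data, target=0.0))
```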

18.
Ranked set sampling is a procedure that may be used to improve the precision of the estimator of the mean. It is useful when the variable of interest is much more difficult to measure than to order. Even when ordering is difficult, an easily ranked concomitant variable, if available, may be used to "judgment order" the original variable. The increase in the precision of the estimator depends on the correlation between the two variables.
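A small simulation sketch of judgment ordering via a concomitant variable: units are ranked by a correlated concomitant (the correlation `rho` is an assumption), only one unit per set is actually measured, and the variance of the ranked-set sample mean is compared with that of a simple random sample mean of the same size.

```python
import numpy as np

rng = np.random.default_rng(11)

def rss_mean(k, rho=0.8):
    """One ranked-set sample mean of size k: each set of k units is
    'judgment ordered' by a correlated concomitant and only the unit with
    the required judgment rank is measured."""
    measured = []
    for i in range(k):
        y = rng.normal(size=k)                                     # variable of interest
        x = rho * y + np.sqrt(1 - rho ** 2) * rng.normal(size=k)   # concomitant used for ranking
        order = np.argsort(x)                                      # ranking costs no measurements
        measured.append(y[order[i]])                               # measure the i-th judgment order statistic
    return np.mean(measured)

k, reps = 5, 20000
rss = np.array([rss_mean(k) for _ in range(reps)])
srs = rng.normal(size=(reps, k)).mean(axis=1)
print(rss.var(), srs.var())   # the RSS variance is smaller; the gain shrinks as rho decreases
```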

19.
The Rayleigh distribution has been used to model right-skewed data. Rayleigh [On the resultant of a large number of vibrations of the same pitch and of arbitrary phase. Philos Mag. 1880;10:73–78] derived it from the amplitude of sound resulting from many independent sources. In this paper, a new goodness-of-fit test for the Rayleigh distribution is proposed. The test is based on the empirical likelihood ratio methodology of Vexler and Gurevich [Empirical likelihood ratios applied to goodness-of-fit tests based on sample entropy. Comput Stat Data Anal. 2010;54:531–545]. Consistency of the proposed test is derived, and it is shown that the distribution of the test statistic does not depend on the scale parameter. Critical values of the test statistic are computed through a simulation study, and a Monte Carlo study of the power of the proposed test is carried out under various alternatives. The performance of the test is compared with some well-known competing tests. Finally, an illustrative example is presented and analysed.

20.
As the number of random variables in categorical data increases, the number of log-linear models that can be fitted to the data grows rapidly, and various model selection methods have therefore been developed. However, the models chosen by different selection criteria often do not coincide. In this paper, we propose a comparison method for testing final models that are non-nested. The statistic of Cox (1961, 1962) is applied to log-linear models for testing non-nested models, and the Kullback-Leibler measure of closeness (Pesaran 1987) is explored. For log-linear models, pseudo-estimators of the expectation and variance of Cox's statistic are derived and shown to be consistent.
