Found 20 similar documents (search time: 46 ms)
1.
2.
Correlation-Type Goodness of Fit Test for Extreme Value Distribution Based on Simultaneous Closeness
In reliability studies, one typically would assume a lifetime distribution for the units under study and then carry out the required analysis. One popular choice for the lifetime distribution is the family of two-parameter Weibull distributions (with scale and shape parameters) which, through a logarithmic transformation, can be transformed to the family of two-parameter extreme value distributions (with location and scale parameters). In carrying out a parametric analysis of this type, it is highly desirable to be able to test the validity of such a model assumption. A basic tool that is useful for this purpose is a quantile–quantile (QQ) plot, but in its use, the issue of the choice of plotting position arises. Here, by adopting the optimal plotting points based on Pitman closeness criterion proposed recently by Balakrishnan et al. (2010b), and referred to as simultaneous closeness probability (SCP) plotting points, we propose a correlation-type goodness of fit test for the extreme value distribution. We compute the SCP plotting points for various sample sizes and use them to determine the mean, standard deviation and critical values for the proposed correlation-type test statistic. Using these critical values, we carry out a power study, similar to the one carried out by Kinnison (1989), through which we demonstrate that the use of SCP plotting points results in better power than with the use of mean ranks as plotting points and nearly the same power as with the use of median ranks. We then demonstrate the use of the SCP plotting points and the associated correlation-type test for Weibull analysis with an illustrative example. Finally, for the sake of comparison, we also adapt two statistics proposed by Gan and Koehler (1990), in the context of probability–probability (PP) plots, based on SCP plotting points and compare their performance to those based on mean ranks. 
The empirical study also reveals that the tests based on the QQ plot have better power than those based on the PP plot.
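As a rough illustration of the correlation-type idea (not the paper's SCP machinery: the SCP plotting points are tabulated, not closed-form, so median ranks stand in for them here), the statistic is just the sample correlation between the ordered log-data and extreme value quantiles at the chosen plotting positions:

```python
import numpy as np

def correlation_gof_statistic(x, plot_points=None):
    """Correlation-type goodness-of-fit statistic for the (smallest)
    extreme value distribution, as read off a QQ plot.

    Median ranks (Bernard's approximation) are used as stand-in
    plotting positions; the paper's SCP points are tabulated values.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if plot_points is None:
        plot_points = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median ranks
    # Quantiles of the standard smallest-extreme-value distribution.
    q = np.log(-np.log(1.0 - plot_points))
    # Sample correlation between ordered data and theoretical quantiles;
    # values near 1 support the extreme value model.
    return np.corrcoef(x, q)[0, 1]

# Log-transformed Weibull data follow an extreme value distribution.
rng = np.random.default_rng(0)
weibull_sample = 100.0 * rng.weibull(2.0, size=50)
r = correlation_gof_statistic(np.log(weibull_sample))
```

Small observed values of the correlation (relative to tabulated critical values) indicate departure from the assumed model.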
3.
Communications in Statistics - Theory and Methods, 2013, 42(2): 371-380
Palmer and Broemeling [1] compare Bayes and maximum likelihood estimates of the intraclass correlation (ICC). The prior information in their derivation of the Bayes estimator is placed on the variance components instead of on the ICC itself. This paper derives a Bayes estimator of the ICC with the prior placed directly on the ICC. Bayes estimates based on three different priors are then compared with the method-of-moments estimate.
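The method-of-moments baseline the Bayes estimates are compared against is the classical ANOVA estimator; a minimal sketch for a balanced one-way random-effects layout (not the paper's Bayes procedure):

```python
import numpy as np

def icc_moments(data):
    """ANOVA (method-of-moments) estimate of the intraclass correlation
    for a balanced one-way random-effects layout: `data` is a
    (k groups) x (n observations per group) array.
    """
    data = np.asarray(data, dtype=float)
    k, n = data.shape
    group_means = data.mean(axis=1)
    # Between- and within-group mean squares.
    msb = n * np.sum((group_means - data.mean()) ** 2) / (k - 1)
    msw = np.sum((data - group_means[:, None]) ** 2) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# 30 groups of 10; between-group sd 2, within-group sd 1, so the
# true ICC is 4 / (4 + 1) = 0.8.
rng = np.random.default_rng(1)
effects = rng.normal(0.0, 2.0, size=30)
data = effects[:, None] + rng.normal(0.0, 1.0, size=(30, 10))
icc_hat = icc_moments(data)
```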
4.
Gadre and Rattihalli [5] introduced the Modified Group Runs (MGR) control chart to identify increases in the fraction non-conforming and to detect shifts in the process mean. The MGR chart reduces the out-of-control average time-to-signal (ATS) compared with most well-known control charts. In this article, we develop the Side Sensitive Modified Group Runs (SSMGR) chart to detect shifts in the process mean. With the help of numerical examples, it is illustrated that the SSMGR chart performs better than Shewhart's X¯ chart, the synthetic chart [12], the Group Runs chart [4], the Side Sensitive Group Runs chart [6], as well as the MGR chart [5]. In some situations it is also superior to the cumulative sum chart [9] and the exponentially weighted moving average chart [10]. In the steady state, too, its performance is better than that of the above charts.
5.
Raja Rao et al. (1993) introduced the bivariate "setting the clock back to zero" property. A new variant of this property is introduced that is appropriate for analysing a broader range of practical situations. Some distributions possessing the proposed property are presented. Applications of the property to simplifying the computation of the bivariate mean residual life function and the bivariate percentile residual life function are studied. The relation between the proposed property and the one studied by Raja Rao and Talwalker (1990), as well as the bivariate lack of memory property, is examined.
6.
Qian Chen, Communications in Statistics - Simulation and Computation, 2013, 42(4): 789-804
We consider the relative merits of various saddlepoint approximations for the cumulative distribution function (cdf) of a statistic with a possibly non-normal limit distribution. In addition to the usual Lugannani-Rice approximation, we also consider approximations based on higher-order expansions, including the case where the base distribution for the approximation is taken to be non-normal. This extends earlier work by Wood et al. (1993). These approximations are applied to the distribution of the Anderson-Darling test statistic. While these generalizations perform well in the middle of the distribution's support, a conventional normal-based Lugannani-Rice approximation (Giles, 2001) is superior for conventional critical regions.
7.
In this article, we improve upon the Singh and Grewal (2013) and Hussain et al. (2016) techniques by introducing a new two-stage randomized response process. Using the proposed technique, we achieve better efficiency and greater protection of respondent privacy than the Kuk (1990), Singh and Grewal (2013), and Hussain et al. (2016) models. The relative efficiency and respondent protection of the proposed two-stage randomization device are investigated through a simulation study, and the situations where the proposed estimator performs better than its competitors are reported. The SAS code used to investigate the performance of the proposed strategy is also provided.
8.
This article compares three value-at-risk (VaR) approximation methods suggested in the literature: Cornish and Fisher (1937), Sillitto (1969), and Liu (2010). Simulation results are obtained for three families of distributions: Student-t, skewed normal, and skewed t. We recommend the Sillitto approximation as the best method for evaluating the VaR when the financial return has an unknown, skewed, and heavy-tailed distribution.
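Of the three methods, the Cornish-Fisher expansion is the most familiar; a minimal sketch (the Sillitto and Liu approximations are not reproduced here, and the hard-coded normal quantiles keep the example dependency-free):

```python
import numpy as np

# Standard normal quantiles for the tail levels used below.
Z_ALPHA = {0.01: -2.3263, 0.05: -1.6449}

def cornish_fisher_var(returns, alpha=0.01):
    """Cornish-Fisher (1937) VaR approximation: the normal quantile is
    adjusted for sample skewness and excess kurtosis."""
    r = np.asarray(returns, dtype=float)
    mu, sigma = r.mean(), r.std(ddof=1)
    zc = (r - mu) / r.std()
    s = np.mean(zc ** 3)           # sample skewness
    k = np.mean(zc ** 4) - 3.0     # sample excess kurtosis
    z = Z_ALPHA[alpha]
    z_cf = (z + (z ** 2 - 1) * s / 6
              + (z ** 3 - 3 * z) * k / 24
              - (2 * z ** 3 - 5 * z) * s ** 2 / 36)
    return -(mu + sigma * z_cf)    # VaR reported as a positive loss

rng = np.random.default_rng(2)
normal_returns = rng.normal(0.0, 0.01, size=10_000)
var_99 = cornish_fisher_var(normal_returns, alpha=0.01)
```

For symmetric thin-tailed data the correction terms are near zero and the result stays close to the plain normal VaR; skewed, heavy-tailed returns pull the quantile outward, which is precisely the regime where the article finds the Sillitto method preferable.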
9.
In this work, we propose an adaptive multivariate cumulative sum (CUSUM) statistical process control chart for signaling a range of location shifts. The method is based on the multivariate CUSUM control chart proposed by Pignatiello and Runger (1990), but we adopt an adaptive approach similar to that discussed by Dai et al. (2011), which was based on a different CUSUM method introduced by Crosier (1988). The reference value in the proposed procedure is changed adaptively at each run, with the current mean shift estimated by an exponentially weighted moving average (EWMA) statistic. By specifying the minimal magnitude of the mean shift, the proposed control chart achieves good overall performance for detecting a range of shifts rather than a single value. We compare our adaptive multivariate CUSUM method with that of Dai et al. (2011) and with the non-adaptive versions of the two methods, by evaluating both the steady-state and zero-state average run length (ARL) values. The detection efficiency of our method shows improvements over the comparative methods when the location shift is unknown but falls within an expected range.
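A sketch of the non-adaptive Crosier (1988) multivariate CUSUM that these charts build on; the values k = 0.5 and h = 5.5 are illustrative choices, and in the adaptive variants the reference value k is instead updated each run from an EWMA estimate of the shift:

```python
import numpy as np

def crosier_mcusum(X, mu0, Sigma, k=0.5, h=5.5):
    """Crosier's (1988) multivariate CUSUM; returns the index of the
    first out-of-control signal, or None if the chart never signals."""
    Sinv = np.linalg.inv(Sigma)
    s = np.zeros(len(mu0))
    for t, x in enumerate(np.asarray(X, dtype=float)):
        d = s + x - mu0
        c = float(np.sqrt(d @ Sinv @ d))
        # Shrink the cumulative vector toward zero by the reference value.
        s = np.zeros_like(s) if c <= k else d * (1.0 - k / c)
        if float(np.sqrt(s @ Sinv @ s)) > h:
            return t
    return None

rng = np.random.default_rng(3)
mu0, Sigma = np.zeros(2), np.eye(2)
# 100 in-control observations followed by a (1, 1) mean shift.
X = np.vstack([rng.normal(size=(100, 2)),
               rng.normal(size=(200, 2)) + np.array([1.0, 1.0])])
signal_at = crosier_mcusum(X, mu0, Sigma)
```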
10.
11.
Jigao Yan, Communications in Statistics - Theory and Methods, 2013, 42(20): 5074-5098
In this paper, the complete convergence of maximal weighted sums of extended negatively dependent (END, for short) random variables is investigated. Some sufficient conditions for complete convergence and some applications to a nonparametric model are provided. The results obtained in the paper generalize and improve the corresponding ones of Wang et al. (2014b) and Shen, Xue, and Wang (2017).
12.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3198-3210
The randomized response (RR) technique with two decks of cards proposed by Odumade and Singh (2009) can always be made more efficient than the RR techniques proposed by Warner (1965), Mangat and Singh (1990), and Mangat (1994) by adjusting the proportion of cards in the decks. The method of Odumade and Singh (2009) is limited to simple random sampling with replacement (SRSWR) only. In this article, a generalization of the Odumade and Singh strategy is provided for complex survey designs and a wider class of estimators. The results of Odumade and Singh (2009) can be derived from the proposed method as a special case.
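For readers new to randomized response, the baseline Warner (1965) design that the two-deck method improves upon can be sketched in a few lines: each respondent answers the sensitive statement with probability p and its negation with probability 1 - p (p != 0.5), and the prevalence is recovered by unmixing the "yes" proportion:

```python
import numpy as np

def warner_estimate(yes_count, n, p):
    """Warner's (1965) randomized response estimator of the prevalence
    pi of a sensitive attribute, from the observed "yes" count."""
    lam = yes_count / n                                  # "yes" proportion
    pi_hat = (lam - (1.0 - p)) / (2.0 * p - 1.0)
    var_hat = lam * (1.0 - lam) / ((2.0 * p - 1.0) ** 2 * n)
    return pi_hat, var_hat

# Simulate: true prevalence 0.30, deck probability p = 0.7, n = 2000.
rng = np.random.default_rng(4)
pi_true, p, n = 0.30, 0.7, 2000
direct_card = rng.random(n) < p          # which statement was drawn
member = rng.random(n) < pi_true
yes_count = int(np.sum(np.where(direct_card, member, ~member)))
pi_hat, var_hat = warner_estimate(yes_count, n, p)
```

The variance inflation relative to a direct question (the second term of Warner's variance) is the price paid for privacy; the two-deck designs trade this off more efficiently.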
13.
Communications in Statistics - Theory and Methods, 2013, 42(8): 1631-1646
In this paper we develop a Bayesian analysis for the nonlinear regression model with errors that follow a continuous autoregressive process. In this way, unequally spaced observations do not present a problem in the analysis. We employ the Gibbs sampler (Gelfand and Smith, 1990) as the foundation for making Bayesian inferences. We illustrate these Bayesian inferences with an analysis of a real data set. Using the same data, we contrast the Bayesian approach with a generalized least squares technique.
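A toy illustration of the Gelfand-Smith Gibbs machinery (not the paper's sampler, which targets a nonlinear regression with continuous autoregressive errors): alternately drawing from the full conditionals of a standard bivariate normal recovers the joint distribution:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20_000, burn_in=2_000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho, via the full conditionals x | y ~ N(rho*y, 1 - rho^2) and
    y | x ~ N(rho*x, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw x from its full conditional
        y = rng.normal(rho * x, sd)   # then y given the new x
        draws[t] = (x, y)
    return draws[burn_in:]            # discard burn-in draws

draws = gibbs_bivariate_normal(rho=0.8)
sample_corr = np.corrcoef(draws[:, 0], draws[:, 1])[0, 1]
```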
14.
In this article, we propose a nonparametric method to test for symmetry in bivariate data. By using the extension of Fisher's exact treatment for 2 × 2 contingency tables proposed by Freeman and Halton (1951), we can test the hypothesis of equal distribution for two samples of integer-valued variables. Then, by counting the number of observations belonging to each cell of a symmetric, appropriately built grid, we can produce the two samples of integers required to use this test for equal distribution. The resulting test for symmetry is potentially extendible to higher dimensions. A simulation study is performed to compare with some known tests (Bowker, 1948; Hollander, 1971; and the improvement of the latter given in Krampe and Kuhnt, 2007). Our proposal represents a competitive option as a test for symmetry.
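Of the competitors in the simulation study, Bowker's (1948) test is the simplest grid-count symmetry test and shows the flavor of comparing mirrored cell counts:

```python
import numpy as np

def bowker_statistic(table):
    """Bowker's (1948) chi-square statistic for symmetry of a k x k
    contingency table:
    B = sum_{i<j} (n_ij - n_ji)^2 / (n_ij + n_ji),
    asymptotically chi-square with (at most) k(k-1)/2 degrees of
    freedom under the symmetry hypothesis."""
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    stat, df = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            denom = t[i, j] + t[j, i]
            if denom > 0:             # empty symmetric pairs drop a df
                stat += (t[i, j] - t[j, i]) ** 2 / denom
                df += 1
    return stat, df

# Mirror-equal off-diagonal counts form an exactly symmetric table,
# so the statistic is 0 with 3 degrees of freedom.
table = [[10, 4, 2],
         [4, 12, 5],
         [2, 5, 8]]
stat, df = bowker_statistic(table)
```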
15.
Hidetoshi Murakami, Communications in Statistics - Simulation and Computation, 2013, 42(10): 2214-2219
The approximation of the distribution function of a test statistic is extremely important in statistics. The standard and higher-order saddlepoint approximations are considered in the tails of the limiting distribution of the modified Anderson-Darling test. The saddlepoint approximations are compared with the approximation of Sinclair et al. (1990) for the upper tail area. An empirical function is derived to estimate the critical values of a saddlepoint approximation.
16.
For the first time, we provide a matrix formula for second-order covariances of maximum likelihood estimates in heteroskedastic generalized linear models, thus generalizing the results of Cordeiro (2004) and Cordeiro et al. (2006) related to the generalized linear models with known and unknown dispersion parameter, respectively. The covariance matrix formula does not involve cumulants of log-likelihood derivatives and can be easily obtained using simple matrix operations. We apply our main result to a simple model. Some simulations show that the second-order covariances can be quite pronounced in small to moderate samples. The usual covariances of the maximum likelihood estimates can be corrected by these second-order covariances.
17.
This article presents results concerning the performance of both single equation and system panel cointegration tests and estimators. The study considers the tests developed in Pedroni (1999, 2004), Westerlund (2005), Larsson et al. (2001), and Breitung (2005) and the estimators developed in Phillips and Moon (1999), Pedroni (2000), Kao and Chiang (2000), Mark and Sul (2003), Pedroni (2001), and Breitung (2005). We study the impact of stable autoregressive roots approaching the unit circle, of I(2) components, of short-run cross-sectional correlation and of cross-unit cointegration on the performance of the tests and estimators. The data are simulated from three-dimensional individual specific VAR systems with cointegrating ranks varying from zero to two for fourteen different panel dimensions. The usual specifications of deterministic components are considered.
18.
In this article, we directly introduce the continuous version of the general discrete triangular distributions (Kokonendji and Zocchi, 2010). It is bounded and, in general, unimodal with a peak. It thus contains a very useful class of two-sided power distributions (van Dorp and Kotz, 2002a, b, 2003). Moments, particular cases, limit distributions, and relations between parameters are straightforwardly derived.
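The simplest member of this continuous family is the two-sided power density, which for shape n = 2 recovers the classical triangular distribution; a minimal sketch of the van Dorp and Kotz (2002) form on [0, 1]:

```python
import numpy as np

def two_sided_power_pdf(x, m, n):
    """Density of the two-sided power distribution on [0, 1] with mode
    m and shape n: rising power branch on [0, m], falling power branch
    on [m, 1]; n = 2 gives the triangular distribution."""
    x = np.asarray(x, dtype=float)
    left = n * (x / m) ** (n - 1)                      # 0 <= x <= m
    right = n * ((1.0 - x) / (1.0 - m)) ** (n - 1)     # m <= x <= 1
    return np.where((x < 0) | (x > 1), 0.0,
                    np.where(x <= m, left, right))

# For n = 2 the peak height at the mode is n = 2, matching a triangle
# with base 1 and unit area; a Riemann sum confirms the density
# integrates to 1.
pdf_at_mode = float(two_sided_power_pdf(0.3, m=0.3, n=2))
grid = np.linspace(0.0, 1.0, 100_001)
area = float(np.sum(two_sided_power_pdf(grid, m=0.3, n=2))
             * (grid[1] - grid[0]))
```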
19.
Hall et al. (2007) propose a method for moment selection based on an information criterion that is a function of the entropy of the limiting distribution of the Generalized Method of Moments (GMM) estimator. They establish the consistency of the method subject to certain conditions, including the identification of the parameter vector by at least one of the moment conditions being considered. In this article, we examine the limiting behavior of this moment selection method when the parameter vector is weakly identified by all the moment conditions being considered. It is shown that the selected moment condition is random and hence not consistent in any meaningful sense. As a result, we propose a two-step procedure for moment selection in which identification is first tested using a statistic proposed by Stock and Yogo (2003), and only if this statistic indicates identification does the researcher proceed to the second step, in which the aforementioned information criterion is used to select moments. The properties of this two-step procedure are contrasted with those of strategies based on either using all available moments or using the information criterion without the identification pre-test. The performances of these strategies are compared via an evaluation of the finite-sample behavior of various methods for inference about the parameter vector. The inference methods considered are based on the Wald statistic, Anderson and Rubin's (1949) statistic, Kleibergen's (2002) K statistic, and combinations thereof in which the choice is based on the outcome of the test for weak identification.
20.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic: the introduction of a fudge factor aims at deflating large values of the test statistic caused by small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics, and we use simulation studies to investigate the power and FDR control of the considered methods.
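The role of the fudge factor is easy to see in code: it is added to the per-gene standard error in the denominator of the t-type statistic, shrinking statistics that are large only because the standard error is tiny. A minimal sketch (s0 = 0.5 is an arbitrary illustrative value; choosing s0 well is exactly what the article's simulations compare):

```python
import numpy as np

def sam_statistic(group1, group2, s0=0.0):
    """SAM-style modified t statistic per gene (Tusher et al., 2001):
    d_i = (mean1_i - mean2_i) / (s_i + s0), where s_i is the usual
    pooled standard error and s0 is the fudge factor."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = g1.shape[1], g2.shape[1]
    diff = g1.mean(axis=1) - g2.mean(axis=1)
    # Pooled sums of squared deviations across both groups.
    ss = ((g1 - g1.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    ss += ((g2 - g2.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    s = np.sqrt(ss / (n1 + n2 - 2) * (1.0 / n1 + 1.0 / n2))
    return diff / (s + s0)

rng = np.random.default_rng(5)
g1 = rng.normal(size=(1000, 5))      # 1000 genes, 5 samples per group
g2 = rng.normal(size=(1000, 5))
d_plain = sam_statistic(g1, g2, s0=0.0)
d_fudge = sam_statistic(g1, g2, s0=0.5)  # fudge factor shrinks every |d|
```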