Similar Literature
17 similar documents found.
1.
2.
Priors are introduced into goodness‐of‐fit tests, both for unknown parameters in the tested distribution and on the alternative density. Neyman–Pearson theory leads to the test with the highest expected power. To make the test practical, we seek priors that make it likely a priori that the power will be larger than the level of the test but not too close to one. As a result, priors are sample size dependent. We explore this procedure in particular for priors that are defined via a Gaussian process approximation for the logarithm of the alternative density. In the case of testing for the uniform distribution, we show that the optimal test is of the U‐statistic type and establish limiting distributions for the optimal test statistic, both under the null hypothesis and averaged over the alternative hypotheses. The optimal test statistic is shown to be of the Cramér–von Mises type for specific choices of the Gaussian process involved. The methodology when parameters in the tested distribution are unknown is discussed and illustrated in the case of testing for the von Mises distribution. The Canadian Journal of Statistics 47: 560–579; 2019 © 2019 Statistical Society of Canada
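
As a rough illustration of the kind of statistic involved (not the paper's optimal test), the Python sketch below computes the classical Cramér–von Mises statistic for testing uniformity through its pairwise-kernel (V-statistic) representation and checks it against the usual order-statistic formula; function names and the sample size are illustrative.

```python
import numpy as np

def cvm_uniform_kernel(u):
    """Cramer-von Mises statistic for H0: U ~ Uniform(0, 1), written as a
    V-statistic with kernel h(s, t) = 1/3 + (s^2 + t^2)/2 - max(s, t)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    s, t = np.meshgrid(u, u)                   # all pairs (U_i, U_j)
    h = 1.0 / 3.0 + (s ** 2 + t ** 2) / 2.0 - np.maximum(s, t)
    return h.sum() / n                         # W2_n = (1/n) * sum_{i,j} h(U_i, U_j)

# sanity check against the classical order-statistic formula
rng = np.random.default_rng(0)
u = rng.uniform(size=50)
u_sorted, i, n = np.sort(u), np.arange(1, 51), 50
w2_classic = 1.0 / (12 * n) + np.sum((u_sorted - (2 * i - 1) / (2 * n)) ** 2)
print(cvm_uniform_kernel(u), w2_classic)       # the two forms agree
```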

3.
The authors give tests of fit for the hyperbolic distribution, based on the Cramér‐von Mises statistic W². They consider the general case with four parameters unknown, and some specific cases where one or two parameters are fixed. They give two examples using stock price data.
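
The sketch below illustrates the probability-integral-transform computation behind a Cramér–von Mises test of fit. A fitted normal distribution stands in for the four-parameter hyperbolic fit, since fitting the hyperbolic law is beyond a short example; names are illustrative, and with estimated parameters the null distribution of W² differs from the fully specified case.

```python
import numpy as np
from scipy import stats

def cramer_von_mises_w2(x, cdf):
    """W^2 = 1/(12n) + sum_i (z_(i) - (2i - 1)/(2n))^2 with z = cdf(x)."""
    z = np.sort(cdf(np.asarray(x, dtype=float)))
    n = len(z)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((z - (2 * i - 1) / (2 * n)) ** 2)

# Illustration with a fitted normal standing in for the hyperbolic fit.
x = stats.norm.rvs(size=200, random_state=1)
mu, sigma = stats.norm.fit(x)
print(cramer_von_mises_w2(x, lambda t: stats.norm.cdf(t, loc=mu, scale=sigma)))
```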

4.
The authors propose new rank statistics for testing the white noise hypothesis in a time series. These statistics are Cramér‐von Mises and Kolmogorov‐Smirnov functionals of an empirical distribution function whose mean is related to a serial version of Kendall's tau through a linear transform. The authors determine the asymptotic behaviour of the underlying serial process and the large‐sample distribution of the proposed statistics under the null hypothesis of white noise. They also present simulation results showing the power of their tests.
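
As a rough sketch of the serial-Kendall's-tau ingredient (not the paper's exact empirical-process functionals), the code below computes lag-l serial Kendall's taus and aggregates them into Kolmogorov–Smirnov- and Cramér–von Mises-type summaries; the standardization uses the approximate null variance 4/(9n), and all names are illustrative.

```python
import numpy as np
from scipy import stats

def serial_kendall_taus(x, max_lag):
    """Lag-l serial Kendall's tau between (X_t, X_{t+l}) for l = 1..max_lag."""
    x = np.asarray(x, dtype=float)
    return np.array([stats.kendalltau(x[:-l], x[l:])[0]
                     for l in range(1, max_lag + 1)])

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
taus = serial_kendall_taus(x, max_lag=10)
z = taus * np.sqrt(9.0 * len(x) / 4.0)    # under white noise, tau_l is roughly N(0, 4/(9n))
print(np.max(np.abs(z)), np.sum(z ** 2))  # KS-type and CvM-type aggregates
```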

5.
Grouped data can often arise due to the lack of resolution of the measurement instruments; they also arise when data are deliberately rounded to a certain accuracy and are presented, say, in the form of a histogram. The author uses statistics of the Cramér‐von Mises type to test for the exponential distribution when data are grouped.
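
A minimal sketch of a Cramér–von Mises-type discrepancy for grouped data, assuming a known exponential rate and hypothetical histogram counts: it conveys the idea of comparing cumulative cell proportions with the exponential CDF at the cell boundaries, not the paper's exact statistic.

```python
import numpy as np

def grouped_cvm_exponential(counts, upper_edges, rate):
    """CvM-type discrepancy for grouped data: compare the cumulative observed
    proportions at the cell boundaries with the exponential CDF evaluated
    there, weighting by the null cell probabilities (a sketch of the idea)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    emp = np.cumsum(counts) / n                        # EDF at the upper cell edges
    f0 = 1.0 - np.exp(-rate * np.asarray(upper_edges, dtype=float))
    p0 = np.diff(np.concatenate([[0.0], f0]))          # null cell probabilities
    return n * np.sum(p0 * (emp - f0) ** 2)

upper_edges = np.array([0.5, 1.0, 2.0, 4.0, np.inf])   # histogram cell boundaries
counts = np.array([120, 85, 95, 55, 25])               # hypothetical grouped data
print(grouped_cvm_exponential(counts, upper_edges, rate=1.0 / 1.3))
```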

6.
In this paper the interest is in testing the null hypothesis of positive quadrant dependence (PQD) between two random variables. Such a testing problem is important since prior knowledge of PQD is a qualitative restriction that should be taken into account in further statistical analysis, for example, when choosing an appropriate copula function to model the dependence structure. The key methodology of the proposed testing procedures consists of evaluating a “distance” between a nonparametric estimator of a copula and the independence copula, which serves as a reference case in the whole set of copulas having the PQD property. Choices of appropriate distances and nonparametric estimators of copula are discussed, and the proposed methods are compared with testing procedures based on bootstrap and multiplier techniques. The consistency of the testing procedures is established. In a simulation study the authors investigate the finite sample size and power performances of three types of test statistics, Kolmogorov–Smirnov, Cramér–von Mises, and Anderson–Darling statistics, together with several nonparametric estimators of a copula, including recently developed kernel type estimators. Finally, they apply the testing procedures on some real data. The Canadian Journal of Statistics 38: 555–581; 2010 © 2010 Statistical Society of Canada
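
The sketch below illustrates the "distance to the independence copula" idea with a rank-based empirical copula and one-sided Kolmogorov–Smirnov- and Cramér–von Mises-type statistics; the grid, simulated sample, and names are illustrative, and in practice critical values would come from bootstrap or multiplier approximations.

```python
import numpy as np

def empirical_copula(u, v, grid):
    """Empirical copula C_n at the rows of `grid`, built from the normalized
    ranks (pseudo-observations) of the sample."""
    n = len(u)
    pu = (np.argsort(np.argsort(u)) + 1) / (n + 1.0)
    pv = (np.argsort(np.argsort(v)) + 1) / (n + 1.0)
    return np.array([np.mean((pu <= a) & (pv <= b)) for a, b in grid])

# PQD means C(s, t) >= s * t everywhere, so only the positive part of
# sqrt(n) * (Pi - C_n), with Pi(s, t) = s * t, speaks against the null.
rng = np.random.default_rng(3)
u = rng.uniform(size=300)
v = 0.5 * u + 0.5 * rng.uniform(size=300)      # positively dependent pair
ss, tt = np.meshgrid(np.linspace(0.05, 0.95, 19), np.linspace(0.05, 0.95, 19))
grid = np.column_stack([ss.ravel(), tt.ravel()])
excess = np.sqrt(len(u)) * np.maximum(grid[:, 0] * grid[:, 1]
                                      - empirical_copula(u, v, grid), 0.0)
print(np.max(excess), np.mean(excess ** 2))    # KS- and CvM-type statistics
```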

7.
The author extends to the Bayesian nonparametric context the multinomial goodness‐of‐fit tests due to Cressie & Read (1984). Her approach is suitable when the model of interest is a discrete distribution. She provides an explicit form for the tests, which are based on power‐divergence measures between a prior Dirichlet process that is highly concentrated around the model of interest and the corresponding posterior Dirichlet process. In addition to providing interesting special cases and useful approximations, she discusses calibration and the choice of test through examples.
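
For orientation, the classical Cressie–Read power divergence between two fixed discrete distributions is sketched below; the paper instead evaluates such divergences between a concentrated prior Dirichlet process and its posterior, which this snippet does not attempt. All values are illustrative.

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read power divergence between discrete distributions p and q:
    2 / (lam * (lam + 1)) * sum_k p_k * ((p_k / q_k)^lam - 1)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if np.isclose(lam, 0.0):                   # Kullback-Leibler limit
        return 2.0 * np.sum(p * np.log(p / q))
    if np.isclose(lam, -1.0):                  # reverse Kullback-Leibler limit
        return 2.0 * np.sum(q * np.log(q / p))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(p * ((p / q) ** lam - 1.0))

model = np.array([0.25, 0.25, 0.25, 0.25])     # hypothesized discrete model
observed = np.array([30, 22, 28, 20]) / 100.0  # observed multinomial proportions
print(power_divergence(observed, model, lam=2.0 / 3.0))  # Cressie & Read's lambda
```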

8.
We propose nonparametric procedures for comparing the empirical distribution function of data from a complex survey with a hypothesized parametric reference distribution. The hypothesized distribution may be fully specified, or it may be a family with the parameters to be estimated from the data. Of the procedures studied, a modification of the Cramér–von Mises test proposed by Lockhart, Spinelli & Stephens [Lockhart, Spinelli and Stephens, The Canadian Journal of Statistics 2007; 35, 125–133] is supported theoretically and performs well in two simulation studies. The methods are applied to examine the distribution of body mass index in the U.S. National Health and Nutrition Examination Survey. The Canadian Journal of Statistics 47: 409–425; 2019 © 2019 Statistical Society of Canada
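
A minimal sketch of a design-weighted Cramér–von Mises-type discrepancy, assuming Horvitz–Thompson-style weights and a lognormal reference family as stand-ins; it shows the weighted-EDF idea only, not the modified statistic of Lockhart, Spinelli & Stephens, and all data and names are illustrative.

```python
import numpy as np
from scipy import stats

def weighted_cvm(y, w, cdf):
    """CvM-type discrepancy between the design-weighted EDF and a hypothesized
    CDF, integrating against the weighted EDF at the observed points."""
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float) / np.sum(w)   # normalized design weights
    order = np.argsort(y)
    y, w = y[order], w[order]
    edf = np.cumsum(w)                           # weighted EDF at the data points
    return np.sum(w * (edf - cdf(y)) ** 2)

rng = np.random.default_rng(4)
y = rng.lognormal(mean=3.2, sigma=0.2, size=400)  # BMI-like positive data (simulated)
w = rng.uniform(0.5, 2.0, size=400)               # survey design weights (illustrative)
mu = np.average(np.log(y), weights=w)             # weighted lognormal fit
sigma = np.sqrt(np.average((np.log(y) - mu) ** 2, weights=w))
print(weighted_cvm(y, w, lambda t: stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))))
```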

9.
Starting from the characterization of extreme‐value copulas based on max‐stability, large‐sample tests of extreme‐value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p‐values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite‐sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011 © 2011 Statistical Society of Canada
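
The max-stability characterization can be probed empirically as sketched below: an extreme-value copula satisfies C(u_1^r, …, u_d^r) = C(u)^r, so the empirical copula is compared at u and at u^r. This conveys the key ingredient only; the paper's statistics and multiplier p-values are more refined, and all names and data here are illustrative.

```python
import numpy as np

def empirical_copula_nd(x, pts):
    """Multivariate empirical copula C_n at the rows of `pts`, built from
    component-wise normalized ranks."""
    n = x.shape[0]
    u = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1.0)
    return np.array([np.mean(np.all(u <= p, axis=1)) for p in pts])

def max_stability_discrepancy(x, r, pts):
    """CvM-type distance between C_n(u)^r and C_n(u^r); for an extreme-value
    copula the two agree in the limit."""
    return x.shape[0] * np.mean((empirical_copula_nd(x, pts) ** r
                                 - empirical_copula_nd(x, pts ** r)) ** 2)

rng = np.random.default_rng(5)
x = rng.standard_normal((300, 3))               # the independence copula is max-stable
pts = rng.uniform(0.05, 0.95, size=(200, 3))
print([max_stability_discrepancy(x, r, pts) for r in (2.0, 3.0)])
```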

10.
Ghoudi, Khoudraji & Rivest [The Canadian Journal of Statistics 1998;26:187–197] showed how to test whether the dependence structure of a pair of continuous random variables is characterized by an extreme‐value copula. The test is based on a U‐statistic whose finite‐ and large‐sample variance are determined by the present authors. They propose estimates of this variance which they compare to the jackknife estimate of Ghoudi, Khoudraji & Rivest ( 1998 ) through simulations. They study the finite‐sample and asymptotic power of the test under various alternatives. They illustrate their approach using financial and geological data. The Canadian Journal of Statistics © 2009 Statistical Society of Canada  相似文献   

11.
Statistical procedures for the detection of a change in the dependence structure of a series of multivariate observations are studied in this work. The test statistics that are proposed are $L_1$, $L_2$, and $L_{\infty}$ distances computed from vectors of differences of Kendall's tau; two multivariate extensions of Kendall's measure of association are used. Since the distributions of these statistics under the null hypothesis of no change depend on the unknown underlying copula of the vectors, a procedure based on the multiplier central limit theorem is used for the computation of p‐values; the method is shown to be valid both asymptotically and for moderate sample sizes. Alternative versions of the tests that take into account possible breakpoints in the marginal distributions are also investigated. Monte Carlo simulations show that the tests are powerful under many change‐point scenarios. In addition, two estimators of the time of change are proposed and their efficiency is carefully studied. The methodologies are illustrated on simulated series from the Canadian Regional Climate Model. The Canadian Journal of Statistics 41: 65–82; 2013 © 2012 Statistical Society of Canada
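
A crude illustration of the change-point idea for a single bivariate series: Kendall's tau is computed before and after each candidate break point and the largest weighted difference is retained (an $L_{\infty}$-type scan). The weighting and output are illustrative, and p-values would in practice come from the multiplier method.

```python
import numpy as np
from scipy import stats

def kendall_changepoint_scan(x, y, min_seg=20):
    """Scan candidate break points k and compare Kendall's tau before and
    after k; the L1 and L2 versions aggregate the weighted differences
    instead of taking the maximum."""
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(min_seg, n - min_seg):
        tau_before = stats.kendalltau(x[:k], y[:k])[0]
        tau_after = stats.kendalltau(x[k:], y[k:])[0]
        stat = (k * (n - k) / n ** 1.5) * abs(tau_before - tau_after)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(6)
x = rng.standard_normal(200)
y = np.r_[rng.standard_normal(100),                        # independent first half
          0.8 * x[100:] + 0.2 * rng.standard_normal(100)]  # dependent second half
print(kendall_changepoint_scan(x, y))   # estimated change point near k = 100
```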

12.
The study of differences among groups is an interesting statistical topic in many applied fields. It is very common in this context to have data that are subject to mechanisms of loss of information, such as censoring and truncation. In the setting of a two‐sample problem with data subject to left truncation and right censoring, we develop an empirical likelihood method to carry out inference on the relative distribution. We obtain a nonparametric generalization of Wilks' theorem and construct nonparametric pointwise confidence intervals for the relative distribution. Finally, we analyse the coverage probability and length of these confidence intervals through a simulation study and illustrate their use with a real data set on gastric cancer. The Canadian Journal of Statistics 38: 453–473; 2010 © 2010 Statistical Society of Canada
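
For the complete-data case (no truncation or censoring), the relative distribution itself can be estimated as sketched below; this shows the target quantity only, not the empirical-likelihood confidence intervals developed in the paper, and the data and names are illustrative.

```python
import numpy as np

def relative_distribution(x_ref, x_cmp, t_grid):
    """Relative distribution R(t) = F_cmp(F_ref^{-1}(t)): the comparison-group
    distribution re-expressed on the quantile scale of the reference group."""
    q = np.quantile(np.asarray(x_ref, dtype=float), t_grid)  # reference quantiles
    x_cmp = np.asarray(x_cmp, dtype=float)
    return np.array([np.mean(x_cmp <= qt) for qt in q])

rng = np.random.default_rng(9)
x_ref = rng.exponential(scale=1.0, size=300)
x_cmp = rng.exponential(scale=1.4, size=300)    # stochastically larger comparison group
t_grid = np.linspace(0.1, 0.9, 9)
print(relative_distribution(x_ref, x_cmp, t_grid))  # values below t indicate an upward shift
```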

13.
Test statistics for checking the independence between the innovations of several time series are developed. The time series models considered allow for general specifications for the conditional mean and variance functions that could depend on common explanatory variables. In testing for independence between more than two time series, checking pairwise independence does not lead to consistent procedures. Thus a finite family of empirical processes relying on multivariate lagged residuals are constructed, and we derive their asymptotic distributions. In order to obtain simple asymptotic covariance structures, Möbius transformations of the empirical processes are studied, and simplifications occur. Under the null hypothesis of independence, we show that these transformed processes are asymptotically Gaussian, independent, and with tractable covariance functions not depending on the estimated parameters. Various procedures are discussed, including Cramér–von Mises test statistics and tests based on non‐parametric measures. The ranks of the residuals are considered in the new methods, giving test statistics which are asymptotically margin‐free. Generalized cross‐correlations are introduced, extending the concept of cross‐correlation to an arbitrary number of time series; portmanteau procedures based on them are discussed. In order to detect the dependence visually, graphical devices are proposed. Simulations are conducted to explore the finite sample properties of the methodology, which is found to be powerful against various types of alternatives when the independence is tested between two and three time series. An application is considered, using the daily log‐returns of Apple, Intel and Hewlett‐Packard traded on the Nasdaq financial market. The Canadian Journal of Statistics 40: 447–479; 2012 © 2012 Statistical Society of Canada
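
A rough sketch of one margin-free ingredient: rank-based cross-correlations between the innovations of two series at several lags, aggregated into a portmanteau-type sum. The residuals, lags, and aggregation are illustrative, and the sketch ignores the effect of estimating the conditional mean and variance functions.

```python
import numpy as np
from scipy import stats

def rank_cross_correlations(e1, e2, max_lag):
    """Spearman-type cross-correlations between the ranks of the residuals of
    two series at lags -max_lag..max_lag; using ranks makes the statistics
    asymptotically margin-free."""
    r1, r2 = stats.rankdata(e1), stats.rankdata(e2)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = r1[: len(r1) - lag], r2[lag:]
        else:
            a, b = r1[-lag:], r2[: len(r2) + lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(8)
e1, e2 = rng.standard_normal(400), rng.standard_normal(400)  # independent innovations
rho = rank_cross_correlations(e1, e2, max_lag=5)
print(len(e1) * sum(r ** 2 for r in rho.values()))   # portmanteau-type aggregate
```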

14.
The author considers estimation under a Gamma process model for degradation data. The setting for degradation data is one in which n independent units, each with a Gamma process with a common shape function and scale parameter, are observed at several possibly different times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes using the maximum pseudo‐likelihood method to estimate the unknown parameters. The method requires use of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method and its application is illustrated using bridge‐beam data and carbon‐film resistor data. The Canadian Journal of Statistics 37: 102–118; 2009 © 2009 Statistical Society of Canada
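
The sketch below simulates Gamma-process degradation paths observed at common inspection times and evaluates the corresponding log-likelihood of the increments; the shape function, scale, and inspection times are illustrative, and the constrained (PAVA-based) maximization of the pseudo-likelihood is not implemented here.

```python
import numpy as np
from scipy import stats

def simulate_gamma_degradation(times, shape_fn, scale):
    """One degradation path of a Gamma process: independent increments
    X(t_j) - X(t_{j-1}) ~ Gamma(shape_fn(t_j) - shape_fn(t_{j-1}), scale)."""
    v = np.diff(np.concatenate([[0.0], shape_fn(np.asarray(times, dtype=float))]))
    return np.cumsum(stats.gamma.rvs(a=v, scale=scale))

def log_likelihood(paths, times, shape_fn, scale):
    """Log-likelihood of the observed increments for a candidate shape function
    and scale; the paper maximizes a pseudo-likelihood of this kind with the
    shape function constrained to be nondecreasing, which is where the Pool
    Adjacent Violators Algorithm enters."""
    v = np.diff(np.concatenate([[0.0], shape_fn(np.asarray(times, dtype=float))]))
    return sum(np.sum(stats.gamma.logpdf(np.diff(np.concatenate([[0.0], p])),
                                         a=v, scale=scale))
               for p in paths)

times = np.array([1.0, 2.0, 3.0, 5.0, 8.0])   # common inspection times (illustrative)
shape_fn = lambda t: 0.5 * t                  # linear shape function (illustrative)
paths = [simulate_gamma_degradation(times, shape_fn, scale=2.0) for _ in range(10)]
print(log_likelihood(paths, times, shape_fn, scale=2.0))
```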

15.
In this article the author investigates the application of empirical‐likelihood‐based inference to the parameters of the varying‐coefficient single‐index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio does not attain the standard chi‐squared distribution. To this end, a bias‐corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on the normal approximation, these have two advantages: (1) they do not impose prior constraints on the shape of the regions; and (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracy and average areas/lengths of the confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada
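
As a reminder of the basic building block, the sketch below computes Owen's empirical log-likelihood ratio for a univariate mean; the bias-corrected construction for the VCSIM estimating equations is considerably more involved and is not attempted here. Names and data are illustrative.

```python
import numpy as np
from scipy import optimize

def el_logratio_mean(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu (Owen's EL for a
    mean).  The Lagrange multiplier solves sum_i z_i / (1 + lam * z_i) = 0,
    with lam restricted so that all weights 1 + lam * z_i stay positive."""
    z = np.asarray(x, dtype=float) - mu
    lo = -1.0 / z.max() + 1e-10
    hi = -1.0 / z.min() - 1e-10
    lam = optimize.brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log(1.0 + lam * z))

rng = np.random.default_rng(10)
x = rng.exponential(scale=2.0, size=100)
print(el_logratio_mean(x, mu=2.0))   # approximately chi-squared(1) under H0
```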

16.
17.
In many applications, a finite population contains a large proportion of zero values that make the population distribution severely skewed. An unequal‐probability sampling plan compounds the problem, and as a result the normal approximation to the distribution of various estimators has poor precision. The central‐limit‐theorem‐based confidence intervals for the population mean are hence unsatisfactory. Complex designs also make it hard to pin down useful likelihood functions, hence a direct likelihood approach is not an option. In this paper, we propose a pseudo‐likelihood approach. The proposed pseudo‐log‐likelihood function is an unbiased estimator of the log‐likelihood function when the entire population is sampled. Simulations have been carried out. When the inclusion probabilities are related to the unit values, the pseudo‐likelihood intervals are superior to existing methods in terms of the coverage probability, the balance of non‐coverage rates on the lower and upper sides, and the interval length. An application with a data set from the Canadian Labour Force Survey‐2000 also shows that the pseudo‐likelihood method performs more appropriately than other methods. The Canadian Journal of Statistics 38: 582–597; 2010 © 2010 Statistical Society of Canada
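
A minimal sketch of the pseudo-likelihood idea, assuming a simple zero-inflated working model (point mass at zero plus a lognormal for the positive values) and known inclusion probabilities: each sampled unit's log-density is weighted by the inverse inclusion probability, so the sum estimates the census log-likelihood. The model, parameters, and data are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats, optimize

def pseudo_loglik(theta, y, pi):
    """Design-weighted (pseudo) log-likelihood under an illustrative working
    model: a point mass at zero with probability p and a lognormal(mu, sigma)
    for the positive values; weights are 1 / pi_i."""
    p, mu, sigma = theta
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(pi, dtype=float)
    zero = (y == 0.0)
    ll = np.empty_like(y)
    ll[zero] = np.log(p)
    ll[~zero] = np.log1p(-p) + stats.lognorm.logpdf(y[~zero], s=sigma, scale=np.exp(mu))
    return np.sum(w * ll)

rng = np.random.default_rng(7)
n = 500
y = np.where(rng.uniform(size=n) < 0.6, 0.0, rng.lognormal(1.0, 0.5, size=n))
pi = np.clip(0.02 + 0.01 * y, 0.02, 0.5)      # inclusion probabilities related to the values
fit = optimize.minimize(lambda th: -pseudo_loglik(th, y, pi), x0=[0.5, 0.0, 1.0],
                        bounds=[(1e-3, 1 - 1e-3), (-5.0, 5.0), (1e-3, 5.0)])
print(fit.x)                                  # maximum pseudo-likelihood estimates
```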
