Similar Documents
20 similar documents found (search time: 31 ms)
1.
The Lagrange Multiplier (LM) test is one of the principal tools to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ2-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two re-sampling techniques to find critical values of the LM test, namely permutation and bootstrap. We derive the properties of exactness and asymptotic correctness for the permutation and bootstrap LM tests, respectively. Our numerical studies indicate that the proposed re-sampling algorithms significantly improve the size and power of the LM test for both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
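The permutation idea in item 1 can be sketched as follows, assuming the usual ARCH LM statistic n·R² from regressing squared observations on their own lags; the function names and the lag order q are ours and purely illustrative, not the authors' implementation:

```python
import numpy as np

def lm_arch_stat(x, q=1):
    """LM statistic for ARCH(q): n * R^2 from regressing x_t^2 on its q lags."""
    e2 = np.asarray(x) ** 2
    n = len(e2) - q
    # design matrix: intercept plus the q lagged squared observations
    X = np.column_stack([np.ones(n)] + [e2[q - 1 - j : q - 1 - j + n] for j in range(q)])
    y = e2[q:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return n * (1.0 - resid.var() / y.var())

def permutation_lm_pvalue(x, q=1, B=999, seed=None):
    """Permutation p-value: under iid data (no ARCH) the time order is exchangeable."""
    rng = np.random.default_rng(seed)
    t0 = lm_arch_stat(x, q)
    perm = np.array([lm_arch_stat(rng.permutation(x), q) for _ in range(B)])
    return (1 + np.sum(perm >= t0)) / (B + 1)
```

Shuffling the series destroys any conditional heteroscedasticity while preserving the marginal (possibly skewed or heavy-tailed) distribution, which is why the permutation reference distribution does not rely on the χ2 approximation.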

2.
A new test is proposed for the hypothesis of uniformity on bi-dimensional supports. The procedure is an adaptation of the "distance to boundary test" (DB test) proposed in Berrendero, Cuevas, & Vázquez-Grande (2006). This new version of the DB test, called the DBU test, allows us (as a novel, interesting feature) to deal with the case where the support S of the underlying distribution is unknown. This means that S is not specified in the null hypothesis, so that, in fact, we test the null hypothesis that the underlying distribution is uniform on some support S belonging to a given class ${\cal C}$. We pay special attention to the case where ${\cal C}$ is either the class of compact convex supports or the (broader) class of compact λ-convex supports (also called r-convex or α-convex in the literature). The basic idea is to apply the DB test in a sort of plug-in version, where the support S is approximated using methods of set estimation. The DBU method is analysed from both the theoretical and practical points of view, via asymptotic results and a simulation study, respectively. The Canadian Journal of Statistics 40: 378–395; 2012 © 2012 Statistical Society of Canada

3.
In the existing statistical literature, the near-default choice for inference on inhomogeneous point processes is the best-known model class for such processes: reweighted second-order stationary processes. In particular, the K-function associated with this type of inhomogeneity is presented as the inhomogeneous K-function. In the present paper, we place a number of inhomogeneous model classes (including the class of reweighted second-order stationary processes) into the common general framework of hidden second-order stationary processes, allowing statistical inference procedures for second-order stationary processes based on summary statistics to be transferred to each of these model classes for inhomogeneous point processes. In particular, a general method is developed to test the hypothesis that a given point pattern can be ascribed to a specific inhomogeneous model class. Using the new theoretical framework, we reanalyse three inhomogeneous point patterns analysed earlier in the statistical literature and show that the conclusions concerning an appropriate model class must be revised for some of the point patterns.

4.
To analyze interactions in marked spatiotemporal point processes (MSTPPs), we introduce marked second-order reduced moment measures and K-functions for inhomogeneous second-order intensity-reweighted stationary MSTPPs. These summary statistics, which allow us to quantify dependence between different mark-based classifications of points, depend on the specific mark space and mark reference measure chosen. Unbiased and consistent minus-sampling estimators are derived for all statistics considered, and a test for random labeling is indicated. In addition, we treat Voronoi intensity estimators for MSTPPs. These new statistics are finally employed to analyze an Andaman Sea earthquake data set.

5.
The authors present a consistent lack-of-fit test for nonlinear regression models. The proposed procedure possesses desirable properties of Zheng's test, such as consistency and the ability to detect local alternatives approaching the null at rates slower than the parametric rate. Moreover, for a predetermined kernel function, the proposed test is more powerful than Zheng's test, and the validity of these findings is confirmed by simulation studies and a real data example. In addition, the authors identify a close connection between the choice of normal kernel functions and the bandwidth. The Canadian Journal of Statistics 39: 108–125; 2011 © 2011 Statistical Society of Canada

6.
Starting from the characterization of extreme-value copulas based on max-stability, large-sample tests of extreme-value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p-values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite-sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011. © 2011 Statistical Society of Canada

7.
Classes of higher-order kernels for estimation of a probability density are constructed by iterating the twicing procedure. Given a kernel K of order l, we build a family of kernels Km of orders l(m + 1) with the attractive property that their Fourier transforms are simply 1 − {1 − φ(·)}^{m+1}, where φ is the Fourier transform of K. These families of higher-order kernels are well suited when the fast Fourier transform is used to speed up the calculation of the kernel estimate or the least-squares cross-validation procedure for selection of the window width. We also compare the theoretical performance of the optimal polynomial-based kernels with that of the iterative twicing kernels constructed from some popular second-order kernels.
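The Fourier-space recursion in item 7 can be illustrated numerically with an assumed Gaussian base kernel (variable names are ours): the Gaussian kernel has order l = 2 and transform φ(t) = exp(−t²/2), so 1 − φ(t) ≈ t²/2 near 0, and one twicing step (m = 1) gives 1 − ψ(t) = {1 − φ(t)}² ≈ t⁴/4, i.e. an order-4 kernel:

```python
import numpy as np

def twiced_transform(phi, m):
    # Fourier transform of the m-times-twiced kernel: 1 - (1 - phi)^(m + 1)
    return 1.0 - (1.0 - phi) ** (m + 1)

t = 1e-2                       # a frequency close to 0
phi = np.exp(-t ** 2 / 2)      # transform of the Gaussian (order-2) kernel
psi = twiced_transform(phi, 1) # one twicing step: order 2 * (1 + 1) = 4
ratio = (1.0 - psi) / t ** 4   # should approach 1/4 as t -> 0
```

Because the recursion acts multiplicatively on 1 − φ, it composes naturally with FFT-based density estimation, as the abstract notes.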

8.
Ghoudi, Khoudraji & Rivest [The Canadian Journal of Statistics 1998; 26: 187–197] showed how to test whether the dependence structure of a pair of continuous random variables is characterized by an extreme-value copula. The test is based on a U-statistic whose finite- and large-sample variances are determined by the present authors. They propose estimates of this variance, which they compare to the jackknife estimate of Ghoudi, Khoudraji & Rivest (1998) through simulations. They study the finite-sample and asymptotic power of the test under various alternatives. They illustrate their approach using financial and geological data. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

9.
A statistical model is said to be an order-restricted statistical model when its parameter takes its values in a closed convex cone C of the Euclidean space. In recent years, order-restricted likelihood ratio tests and maximum likelihood estimators have been criticized on the grounds that they may violate a cone order monotonicity (COM) property, and hence reverse the cone order induced by C. The authors argue here that these reversals occur only in the case that C is an obtuse cone, and that in this case COM is an inappropriate requirement for likelihood-based estimates and tests. They conclude that these procedures thus remain perfectly reasonable procedures for order-restricted inference.

10.
In this study, we provide the Farlie–Gumbel–Morgenstern bivariate copula of the rth and sth order statistics. The main emphasis is on the inference procedure, which is based on the maximum pseudo-likelihood estimate of the copula parameter. As for methodology, a goodness-of-fit test statistic for copulas based on a Cramér–von Mises functional of the empirical copula process is applied, with bootstrapping, for selecting an appropriate model. An application of the methodology to a simulated data set is also presented.

11.
Liu and Singh (1993, 2006) introduced a depth-based d-variate extension of the nonparametric two-sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth-based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k-sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of the skewed distributions considered in this paper, the power of the proposed tests can attain twice the power of the Liu–Singh test for d ≥ 1. Finally, in the k-sample case, it is shown that the asymptotic distribution of the proposed percentile-modified Kruskal–Wallis type test is χ2 with k − 1 degrees of freedom. Power properties of this k-sample test are similar to those of the proposed two-sample test. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

12.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), that is, P/(1 − P) = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/(1 − P) = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
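The plug-in relation in item 12 is simple enough to show with hypothetical numbers (not taken from the Canadian Study of Health and Ageing): given prevalence P and mean duration µ, the incidence rate is λ = (P/(1 − P))/µ.

```python
# Hypothetical inputs, for illustration only
P = 0.08   # prevalence
mu = 5.0   # mean disease duration, in years

prevalence_odds = P / (1 - P)   # P/(1 - P) ~ 0.0870
lam = prevalence_odds / mu      # incidence rate, cases per person-year
print(round(lam, 4))            # prints 0.0174
```

The MLE in the paper amounts to exactly this substitution, with P and µ replaced by their marginal maximum likelihood estimates.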

13.
Most of the long memory estimators for stationary fractionally integrated time series models are known to experience non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a lag-one sample autocorrelation-based moment estimator. In order to reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of lag-one sample autocorrelation up to the order n−1. An example where the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples, is presented. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to the northern hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada

14.
The accuracy of a diagnostic test is typically characterized using the receiver operating characteristic (ROC) curve. Summarizing indexes such as the area under the ROC curve (AUC) are used to compare different tests as well as to measure the difference between two populations. Often additional information is available on some of the covariates which are known to influence the accuracy of such measures. The authors propose nonparametric methods for covariate adjustment of the AUC. Models with normal errors and possibly non-normal errors are discussed and analyzed separately. Nonparametric regression is used for estimating mean and variance functions in both scenarios. In the model that relaxes the assumption of normality, the authors propose a covariate-adjusted Mann–Whitney estimator for AUC estimation which effectively uses available data to construct working samples at any covariate value of interest and is computationally efficient for implementation. This provides a generalization of the Mann–Whitney approach for comparing two populations by taking covariate effects into account. The authors derive asymptotic properties for the AUC estimators in both settings, including asymptotic normality, optimal strong uniform convergence rates and mean squared error (MSE) consistency. The MSE of the AUC estimators was also assessed in smaller samples by simulation. Data from an agricultural study were used to illustrate the methods of analysis. The Canadian Journal of Statistics 38: 27–46; 2010 © 2009 Statistical Society of Canada
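The unadjusted building block of item 14 is the classical Mann–Whitney AUC estimate: the fraction of (healthy, diseased) pairs ordered correctly, with ties counted as one-half. A minimal sketch follows (function and argument names are ours, not the authors'); the covariate-adjusted estimator in the paper localizes this quantity at each covariate value of interest:

```python
import numpy as np

def mann_whitney_auc(healthy, diseased):
    """Empirical AUC = P(Y > X) + 0.5 * P(Y == X) over all score pairs."""
    x = np.asarray(healthy, dtype=float)[:, None]   # shape (n, 1)
    y = np.asarray(diseased, dtype=float)[None, :]  # shape (1, m)
    # broadcasting compares every diseased score with every healthy score
    return (y > x).mean() + 0.5 * (y == x).mean()
```

For example, `mann_whitney_auc([0, 2], [1, 3])` gives 0.75, since three of the four pairs are ordered correctly.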

15.
A goodness-of-fit procedure is proposed for parametric families of copulas. The new test statistics are functionals of an empirical process based on the theoretical and sample versions of Spearman's dependence function. Conditions under which this empirical process converges weakly are seen to hold for many families including the Gaussian, Frank, and generalized Farlie–Gumbel–Morgenstern systems of distributions, as well as the models with singular components described by Durante [Durante (2007) Comptes Rendus Mathématique. Académie des Sciences. Paris, 344, 195–198]. Thanks to a parametric bootstrap method that allows one to compute valid P-values, it is shown empirically that tests based on Cramér–von Mises distances keep their size under the null hypothesis. Simulations attesting the power of the newly proposed tests, comparisons with competing procedures and complete analyses of real hydrological and financial data sets are presented. The Canadian Journal of Statistics 37: 80–101; 2009 © 2009 Statistical Society of Canada

16.
In this paper, a new nonparametric methodology is developed for testing whether the changing pattern of a response variable over multiple ordered sub-populations in one treatment group differs from that in another treatment group. The question is formalized as a nonparametric two-sample comparison problem for the stochastic order among subsamples, through U-statistics with accommodations for zero-inflated distributions. A novel bootstrap procedure is proposed to obtain critical values at a given type I error level. Following the procedure, bootstrapped p-values are obtained through simulated samples. It is proven that the distribution of the test statistics is independent of the underlying distributions of the subsamples, given certain sufficient statistics. Furthermore, this study also develops a feasible framework for power studies to determine sample sizes, which is necessary in real-world applications. Simulation results suggest that the test is consistent. The methodology is illustrated using a biological experiment with a split-plot design, and significant differences in changing patterns of seed weight between treatments are found with relatively small subsample sizes.

17.
The authors consider Bayesian methods for fitting three semiparametric survival models, incorporating time-dependent covariates that are step functions. In particular, these are models due to Cox [Cox (1972) Journal of the Royal Statistical Society, Series B, 34, 187–208], Prentice & Kalbfleisch and Cox & Oakes [Cox & Oakes (1984) Analysis of Survival Data, Chapman and Hall, London]. The model due to Prentice & Kalbfleisch [Prentice & Kalbfleisch (1979) Biometrics, 35, 25–39], which has seen very limited use, is given particular consideration. The prior for the baseline distribution in each model is taken to be a mixture of Polya trees, and posterior inference is obtained through standard Markov chain Monte Carlo methods. They demonstrate the implementation and comparison of these three models on the celebrated Stanford heart transplant data and the study of the timing of cerebral edema diagnosis during emergency room treatment of diabetic ketoacidosis in children. An important feature of their overall discussion is the comparison of semi-parametric families and, ultimately, criterion-based selection of a family within the context of a given data set. The Canadian Journal of Statistics 37: 60–79; © 2009 Statistical Society of Canada

18.
19.
A harmonic new better than used in expectation (HNBUE) variable is a random variable which is dominated by an exponential distribution in the convex stochastic order. We use a recently obtained condition on stochastic equality under convex domination to derive characterizations of the exponential distribution and bounds for HNBUE variables based on the mean values of the order statistics of the variable. We apply the results to generate discrepancy measures to test whether a random variable is exponential against the alternative that it is HNBUE but not exponential.

20.
Testing homogeneity is a fundamental problem in finite mixture models. It has been investigated by many researchers and most of the existing works have focused on the univariate case. In this article, the authors extend the use of the EM-test for testing homogeneity to multivariate mixture models. They show that the EM-test statistic asymptotically has the same distribution as a certain transformation of a single multivariate normal vector. On the basis of this result, they suggest a resampling procedure to approximate the P-value of the EM-test. Simulation studies show that the EM-test has accurate type I errors and adequate power, and is more powerful and computationally efficient than the bootstrap likelihood ratio test. Two real data sets are analysed to illustrate the application of our theoretical results. The Canadian Journal of Statistics 39: 218–238; 2011 © 2011 Statistical Society of Canada


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号