Similar Articles
 20 similar articles retrieved (search time: 968 ms)
1.
The process comparing the empirical cumulative distribution function of the sample with a parametric estimate of the cumulative distribution function is known as the empirical process with estimated parameters and has been extensively employed in the literature for goodness-of-fit testing. The simplest way to carry out such goodness-of-fit tests, especially in a multivariate setting, is to use a parametric bootstrap. Although very easy to implement, the parametric bootstrap can become very computationally expensive as the sample size, the number of parameters, or the dimension of the data increases. An alternative resampling technique based on a fast weighted bootstrap is proposed in this paper, and is studied both theoretically and empirically. The outcome of this work is a generic and computationally efficient multiplier goodness-of-fit procedure that can be used as a large-sample alternative to the parametric bootstrap. In order to approximately determine how large the sample size needs to be for the parametric and weighted bootstraps to have roughly equivalent power, extensive Monte Carlo experiments are carried out in dimensions one, two, and three, and for models containing up to nine parameters. The computational gains resulting from the use of the proposed multiplier goodness-of-fit procedure are illustrated on trivariate financial data. A by-product of this work is a fast large-sample goodness-of-fit procedure for the bivariate and trivariate t distribution whose degrees of freedom are fixed. The Canadian Journal of Statistics 40: 480–500; 2012 © 2012 Statistical Society of Canada
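The parametric-bootstrap baseline that this abstract contrasts with can be sketched in a few lines. The sketch below is a generic univariate illustration (a normal null hypothesis tested with a Kolmogorov–Smirnov statistic), not the multivariate multiplier procedure the paper develops; the function names and the choice of null model are illustrative assumptions. The key point is that parameters are re-estimated on every bootstrap replicate, which is exactly what makes the method expensive.

```python
import numpy as np
from scipy import stats

def ks_stat(x, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and a fitted CDF."""
    x = np.sort(x)
    n = len(x)
    F = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - F), np.max(F - np.arange(n) / n))

def parametric_bootstrap_pvalue(x, B=200, rng=None):
    """Approximate p-value for H0: data are normal, parameters estimated by ML."""
    rng = np.random.default_rng(rng)
    n = len(x)
    mu, sigma = np.mean(x), np.std(x)                # MLEs under the normal null
    t_obs = ks_stat(x, lambda z: stats.norm.cdf(z, mu, sigma))
    count = 0
    for _ in range(B):
        xb = rng.normal(mu, sigma, n)                # simulate under the fitted model
        mb, sb = np.mean(xb), np.std(xb)             # re-estimate on each replicate
        if ks_stat(xb, lambda z: stats.norm.cdf(z, mb, sb)) >= t_obs:
            count += 1
    return (count + 1) / (B + 1)
```

The B re-estimations in the loop are the cost the multiplier approach avoids: the multiplier method perturbs the fitted empirical process with random weights instead of refitting the model on each replicate.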

2.
We propose a new type of multivariate statistical model that permits non-Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair-copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum-likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non-Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada

3.
For binomial data analysis, many methods based on empirical Bayes interpretations have been developed, in which a variance-stabilizing transformation and a normality assumption are usually required. To achieve the greatest model flexibility, we conduct nonparametric Bayesian inference for binomial data and employ a special nonparametric Bayesian prior—the Bernstein–Dirichlet process (BDP)—in the hierarchical Bayes model for the data. The BDP is a special Dirichlet process (DP) mixture based on beta distributions, and the posterior distribution resulting from it has a smooth density defined on [0, 1]. We examine two Markov chain Monte Carlo procedures for simulating from the resulting posterior distribution, and compare their convergence rates and computational efficiency. In contrast to existing results for posterior consistency based on direct observations, the posterior consistency of the BDP, given indirect binomial data, is established. We study shrinkage effects and the robustness of the BDP-based posterior estimators in comparison with several other empirical and hierarchical Bayes estimators, and we illustrate through examples that the BDP-based nonparametric Bayesian estimate is more robust to sampling variation and tends to have a smaller estimation error than those based on the DP prior. In certain settings, the new estimator can also beat Stein's estimator, Efron and Morris's limited-translation estimator, and many other existing empirical Bayes estimators. The Canadian Journal of Statistics 40: 328–344; 2012 © 2012 Statistical Society of Canada

4.
The Lagrange Multiplier (LM) test is one of the principal tools for detecting ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ2-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two resampling techniques to find critical values of the LM test, namely permutation and bootstrap. We establish the exactness of the permutation LM test and the asymptotic correctness of the bootstrap LM test. Our numerical studies indicate that the proposed resampling algorithms significantly improve the size and power of the LM test in both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
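The two ingredients described above, Engle's LM statistic and a permutation scheme for its critical values, can be sketched as follows. This is a simplified illustration (an ARCH(q) alternative against an iid null, with the statistic computed as n times the R² from regressing squared residuals on their lags), not the authors' implementation; under an iid null the observations are exchangeable, which is what justifies permuting them.

```python
import numpy as np

def arch_lm_stat(e, q=1):
    """Engle's LM statistic: regress e_t^2 on its first q lags; LM = n * R^2."""
    e2 = (e - e.mean()) ** 2
    n = len(e2) - q
    y = e2[q:]
    # Design matrix: intercept plus lagged squared residuals e2[t-1], ..., e2[t-q].
    X = np.column_stack([np.ones(n)] + [e2[q - 1 - j : q - 1 - j + n] for j in range(q)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return n * r2

def perm_pvalue(e, q=1, B=200, rng=None):
    """Permutation p-value: permuting the series destroys any ARCH dependence."""
    rng = np.random.default_rng(rng)
    t_obs = arch_lm_stat(e, q)
    count = sum(arch_lm_stat(rng.permutation(e), q) >= t_obs for _ in range(B))
    return (count + 1) / (B + 1)
```

Because permutation is exact under exchangeability, this version needs no normality assumption, which is the motivation the abstract gives for moving away from the χ2 approximation.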

5.
We use the two-state Markov regime-switching model to explain the behaviour of the WTI crude-oil spot prices from January 1986 to February 2012. We investigate methods based on the composite likelihood and the full likelihood, and find that the composite-likelihood approach better captures the general structural changes in world oil prices. The two-state Markov regime-switching model based on the composite-likelihood approach closely depicts the cycles of the two postulated states: fall and rise. These two states persist, on average, for 8 and 15 months respectively, matching the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full-likelihood approach was less satisfactory. We attribute its failure to the fact that the two-state Markov regime-switching model is too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes. Thus, model violations in other areas do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada

6.
The authors propose a robust transformation linear mixed-effects model for longitudinal continuous proportional data when some of the subjects exhibit outlying trajectories over time. This becomes troublesome when including or excluding such subjects in the data analysis leads to different statistical conclusions. To robustify the longitudinal analysis using the mixed-effects model, they utilize the multivariate t distribution for the random effects and/or error terms. Estimation and inference in the proposed model are established and illustrated by a real data example from an ophthalmology study. Simulation studies show a substantial robustness gain by the proposed model in comparison to the mixed-effects model based on Aitchison's logit-normal approach. As a result, the data analysis benefits from the robustness of reaching consistent conclusions in the presence of influential outliers. The Canadian Journal of Statistics © 2009 Statistical Society of Canada

7.
As cancer treatment advances, a number of cancers become curable if diagnosed early. In population-based cancer survival studies, cure is said to occur when the mortality rate of the cancer patients returns to the same level as that expected for the general cancer-free population. Estimates of the cure fraction are of interest to both cancer patients and health policy makers. Mixture cure models have been widely used because they are easy to interpret, separating the patients into two distinct groups. Parametric models are usually assumed for the latent survival distribution of the uncured patients, and the estimate of the cure fraction from the mixture cure model may be sensitive to misspecification of this latent distribution. We propose a Bayesian approach to the mixture cure model for population-based cancer survival data, which can be extended to county-level cancer survival data. Instead of modeling the latent distribution by a fixed parametric distribution, we use a finite mixture of the union of the lognormal, loglogistic, and Weibull distributions. The parameters are estimated using the Markov chain Monte Carlo method. A simulation study shows that the Bayesian method using a finite mixture latent distribution provides robust inference on the parameter estimates. The proposed Bayesian method is applied to relative survival data for colon cancer patients from the Surveillance, Epidemiology, and End Results (SEER) Program to estimate the cure fractions. The Canadian Journal of Statistics 40: 40–54; 2012 © 2012 Statistical Society of Canada
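The structure of the mixture cure model described above can be written as a population survival function S(t) = pi + (1 - pi) * S_u(t), where pi is the cure fraction and S_u is the latent survival function of the uncured, here taken as a finite mixture of lognormal, log-logistic, and Weibull components as in the abstract. The sketch below only evaluates the model; all parameter values and weights are illustrative assumptions, and the Bayesian MCMC estimation step is omitted.

```python
import numpy as np
from scipy import stats

def uncured_survival(t, w, params):
    """Finite-mixture latent survival for the uncured patients.

    w: mixture weights for (lognormal, log-logistic, Weibull), summing to 1.
    params: (ln_mu, ln_sigma, ll_shape, ll_scale, wb_shape, wb_scale).
    """
    ln_mu, ln_sigma, ll_shape, ll_scale, wb_shape, wb_scale = params
    s_lognormal = stats.lognorm.sf(t, s=ln_sigma, scale=np.exp(ln_mu))
    s_loglogistic = 1.0 / (1.0 + (t / ll_scale) ** ll_shape)
    s_weibull = np.exp(-((t / wb_scale) ** wb_shape))
    return w[0] * s_lognormal + w[1] * s_loglogistic + w[2] * s_weibull

def mixture_cure_survival(t, cure_frac, w, params):
    """Population survival: S(t) = pi + (1 - pi) * S_u(t)."""
    return cure_frac + (1 - cure_frac) * uncured_survival(t, w, params)
```

The defining feature of the model is visible directly in the formula: as t grows, S_u(t) goes to zero, so the population survival curve plateaus at the cure fraction pi rather than at zero.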

8.
Starting from the characterization of extreme-value copulas based on max-stability, large-sample tests of extreme-value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p-values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite-sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimensions two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011 © 2011 Statistical Society of Canada

9.
The class $G^{\rho,\lambda }$ of weighted log-rank tests proposed by Fleming & Harrington [Fleming & Harrington (1991) Counting Processes and Survival Analysis, Wiley, New York] has been widely used in survival analysis and is nowadays the established method for nonparametrically comparing k different survival functions based on right-censored survival data. This paper extends the $G^{\rho,\lambda }$ class to interval-censored data. We first introduce a new general class of rank-based tests, and then show the analogy to the above proposal of Fleming & Harrington. The asymptotic behaviour of the proposed tests is derived using an observed Fisher information approach and a permutation approach. Aiming to make this family of tests interpretable and useful for practitioners, we explain how to interpret different choices of weights and apply the tests to data from a cohort of intravenous drug users at risk for HIV infection. The Canadian Journal of Statistics 40: 501–516; 2012 © 2012 Statistical Society of Canada

10.
Models of infectious disease over contact networks offer a versatile means of capturing heterogeneity in populations during an epidemic. Highly connected individuals tend to be infected at a higher rate early during an outbreak than those with fewer connections. A powerful approach based on the probability generating function of the individual degree distribution exists for modelling the mean field dynamics of outbreaks in such a population. We develop the same idea in a stochastic context, by proposing a comprehensive model for 1-week-ahead incidence counts. Our focus is inferring contact network (and other epidemic) parameters for some common degree distributions, in the case when the network is non-homogeneous 'at random'. Our model is initially set within a susceptible–infectious–removed framework, then extended to the susceptible–infectious–removed–susceptible scenario, and we apply this methodology to influenza A data.

11.
We show that the maximum likelihood estimators (MLEs) of the fixed effects and within-cluster correlation are consistent in a heteroscedastic nested-error regression (HNER) model with completely unknown within-cluster variances under mild conditions. The result implies that the empirical best linear unbiased prediction (EBLUP) method for small area estimation is valid in such a case. We also show that ignoring the heteroscedasticity can lead to inconsistent estimation of the within-cluster correlation and inferior predictive performance. A jackknife measure of uncertainty for the EBLUP is developed under the HNER model. Simulation studies are carried out to investigate the finite-sample performance of the EBLUP and MLE under the HNER model, with comparisons to those under the nested-error regression model in various situations, as well as that of the jackknife measure of uncertainty. The well-known Iowa crops data is used for illustration. The Canadian Journal of Statistics 40: 588–603; 2012 © 2012 Statistical Society of Canada

12.
In any epidemic, there may exist an unidentified subpopulation which might be naturally immune or isolated and who will not be involved in the transmission of the disease. Estimation of key parameters, for example, the basic reproductive number, without accounting for this possibility would underestimate the severity of the epidemic. Here, we propose a procedure to estimate the basic reproductive number ($R_0$) in an epidemic model with an unknown initial number of susceptibles. The infection process is usually not completely observed, but is reconstructed by a kernel-smoothing method under a counting process framework. Simulation is used to evaluate the performance of the estimators for major epidemics. We illustrate the procedure using the Abakaliki smallpox data.

13.
A goodness-of-fit procedure is proposed for parametric families of copulas. The new test statistics are functionals of an empirical process based on the theoretical and sample versions of Spearman's dependence function. Conditions under which this empirical process converges weakly are seen to hold for many families, including the Gaussian, Frank, and generalized Farlie–Gumbel–Morgenstern systems of distributions, as well as the models with singular components described by Durante [Durante (2007) Comptes Rendus Mathématique. Académie des Sciences. Paris, 344, 195–198]. Thanks to a parametric bootstrap method that allows valid p-values to be computed, it is shown empirically that tests based on Cramér–von Mises distances keep their size under the null hypothesis. Simulations attesting to the power of the newly proposed tests, comparisons with competing procedures, and complete analyses of real hydrological and financial data sets are presented. The Canadian Journal of Statistics 37: 80–101; 2009 © 2009 Statistical Society of Canada

14.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior, and a reference prior, and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. When the regression and variance parameters are known, they derive the Jeffreys prior for the correlation parameters; they prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

15.
The authors derive closed-form expressions for the full, profile, conditional, and modified profile likelihood functions for a class of random growth parameter models they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional, and modified profile likelihood functions are maximized over a few parameters to yield a complete set of parameter estimates. In the development of their random growth parameter models the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way, which gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

16.
A new test is proposed for the hypothesis of uniformity on bi-dimensional supports. The procedure is an adaptation of the "distance to boundary test" (DB test) proposed in Berrendero, Cuevas, & Vázquez-Grande (2006). This new version of the DB test, called the DBU test, allows us (as a novel, interesting feature) to deal with the case where the support S of the underlying distribution is unknown. This means that S is not specified in the null hypothesis, so that, in fact, we test the null hypothesis that the underlying distribution is uniform on some support S belonging to a given class ${\cal C}$ . We pay special attention to the case in which ${\cal C}$ is either the class of compact convex supports or the (broader) class of compact λ-convex supports (also called r-convex or α-convex in the literature). The basic idea is to apply the DB test in a sort of plug-in version, where the support S is approximated by using methods of set estimation. The DBU method is analysed from both theoretical and practical points of view, via some asymptotic results and a simulation study, respectively. The Canadian Journal of Statistics 40: 378–395; 2012 © 2012 Statistical Society of Canada

17.
Recurrent event data arise commonly in medical and public health studies. The analysis of such data has received extensive research attention, and various methods have been developed in the literature. Depending on the focus of scientific interest, the methods may be broadly classified as intensity-based counting process methods, mean function-based estimating equation methods, and the analysis of times to events or times between events. These methods and models cover a wide variety of practical applications. However, there is a critical assumption underlying these methods: variables must be correctly measured. Unfortunately, this assumption is frequently violated in practice. It is quite common that some covariates are subject to measurement error, and it is well known that covariate measurement error can substantially distort inference results if it is not properly taken into account. In the literature, there has been extensive research concerning measurement error problems in various settings; with recurrent events, however, there is little discussion on this topic. It is the objective of this paper to address this important issue. We develop inferential methods which account for measurement error in covariates for models with multiplicative intensity functions or rate functions. Both likelihood-based inference and robust inference based on estimating equations are discussed. The Canadian Journal of Statistics 40: 530–549; 2012 © 2012 Statistical Society of Canada

18.
The authors propose to estimate nonlinear small area population parameters by using the empirical Bayes (best) method, based on a nested error model. They focus on poverty indicators as particular nonlinear parameters of interest, but the proposed methodology is applicable to general nonlinear parameters. They use a parametric bootstrap method to estimate the mean squared error of the empirical best estimators. They also study small sample properties of these estimators by model-based and design-based simulation studies. Results show large reductions in mean squared error relative to direct area-specific estimators and other estimators obtained by "simulated" censuses. The authors also apply the proposed method to estimate poverty incidences and poverty gaps in Spanish provinces by gender, with mean squared errors estimated by the mentioned parametric bootstrap method. For the Spanish data, results show a significant reduction in the coefficient of variation of the proposed empirical best estimators over direct estimators for practically all domains. The Canadian Journal of Statistics 38: 369–385; 2010 © 2010 Statistical Society of Canada

19.
In this article the author investigates empirical-likelihood-based inference for the parameters of the varying-coefficient single-index model (VCSIM). Unlike in the usual cases, without bias correction the asymptotic distribution of the empirical likelihood ratio does not attain the standard chi-squared distribution. Therefore, a bias-corrected empirical likelihood method is employed to construct confidence regions (intervals) for the regression parameters. Compared with regions based on normal approximation, these have two advantages: (1) they do not impose prior constraints on the shape of the regions; (2) they do not require the construction of a pivotal quantity, and the regions are range preserving and transformation respecting. A simulation study is undertaken to compare the empirical likelihood with the normal approximation in terms of coverage accuracies and average areas/lengths of confidence regions/intervals. A real data example is given to illustrate the proposed approach. The Canadian Journal of Statistics 38: 434–452; 2010 © 2010 Statistical Society of Canada

20.
Autoregressive models with switching regime are a frequently used class of nonlinear time series models, which are popular in finance, engineering, and other fields. We consider linear switching autoregressions in which the intercept and variance possibly switch simultaneously, while the autoregressive parameters are structural and hence the same in all states, and we propose quasi-likelihood-based tests for a regime switch in this class of models. Our motivation is from financial time series, where one expects states with high volatility and low mean together with states with low volatility and higher mean. We investigate the performance of our tests in a simulation study, and give an application to a series of IBM monthly stock returns. The Canadian Journal of Statistics 40: 427–446; 2012 © 2012 Statistical Society of Canada
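The model class described in this abstract, a two-state switching AR in which the intercept and innovation variance switch with a hidden Markov state while the autoregressive coefficient is common to both states, can be illustrated with a small simulator. All numerical settings below are made-up defaults for illustration, not estimates from the IBM series, and the quasi-likelihood test itself is not implemented here.

```python
import numpy as np

def simulate_switching_ar(n, phi=0.5, mu=(0.8, -0.2), sigma=(0.5, 2.0),
                          P=((0.95, 0.05), (0.10, 0.90)), rng=None):
    """Simulate a two-state switching AR(1).

    The hidden state s_t follows a Markov chain with transition matrix P.
    The intercept mu[s_t] and noise scale sigma[s_t] switch with the state,
    while the AR coefficient phi is structural (the same in both states).
    """
    rng = np.random.default_rng(rng)
    P = np.asarray(P)
    s = np.empty(n, dtype=int)
    y = np.empty(n)
    s[0] = 0
    y[0] = mu[0] + sigma[0] * rng.standard_normal()
    for t in range(1, n):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        y[t] = mu[s[t]] + phi * y[t - 1] + sigma[s[t]] * rng.standard_normal()
    return y, s
```

With the defaults above, state 0 plays the role of the low-volatility/higher-mean regime and state 1 the high-volatility/low-mean regime that the abstract motivates for financial returns.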
