Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Recurrent event data arise commonly in medical and public health studies. The analysis of such data has received extensive research attention and various methods have been developed in the literature. Depending on the focus of scientific interest, the methods may be broadly classified as intensity-based counting process methods, mean function-based estimating equation methods, and the analysis of times to events or times between events. These methods and models cover a wide variety of practical applications. However, a critical assumption underlies these methods: variables need to be correctly measured. Unfortunately, this assumption is frequently violated in practice; it is quite common for some covariates to be subject to measurement error. It is well known that covariate measurement error can substantially distort inference results if it is not properly taken into account. Although there has been extensive research on measurement error problems in various settings, there is little discussion of this topic for recurrent events. The objective of this paper is to address this important issue. We develop inferential methods which account for measurement error in covariates for models with multiplicative intensity functions or rate functions. Both likelihood-based inference and robust inference based on estimating equations are discussed. The Canadian Journal of Statistics 40: 530–549; 2012 © 2012 Statistical Society of Canada
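The paper's likelihood and estimating-equation corrections are not reproduced here; as a hedged illustration of why a correction matters at all, the sketch below uses regression calibration (a standard, simpler technique, not the paper's method) in a Poisson rate model, assuming the measurement-error variance is known. All parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0.0, 1.0, n)             # true covariate (never observed)
w = x + rng.normal(0.0, 0.8, n)         # error-prone surrogate
y = rng.poisson(np.exp(0.5 + 1.0 * x))  # event counts, true slope = 1.0

def poisson_slope(z):
    return sm.GLM(y, sm.add_constant(z), family=sm.families.Poisson()).fit().params[1]

# Naive analysis: the slope is attenuated toward zero.
beta_naive = poisson_slope(w)

# Regression calibration: replace w by E[x | w], assuming Var(U) = 0.8^2 is known.
var_u = 0.8 ** 2
x_hat = w.mean() + (1 - var_u / w.var()) * (w - w.mean())
beta_rc = poisson_slope(x_hat)

print(f"naive {beta_naive:.3f}, calibrated {beta_rc:.3f} (true 1.0)")
```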

2.
The Lagrange Multiplier (LM) test is one of the principal tools to detect ARCH and GARCH effects in financial data analysis. However, when the underlying data are non-normal, which is often the case in practice, the asymptotic LM test, based on the χ2-approximation of critical values, is known to perform poorly, particularly for small and moderate sample sizes. In this paper we propose to employ two re-sampling techniques to find critical values of the LM test, namely permutation and bootstrap. We establish exactness of the permutation LM test and asymptotic correctness of the bootstrap LM test. Our numerical studies indicate that the proposed re-sampling algorithms significantly improve the size and power of the LM test in both skewed and heavy-tailed processes. We also illustrate our new approaches with an application to the analysis of the Euro/USD currency exchange rates and the German stock index. The Canadian Journal of Statistics 40: 405–426; 2012 © 2012 Statistical Society of Canada
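A minimal sketch of the permutation variant: under the null of no ARCH effects the series is exchangeable, so shuffling it yields a reference distribution for the LM statistic. The statistic below is the standard Engle form (T times R² from regressing squared values on their lags); the lag order and the number of permutations are illustrative choices, not the paper's settings.

```python
import numpy as np

def arch_lm_stat(e, q=1):
    """Engle's LM statistic: T * R^2 from regressing e_t^2 on q of its lags."""
    e2 = e ** 2
    y = e2[q:]
    X = np.column_stack([np.ones_like(y)] +
                        [e2[q - j: len(e2) - j] for j in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return len(y) * (1.0 - resid.var() / y.var())

def permutation_arch_lm(e, q=1, n_perm=999, seed=0):
    """Permutation p-value: under H0 (i.i.d., no ARCH) the series is
    exchangeable, so shuffling destroys any volatility clustering."""
    rng = np.random.default_rng(seed)
    obs = arch_lm_stat(e, q)
    perm = np.array([arch_lm_stat(rng.permutation(e), q) for _ in range(n_perm)])
    return obs, (1 + np.sum(perm >= obs)) / (n_perm + 1)

rng = np.random.default_rng(1)
e = rng.standard_t(df=4, size=500)   # heavy-tailed i.i.d. noise: H0 is true
print(permutation_arch_lm(e, q=2))   # p-value should not be systematically small
```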

3.
We propose a new type of multivariate statistical model that permits non-Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair-copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum-likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non-Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada

4.
Panel count data occur in many fields and a number of approaches have been developed. However, most of these approaches are for situations where there is no terminal event and the observation process is independent of the underlying recurrent event process unconditionally or conditional on the covariates. In this paper, we discuss a more general situation where the observation process is informative and there exists a terminal event which precludes further occurrence of the recurrent events of interest. For the analysis, a semiparametric transformation model is presented for the mean function of the underlying recurrent event process among survivors. To estimate the regression parameters, an estimating equation approach is proposed in which an inverse survival probability weighting technique is used. The asymptotic distribution of the proposed estimates is provided. Simulation studies are conducted and suggest that the proposed approach works well for practical situations. An illustrative example is provided. The Canadian Journal of Statistics 41: 174–191; 2013 © 2012 Statistical Society of Canada

5.
For binomial data analysis, many methods based on empirical Bayes interpretations have been developed, in which a variance-stabilizing transformation and a normality assumption are usually required. To achieve the greatest model flexibility, we conduct nonparametric Bayesian inference for binomial data and employ a special nonparametric Bayesian prior, the Bernstein–Dirichlet process (BDP), in the hierarchical Bayes model for the data. The BDP is a special Dirichlet process (DP) mixture based on beta distributions, and the posterior distribution resulting from it has a smooth density defined on [0, 1]. We examine two Markov chain Monte Carlo procedures for simulating from the resulting posterior distribution, and compare their convergence rates and computational efficiency. In contrast to existing results for posterior consistency based on direct observations, the posterior consistency of the BDP, given indirect binomial data, is established. We study shrinkage effects and the robustness of the BDP-based posterior estimators in comparison with several other empirical and hierarchical Bayes estimators, and we illustrate through examples that the BDP-based nonparametric Bayesian estimate is more robust to sample variation and tends to have a smaller estimation error than those based on the DP prior. In certain settings, the new estimator can also beat Stein's estimator, Efron and Morris's limited-translation estimator, and many other existing empirical Bayes estimators. The Canadian Journal of Statistics 40: 328–344; 2012 © 2012 Statistical Society of Canada
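To convey what the BDP prior looks like, here is a hedged sketch that draws one random density from a Bernstein–Dirichlet-type prior via truncated stick-breaking. The truncation level, concentration parameter, and the fixed degree k are illustrative simplifications (the actual BDP also places a prior on k).

```python
import numpy as np
from scipy.stats import beta as beta_dist

def bdp_prior_draw(k, alpha=1.0, n_atoms=500, seed=0):
    """One random density from a Bernstein-Dirichlet-type prior:
    approximate F ~ DP(alpha, Uniform[0,1]) by truncated stick-breaking,
    then set w_j = F(j/k) - F((j-1)/k) and mix Beta(j, k-j+1) densities."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, n_atoms)
    w_sticks = v * np.concatenate([[1.0], np.cumprod(1 - v)[:-1]])
    atoms = rng.uniform(0, 1, n_atoms)
    w = np.array([w_sticks[(atoms > (j - 1) / k) & (atoms <= j / k)].sum()
                  for j in range(1, k + 1)])
    w = w / w.sum()                      # renormalize after truncation
    return lambda x: sum(w[j - 1] * beta_dist.pdf(x, j, k - j + 1)
                         for j in range(1, k + 1))

f = bdp_prior_draw(k=10)
xg = np.linspace(0.005, 0.995, 199)
print(f(xg).mean())   # close to 1: the random density integrates to one
```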

6.
In this paper, we consider a regression analysis for a missing data problem in which the variables of primary interest are unobserved under a general biased sampling scheme, an outcome-dependent sampling (ODS) design. We propose a semiparametric empirical likelihood method for assessing the association between a continuous outcome response and unobservable factors of interest. Simulation study results show that the ODS design can produce more efficient estimators than a simple random design of the same sample size. We demonstrate the proposed approach with a data set from an environmental study of genetic effects on human lung function in COPD smokers. The Canadian Journal of Statistics 40: 282–303; 2012 © 2012 Statistical Society of Canada

7.
Autoregressive models with switching regime are a frequently used class of nonlinear time series models, which are popular in finance, engineering, and other fields. We consider linear switching autoregressions in which the intercept and variance possibly switch simultaneously, while the autoregressive parameters are structural and hence the same in all states, and we propose quasi-likelihood-based tests for a regime switch in this class of models. Our motivation is from financial time series, where one expects states with high volatility and low mean together with states with low volatility and higher mean. We investigate the performance of our tests in a simulation study, and give an application to a series of IBM monthly stock returns. The Canadian Journal of Statistics 40: 427–446; 2012 © 2012 Statistical Society of Canada
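A small simulation sketch of the model class: the intercept and innovation standard deviation switch with a two-state Markov chain, while the AR coefficient is structural. The transition matrix and state parameters are hypothetical values chosen to mimic the high-volatility/low-mean versus low-volatility/higher-mean pattern mentioned above.

```python
import numpy as np

def simulate_switching_ar(T, phi, P, mu, sigma, seed=0):
    """Linear switching AR(1): intercept mu[s] and innovation s.d.
    sigma[s] follow a two-state Markov chain with transition matrix P,
    while the AR coefficient phi is structural (same in both states)."""
    rng = np.random.default_rng(seed)
    s = np.zeros(T, dtype=int)
    y = np.zeros(T)
    for t in range(1, T):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        y[t] = mu[s[t]] + phi * y[t - 1] + sigma[s[t]] * rng.standard_normal()
    return y, s

# Hypothetical regimes: low-volatility/higher-mean vs. high-volatility/low-mean.
P = np.array([[0.95, 0.05], [0.10, 0.90]])
y, s = simulate_switching_ar(1000, phi=0.3, P=P,
                             mu=np.array([0.5, -0.2]),
                             sigma=np.array([0.5, 2.0]))
print(y[s == 0].std(), y[s == 1].std())   # regime-dependent volatility
```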

8.
In competing risks models, the joint distribution of the event times is not identifiable even when the margins are fully known, which has been referred to as the "identifiability crisis in competing risks analysis" (Crowder, 1991). We model the dependence between the event times by an unknown copula and show that identification is actually possible within many frequently used families of copulas. The result is then extended to the case where one margin is unknown. The Canadian Journal of Statistics 41: 291–303; 2013 © 2013 Statistical Society of Canada

9.
We use the two-state Markov regime-switching model to explain the behaviour of the WTI crude-oil spot prices from January 1986 to February 2012, and we investigate estimation methods based on the composite likelihood and the full likelihood. We find that the composite-likelihood approach better captures the general structural changes in world oil prices. The two-state Markov regime-switching model based on the composite-likelihood approach closely depicts the cycles of the two postulated states, fall and rise, which persist on average for 8 and 15 months, respectively; this matches the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe that this information can be useful for financial officers working in related areas. The model based on the full-likelihood approach was less satisfactory, which we attribute to the two-state Markov regime-switching model being too rigid and overly simplistic. In comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes, so model violations elsewhere do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
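A hedged sketch of the pairwise composite likelihood for a two-state Gaussian Markov-switching model: only the joint density of adjacent observations is specified, which is exactly the robustness property described above. The parameterisation, starting values, and optimizer are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_composite_loglik(theta, x):
    """Negative pairwise composite log-likelihood: sum over t of
    -log f(x_t, x_{t+1}), with the hidden states marginalized out."""
    p11 = 1 / (1 + np.exp(-theta[0]))   # stay-probabilities via logit
    p22 = 1 / (1 + np.exp(-theta[1]))
    mu = theta[2:4]
    sig = np.exp(theta[4:6])            # log-scale keeps s.d.'s positive
    P = np.array([[p11, 1 - p11], [1 - p22, p22]])
    pi0 = np.array([1 - p22, 1 - p11])
    pi0 = pi0 / pi0.sum()               # stationary distribution of the chain
    d = np.stack([norm.pdf(x, mu[i], sig[i]) for i in range(2)])   # 2 x T
    # f(x_t, x_{t+1}) = sum_{i,j} pi_i P_ij phi_i(x_t) phi_j(x_{t+1})
    pair = np.einsum("i,ij,it,jt->t", pi0, P, d[:, :-1], d[:, 1:])
    return -np.sum(np.log(pair))

# Hypothetical "fall" and "rise" regimes on simulated price changes.
rng = np.random.default_rng(2)
T, s = 3000, np.zeros(3000, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=[0.9, 0.1] if s[t - 1] == 0 else [0.2, 0.8])
x = np.where(s == 0, 0.5, -1.0) + np.where(s == 0, 1.0, 2.0) * rng.standard_normal(T)
fit = minimize(neg_composite_loglik, np.array([1.0, 1.0, 0.3, -0.5, 0.1, 0.5]),
               args=(x,), method="Nelder-Mead", options={"maxiter": 10000})
print(fit.x[2:4], np.exp(fit.x[4:6]))   # means and s.d.'s, up to state relabelling
```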

10.
The authors develop default priors for the Gaussian random field model that includes a nugget parameter accounting for the effects of microscale variations and measurement errors. They present the independence Jeffreys prior, the Jeffreys-rule prior and a reference prior, and study posterior propriety of these and related priors. They show that the uniform prior for the correlation parameters yields an improper posterior. In the case of known regression and variance parameters, they derive the Jeffreys prior for the correlation parameters; they prove posterior propriety and show that the predictive distributions at ungauged locations have finite variance. Moreover, they show that the proposed priors have good frequentist properties, except for those based on the marginal Jeffreys-rule prior for the correlation parameters, and illustrate their approach by analyzing a dataset of zinc concentrations along the river Meuse. The Canadian Journal of Statistics 40: 304–327; 2012 © 2012 Statistical Society of Canada

11.
Clinical trials usually involve efficiency and ethical objectives, such as maximizing power and minimizing the total number of failures, and interim analysis is now a standard technique for achieving them. Randomized urn models have been extensively studied in the literature. In this paper, we propose to perform interim analysis on clinical trials based on urn models and study its properties. We show that the urn composition, the allocation of patients and the parameter estimators can be approximated by a joint Gaussian process. Consequently, the sequential test statistics of the proposed procedure converge to a Brownian motion in distribution and asymptotically satisfy the canonical joint distribution defined in Jennison & Turnbull (2000, Group Sequential Methods with Applications to Clinical Trials, Chapman and Hall/CRC). These results provide a solid foundation and open the door to performing interim analysis on randomized clinical trials with urn models in practice. Furthermore, we demonstrate our proposal through examples and simulations by applying sequential monitoring and stochastic curtailment techniques. The Canadian Journal of Statistics 40: 550–568; 2012 © 2012 Statistical Society of Canada
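As a concrete member of the randomized-urn family, the following sketch simulates a play-the-winner urn; the success probabilities and initial composition are hypothetical. An interim analysis would compute a test statistic at intermediate patient counts and compare it with group-sequential boundaries, per the Gaussian approximation above.

```python
import numpy as np

def play_the_winner(n_patients, p_success=(0.7, 0.4), seed=0):
    """Randomized play-the-winner urn: draw a ball to assign a treatment
    arm; a success adds a ball of the same colour, a failure adds one of
    the opposite colour, so allocation drifts toward the better arm."""
    rng = np.random.default_rng(seed)
    urn = np.array([1.0, 1.0])          # initial urn composition
    arms = np.zeros(n_patients, dtype=int)
    successes = np.zeros(n_patients, dtype=bool)
    for i in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())
        success = rng.random() < p_success[arm]
        urn[arm if success else 1 - arm] += 1
        arms[i], successes[i] = arm, success
    return arms, successes, urn

arms, successes, urn = play_the_winner(500)
print("share allocated to the better arm:", (arms == 0).mean())
# An interim look after, say, 250 patients would compare the sequential
# test statistic for H0: p0 = p1 against group-sequential boundaries.
```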

12.
The penalized spline is a popular method for function estimation when the assumption of "smoothness" is valid. In this paper, methods for estimation and inference are proposed using penalized splines under additional constraints of shape, such as monotonicity or convexity. The constrained penalized spline estimator is shown to have the same convergence rates as the corresponding unconstrained penalized spline, although in practice the squared error loss is typically smaller for the constrained versions. The penalty parameter may be chosen with generalized cross-validation, which also provides a method for determining whether the shape restrictions hold. The method is not a formal hypothesis test, but is shown to have good large-sample properties, and simulations show that it compares well with existing tests for monotonicity. Extensions to the partial linear model, the generalized regression model, and the varying coefficient model are given, and examples demonstrate the utility of the methods. The Canadian Journal of Statistics 40: 190–206; 2012 © 2012 Statistical Society of Canada
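A minimal sketch of a monotone-constrained penalized spline: a B-spline with nondecreasing coefficients is itself nondecreasing, so the shape restriction becomes a set of linear inequality constraints on the penalized least-squares problem. The basis size, penalty value, and solver below are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(10 * x) + rng.normal(0, 0.3, 200)   # monotone truth + noise

# Cubic B-spline basis on [0, 1] with clamped knots.
k, n_int = 3, 10
t = np.concatenate([np.zeros(k + 1), np.linspace(0, 1, n_int + 2)[1:-1], np.ones(k + 1)])
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])
D = np.diff(np.eye(n_basis), n=2, axis=0)        # second-difference penalty matrix
lam = 10.0                                       # illustrative; use GCV in practice

def objective(beta):
    r = y - B @ beta
    return r @ r + lam * np.sum((D @ beta) ** 2)

# Nondecreasing B-spline coefficients => nondecreasing fitted curve.
cons = [{"type": "ineq", "fun": lambda b: np.diff(b)}]
fit = minimize(objective, np.linspace(y.min(), y.max(), n_basis),
               constraints=cons, method="SLSQP")
fhat = B @ fit.x
print("fit is monotone:", bool(np.all(np.diff(fhat) >= -1e-8)))
```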

13.
The class of joint mean-covariance models uses the modified Cholesky decomposition of the within-subject covariance matrix in order to arrive at an unconstrained, statistically meaningful reparameterisation. The new parameterisation of the covariance matrix has two sets of parameters that separately describe the variances and the correlations; together with the mean (regression) parameters, these models therefore have three distinct sets of parameters. To alleviate the inefficient estimation and downward bias in the variance estimates inherent in maximum likelihood estimation, the usual REML procedure adjusts for the degrees of freedom lost due to the estimation of the mean parameters. Because of the parameterisation of joint mean-covariance models, it is possible to adapt the usual REML procedure to estimate the variance (correlation) parameters by taking into account the degrees of freedom lost by the estimation of both the mean and the correlation (variance) parameters. To this end, we propose adjustments to the estimation procedures based on the modified and adjusted profile likelihoods. The methods are illustrated by an application to a real data set and by simulation studies. The Canadian Journal of Statistics 40: 225–242; 2012 © 2012 Statistical Society of Canada
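The reparameterisation itself is easy to exhibit. The sketch below computes the modified Cholesky decomposition T Sigma T' = D for an AR(1)-type covariance: the negated sub-diagonal entries of T are autoregressive coefficients of each response on its predecessors, and the diagonal of D holds the innovation variances, both unconstrained. The example covariance is hypothetical.

```python
import numpy as np

def modified_cholesky(sigma):
    """Modified Cholesky decomposition T @ Sigma @ T.T = D, with T unit
    lower triangular (autoregressive coefficients, negated, below the
    diagonal) and D diagonal (innovation variances)."""
    L = np.linalg.cholesky(sigma)                  # Sigma = L L'
    T = np.diag(np.diag(L)) @ np.linalg.inv(L)     # unit lower triangular
    D = np.diag(np.diag(L) ** 2)
    return T, D

# Example: AR(1)-type covariance for 4 repeated measures.
rho = 0.6
sigma = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
T, D = modified_cholesky(sigma)
print(np.allclose(T @ sigma @ T.T, D))   # True
print(-T[1, 0])                          # AR coefficient of y2 on y1: 0.6
```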

14.
We consider the problem of selecting variables in factor analysis models. The $L_1$ regularization procedure is introduced to perform automatic variable selection. In the factor analysis model, each variable is controlled by multiple factors when there is more than one underlying factor. We treat the parameters corresponding to the multiple factors as grouped parameters and apply the group lasso. Furthermore, the weight of the group lasso penalty is modified to obtain appropriate estimates and improve the performance of variable selection. Crucial issues in this modeling procedure include the selection of the number of factors and of a regularization parameter; choosing these parameters can be viewed as a model selection and evaluation problem. We derive a model selection criterion for evaluating the factor analysis model via the weighted group lasso. Monte Carlo simulations are conducted to investigate the effectiveness of the proposed procedure, and a real data example is given to illustrate it. The Canadian Journal of Statistics 40: 345–361; 2012 © 2012 Statistical Society of Canada
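The computational core of a (weighted) group lasso is block soft-thresholding: each variable's whole row of factor loadings is shrunk or zeroed as a group, so the variable is dropped from the model across all factors at once. The sketch below shows only this proximal step, with hypothetical loadings and weights; a full estimation algorithm would iterate it with gradient steps on the factor-analysis likelihood.

```python
import numpy as np

def group_soft_threshold(loadings, lam, weights=None):
    """Block soft-thresholding, the proximal operator of the weighted
    group lasso penalty: row g becomes max(0, 1 - lam*w_g/||row_g||) * row_g."""
    p, m = loadings.shape
    if weights is None:
        weights = np.full(p, np.sqrt(m))   # usual group-size weighting
    norms = np.linalg.norm(loadings, axis=1)
    keep = norms > 0
    scale = np.clip(1 - lam * weights / np.where(keep, norms, 1.0), 0, None)
    out = np.zeros_like(loadings)
    out[keep] = loadings[keep] * scale[keep, None]
    return out

Lambda = np.array([[0.90, 0.10],    # hypothetical p x m loading matrix
                   [0.05, 0.02],
                   [0.40, -0.60]])
print(group_soft_threshold(Lambda, lam=0.1))
# The second variable (tiny loadings on both factors) is zeroed as a group.
```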

15.
It is known that the profile empirical likelihood method based on estimating equations is computationally intensive when the number of nuisance parameters is large. Recently, Li, Peng, & Qi (2011) proposed a jackknife empirical likelihood method for constructing confidence regions for the parameters of interest by estimating the nuisance parameters separately. However, when the estimators for the nuisance parameters have no explicit formula, the computation of the jackknife empirical likelihood method is still intensive. In this paper, an approximate jackknife empirical likelihood method is proposed to reduce the computation in the jackknife empirical likelihood method when the nuisance parameters cannot be estimated explicitly. A simulation study confirms the advantage of the new method. The Canadian Journal of Statistics 40: 110–123; 2012 © 2012 Statistical Society of Canada
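A hedged sketch of the basic jackknife empirical likelihood recipe that this paper accelerates: form jackknife pseudo-values of an estimator, then apply standard empirical likelihood to them as if they were i.i.d. The coefficient of variation, sample size, and grid below are illustrative choices, not the paper's setting.

```python
import numpy as np
from scipy.optimize import brentq

def el_logratio(v, theta):
    """-2 log empirical likelihood ratio for the mean of v at theta."""
    z = v - theta
    if z.max() <= 0 or z.min() >= 0:
        return np.inf                      # theta outside the convex hull
    g = lambda t: np.sum(z / (1 + t * z))  # Lagrange-multiplier equation
    t = brentq(g, -1 / z.max() + 1e-8, -1 / z.min() - 1e-8)
    return 2 * np.sum(np.log1p(t * z))

def jackknife_pseudovalues(x, estimator):
    n = len(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * estimator(x) - (n - 1) * loo

# 95% JEL confidence interval for the coefficient of variation.
rng = np.random.default_rng(4)
x = rng.exponential(2.0, 200)
cv = lambda s: s.std(ddof=1) / s.mean()
V = jackknife_pseudovalues(x, cv)
grid = np.linspace(V.mean() - 0.3, V.mean() + 0.3, 301)
ci = [th for th in grid if el_logratio(V, th) <= 3.841]   # chi^2_1, 0.95
print(min(ci), max(ci))                                   # true CV is 1
```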

16.
A new test is proposed for the hypothesis of uniformity on bi-dimensional supports. The procedure is an adaptation of the "distance to boundary test" (DB test) proposed in Berrendero, Cuevas, & Vázquez-Grande (2006). This new version of the DB test, called the DBU test, allows us (as a novel, interesting feature) to deal with the case where the support S of the underlying distribution is unknown. This means that S is not specified in the null hypothesis, so that, in fact, we test the null hypothesis that the underlying distribution is uniform on some support S belonging to a given class ${\cal C}$. We pay special attention to the case that ${\cal C}$ is either the class of compact convex supports or the (broader) class of compact λ-convex supports (also called r-convex or α-convex in the literature). The basic idea is to apply the DB test in a plug-in version, where the support S is approximated using methods of set estimation. The DBU method is analysed from both theoretical and practical points of view, via asymptotic results and a simulation study, respectively. The Canadian Journal of Statistics 40: 378–395; 2012 © 2012 Statistical Society of Canada

17.
Positive quadrant dependence is a specific dependence structure that is of practical importance in, for example, modelling dependencies in insurance and actuarial science. This dependence structure imposes a constraint on the copula function, and our interest in this paper is to test for positive quadrant dependence. One way to assess the distribution of the test statistics under the null hypothesis of positive quadrant dependence is to resample from a constrained copula, which requires constrained estimation of a copula function. We show that this use of resampling under a constrained copula considerably improves the power of existing testing procedures. We propose two resampling procedures, one based on parametric constrained copula estimation and one relying on nonparametric estimation of a positive quadrant dependence copula, and discuss their properties. The finite-sample performances of the resulting testing procedures are evaluated via a simulation study that also includes comparisons with existing tests. Finally, a data set of Danish fire insurance claims is tested for positive quadrant dependence. The Canadian Journal of Statistics 41: 36–64; 2013 © 2012 Statistical Society of Canada
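For intuition, here is a sketch of one natural test statistic: PQD means C(u,v) >= uv for all (u,v), so the largest positive deviation of uv above the empirical copula measures evidence against PQD. This is only the statistic; the constrained-copula resampling discussed above would be used to calibrate it.

```python
import numpy as np

def pqd_statistic(x, y):
    """sup-type statistic against PQD: large positive values of
    sqrt(n) * max(u*v - C_n(u,v)) over the sample grid indicate a
    violation of C(u,v) >= u*v."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # pseudo-observations
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    # empirical copula evaluated at the sample points
    C = np.mean((u[None, :] <= u[:, None]) & (v[None, :] <= v[:, None]), axis=1)
    return np.sqrt(n) * np.max(u * v - C)

rng = np.random.default_rng(5)
x = rng.normal(size=300)
print(pqd_statistic(x, 0.7 * x + 0.7 * rng.normal(size=300)))    # small: PQD plausible
print(pqd_statistic(x, -0.7 * x + 0.7 * rng.normal(size=300)))   # large: PQD violated
```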

18.
Coarse data is a general type of incomplete data that includes grouped data, censored data, and missing data. Likelihood-based estimation with coarse data is challenging because the likelihood function is in integral form. The Monte Carlo EM algorithm of Wei & Tanner (1990, Journal of the American Statistical Association, 85, 699–704) is adapted to compute the maximum likelihood estimator in the presence of coarse data. Stochastic coarse data is also covered, and the computation can be implemented using the parametric fractional imputation method proposed by Kim (2011, Biometrika, 98, 119–132). Results from a limited simulation study are presented, and the proposed method is applied to the Korean Longitudinal Study of Aging (KLoSA). The Canadian Journal of Statistics 40: 604–618; 2012 © 2012 Statistical Society of Canada
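A minimal Monte Carlo EM sketch for a toy coarse-data problem (interval-censored exponential lifetimes), not the KLoSA analysis: the E-step simulates each observation from its conditional (truncated) distribution given the observed interval, and the M-step is the closed-form complete-data update.

```python
import numpy as np

def mcem_exponential(intervals, n_draws=2000, n_iter=30, seed=0):
    """MCEM for an exponential rate when each X_i is only known to lie
    in [L_i, R_i] (coarse data). E-step: simulate from the exponential
    truncated to the interval; M-step: closed-form rate update."""
    rng = np.random.default_rng(seed)
    L, R = intervals[:, 0], intervals[:, 1]
    rate = 1.0
    for _ in range(n_iter):
        a, b = 1 - np.exp(-rate * L), 1 - np.exp(-rate * R)
        u = rng.uniform(a, b, size=(n_draws, len(L)))   # inverse-CDF draws
        x = -np.log1p(-u) / rate                        # truncated Exp(rate)
        rate = len(L) / x.mean(axis=0).sum()            # n / sum_i E[X_i | data]
    return rate

rng = np.random.default_rng(6)
x_true = rng.exponential(1 / 2.0, 400)                  # true rate 2.0
grid = np.arange(0, 10.5, 0.5)
idx = np.clip(np.searchsorted(grid, x_true), 1, len(grid) - 1)
intervals = np.column_stack([grid[idx - 1], grid[idx]]) # coarsen onto the grid
print(mcem_exponential(intervals))                      # roughly 2.0
```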

19.
In a multilevel model for complex survey data, the weight-inflated estimators of variance components can be biased. We propose a resampling method to correct this bias. The performance of the bias corrected estimators is studied through simulations using populations generated from a simple random effects model. The simulations show that, without lowering the precision, the proposed procedure can reduce the bias of the estimators, especially for designs that are both informative and have small cluster sizes. Application of these resampling procedures to data from an artificial workplace survey provides further evidence for the empirical value of this method. The Canadian Journal of Statistics 40: 150–171; 2012 © 2012 Statistical Society of Canada

20.
A computational problem in many fields is to estimate multiple integrals and expectations simultaneously, assuming that the data are generated by some Monte Carlo algorithm. We consider two scenarios in which draws are simulated from multiple distributions, with the normalizing constants of those distributions either known or unknown. For each scenario, existing estimators can be classified as using individual samples separately or using all the samples jointly; the latter pooled-sample estimators are statistically more efficient but computationally more costly to evaluate than the separate-sample estimators. We develop a cluster-sample approach to obtain computationally effective estimators once the draws are generated in each scenario: we divide all the samples into mutually exclusive clusters and combine the samples from each cluster separately, and we further exploit a relationship between estimators based on samples from different clusters to achieve variance reduction. The resulting estimators, compared with the pooled-sample estimators, typically yield similar statistical efficiency at reduced computational cost. We illustrate the value of the new approach with two examples, an Ising model and a censored Gaussian random field. The Canadian Journal of Statistics 41: 151–173; 2013 © 2012 Statistical Society of Canada
