Similar Documents
20 similar documents found
1.
This article presents results concerning the performance of both single-equation and system panel cointegration tests and estimators. The study considers the tests developed in Pedroni (1999, 2004), Westerlund (2005), Larsson et al. (2001), and Breitung (2005) and the estimators developed in Phillips and Moon (1999), Pedroni (2000), Kao and Chiang (2000), Mark and Sul (2003), Pedroni (2001), and Breitung (2005). We study the impact of stable autoregressive roots approaching the unit circle, of I(2) components, of short-run cross-sectional correlation, and of cross-unit cointegration on the performance of the tests and estimators. The data are simulated from three-dimensional individual-specific VAR systems with cointegrating ranks varying from zero to two for fourteen different panel dimensions. The usual specifications of deterministic components are considered.
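As a rough illustration of the kind of data-generating process such Monte Carlo designs rest on, the following Python sketch (our own simplified bivariate setup, not the article's three-dimensional VAR systems) pairs, for every cross-section unit, an I(1) regressor with a stationary AR(1) equilibrium error; pushing the assumed root rho toward one mimics a stable autoregressive root approaching the unit circle.

# Minimal sketch (not the paper's exact DGP): simulate a cointegrated panel in
# which each unit i follows y_it = beta * x_it + u_it, with x_it a random walk
# and u_it a stationary AR(1) error whose root rho controls how close the
# system is to losing cointegration.
import numpy as np

def simulate_cointegrated_panel(N=20, T=100, beta=1.0, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    Y = np.empty((N, T))
    X = np.empty((N, T))
    for i in range(N):
        eps_x = rng.standard_normal(T)
        eps_u = rng.standard_normal(T)
        x = np.cumsum(eps_x)                 # I(1) regressor
        u = np.zeros(T)
        for t in range(1, T):                # stationary AR(1) equilibrium error
            u[t] = rho * u[t - 1] + eps_u[t]
        X[i], Y[i] = x, beta * x + u
    return Y, X

Y, X = simulate_cointegrated_panel(rho=0.9)   # rho near 1: root approaching the unit circle
print(Y.shape, X.shape)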

2.
An increasing number of contemporary datasets are high dimensional. Applications require these datasets be screened (or filtered) to select a subset for further study. Multiple testing is the standard tool in such applications, although alternatives have begun to be explored. In order to assess the quality of selection in these high-dimensional contexts, Cui and Wilson (2008b) proposed two viable methods of calculating the probability that any such selection is correct (PCS). PCS thereby serves as a measure of the quality of competing statistics used for selection. The first simulation study of this article investigates the two PCS statistics of the above article. It shows that in the high-dimensional case PCS can be accurately estimated and is robust under certain conditions. The second simulation study investigates a nonparametric estimator of PCS.
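The sketch below illustrates the quantity PCS measures by brute-force Monte Carlo (it is not the Cui-Wilson estimators themselves): k normal populations with hypothetical means mu, the d populations with the largest sample means are selected, and PCS is estimated as the fraction of replications in which the selected set equals the true best set.

# Minimal sketch of what PCS measures (not the Cui-Wilson estimators): simulate
# k normal populations, select the d with the largest sample means, and estimate
# the probability that the selected set is exactly the true best set.
import numpy as np

def estimate_pcs(mu, d=10, n=5, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(mu)
    true_best = set(np.argsort(mu)[-d:])     # indices of the d largest true means
    hits = 0
    for _ in range(reps):
        xbar = rng.normal(loc=mu, scale=1.0, size=(n, k)).mean(axis=0)
        selected = set(np.argsort(xbar)[-d:])
        hits += (selected == true_best)
    return hits / reps

mu = np.concatenate([np.full(10, 1.5), np.zeros(990)])   # 10 "true" signals among 1000
print(estimate_pcs(mu))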

3.
4.
The two well-known and widely used multinomial selection procedures, Bechhofer, Elmaghraby, and Morse (BEM) and all vector comparisons (AVC), are critically compared in applications related to simulation optimization problems.

We studied two configurations of population probability distributions: in both, the best system has the greatest probability p_i of yielding the largest value of the performance measure, but it either does or does not also have the largest expected performance measure.

Our simulation results clearly show that neither procedure outperforms the other in all situations. The user must take into consideration the complexity of the simulations and the distributional properties of the performance measure when deciding which procedure to employ.

An important finding was that AVC does not work in populations in which the best system has the greatest probability p_i of yielding the largest value of the performance measure but does not have the largest expected performance measure.
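A toy numerical configuration of this second kind (our own illustration, not the paper's experimental design) is easy to write down and check by simulation: system 1 yields the value 100 with probability 0.6 and 0 otherwise (mean 60), while system 2 always yields 70.

# Toy configuration of the second case: system 1 most often yields the largest
# observation, yet system 2 has the larger expected performance measure.
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000
sys1 = rng.choice([100.0, 0.0], size=reps, p=[0.6, 0.4])   # mean 60, usually the max
sys2 = np.full(reps, 70.0)                                  # mean 70, never above 100

p1 = np.mean(sys1 > sys2)                       # prob. system 1 yields the larger value
print("P(system 1 is best) approx.", p1)        # approx. 0.6
print("means:", sys1.mean(), sys2.mean())       # 60 < 70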

5.
This paper presents results on the size and power of first-generation panel unit root and stationarity tests, obtained from a large-scale simulation study. The tests developed in the following papers are included: Levin et al. (2002), Harris and Tzavalis (1999), Breitung (2000), Im et al. (1997, 2003), Maddala and Wu (1999), Hadri (2000), and Hadri and Larsson (2005). Our simulation set-up is designed to address, inter alia, the following issues. First, we assess performance as a function of the time and cross-section dimensions. Second, we analyze the impact on performance of serial correlation introduced by positive MA roots, which is known to have a detrimental impact on time series unit root tests. Third, we investigate the power of the panel unit root tests (and the size of the stationarity tests) for a variety of first-order autoregressive coefficients. Fourth, we consider both of the usual specifications of deterministic variables in the unit root literature.
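For concreteness, here is a minimal sketch of one first-generation procedure covered by such studies, the Fisher-type combination test of Maddala and Wu (1999), applied to panel data generated with an MA(1) error component; the AR and MA coefficients are illustrative assumptions, and the test regressions include a constant only.

# Minimal sketch of the Maddala-Wu Fisher-type panel unit root test: run an ADF
# test per unit and combine the p-values as -2*sum(log p) ~ chi2(2N) under the null.
import numpy as np
from scipy.stats import chi2
from statsmodels.tsa.stattools import adfuller

def simulate_unit(T, ar=1.0, ma=0.5, rng=None):
    e = rng.standard_normal(T + 1)
    u = e[1:] + ma * e[:-1]                 # MA(1) innovations
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = ar * y[t - 1] + u[t]
    return y

rng = np.random.default_rng(0)
N, T = 20, 200
pvals = [adfuller(simulate_unit(T, ar=1.0, ma=0.5, rng=rng), regression="c")[1]
         for _ in range(N)]
fisher_stat = -2.0 * np.sum(np.log(pvals))
print("Fisher statistic:", fisher_stat, "p-value:", chi2.sf(fisher_stat, df=2 * N))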

6.
In many medical studies there are covariates that change their values over time, and their analysis is most often based on the Cox regression model. However, many of these time-dependent covariates can be expressed as an intermediate event, which can be modeled using a multi-state model. Using the relationship between time-dependent (discrete) covariates and multi-state models, we compare (via simulation studies) the Cox model with time-dependent covariates with the most frequently used multi-state regression models. This article also details the procedures for generating survival data arising from all approaches, including the Cox model with time-dependent covariates.
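A minimal sketch of one such generating procedure (a simplified version, not the article's exact algorithm) uses exponential hazards and the memoryless property: the hazard is h0 before the intermediate event and h0*exp(b1) afterwards, so a subject who survives to the intermediate event simply restarts the clock under the new hazard.

# Minimal sketch: generate survival times with a binary time-dependent covariate
# that switches from 0 to 1 at an intermediate event (exponential hazards).
import numpy as np

def simulate_subject(h0=0.05, lam_switch=0.10, b1=0.7, rng=None):
    t_switch = rng.exponential(1.0 / lam_switch)     # time of the intermediate event
    t_death = rng.exponential(1.0 / h0)              # death time under the pre-switch hazard
    if t_death <= t_switch:
        return t_death, t_switch, False              # died before the covariate changed
    # survived to the switch: restart the clock under the post-switch hazard
    t_death = t_switch + rng.exponential(1.0 / (h0 * np.exp(b1)))
    return t_death, t_switch, True

rng = np.random.default_rng(0)
for t, s, switched in (simulate_subject(rng=rng) for _ in range(5)):
    print(f"death={t:7.2f}  intermediate event={s:7.2f}  covariate switched={switched}")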

7.
Prognostic studies are essential to understand the role of particular prognostic factors and, thus, to improve prognosis. In most studies, the disease progression trajectories of individual patients may end with one of several mutually exclusive endpoints or may involve a sequence of different events.

One challenge in such studies concerns separating the effects of putative prognostic factors on these different endpoints and testing the differences between these effects.

In this article, we systematically evaluate and compare, through simulations, the performance of three alternative multivariable regression approaches in analyzing competing risks and multiple-event longitudinal data. The three approaches are: (1) fitting separate event-specific Cox proportional hazards models; (2) the extension of the Cox model to competing risks proposed by Lunn and McNeil; and (3) a Markov multi-state model.

The simulation design is based on a prognostic study of cancer progression, and several simulated scenarios help investigate different methodological issues relevant to the modeling of multiple-event processes of disease progression. The results highlight some practically important issues. Specifically, the decreased precision of the observed timing of intermediary (non-fatal) events has a strong negative impact on the accuracy of regression coefficients estimated with either the Cox or the Lunn-McNeil model, while the Markov model appears to be quite robust under the same circumstances. Furthermore, the tests based on both the Markov and Lunn-McNeil models had similar power for detecting a difference between the effects of the same covariate on the hazards of two mutually exclusive events. The Markov approach also yields an accurate Type I error rate and good empirical power for testing the hypothesis that the effect of a prognostic factor changes after an intermediary event, which cannot be directly tested with the Lunn-McNeil method. Bootstrap-based standard errors improve the coverage rates for Markov model estimates. Overall, the results of our simulations validate the Markov multi-state model for a wide range of data structures encountered in prognostic studies of disease progression, and may guide end users regarding the choice of model(s) most appropriate for their specific application.

8.
One of the fundamental issues in analyzing microarray data is to determine which genes are expressed and which ones are not for a given group of subjects. In datasets where many genes are expressed and many are not expressed (i.e., underexpressed), a bimodal distribution for the gene expression levels often results, where one mode of the distribution represents the expressed genes and the other mode represents the underexpressed genes. To model this bimodality, we propose a new class of mixture models that utilize a random threshold value for accommodating bimodality in the gene expression distribution. Theoretical properties of the proposed model are carefully examined. We use this new model to examine the problem of differential gene expression between two groups of subjects, develop prior distributions, and derive a new criterion for determining which genes are differentially expressed between the two groups. Prior elicitation is carried out using empirical Bayes methodology in order to estimate the threshold value as well as elicit the hyperparameters for the two component mixture model. The new gene selection criterion is demonstrated via several simulations to have excellent false positive rate and false negative rate properties. A gastric cancer dataset is used to motivate and illustrate the proposed methodology.
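As a rough, non-Bayesian analogue of this bimodality idea (not the paper's random-threshold mixture model), one can fit a plain two-component Gaussian mixture to log-expression values and label each gene by the component with the higher posterior probability; the simulated data and scikit-learn usage below are illustrative assumptions.

# Rough analogue only: two-component Gaussian mixture on simulated log-expression,
# with genes labeled by the component having the larger mean.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
underexpressed = rng.normal(loc=2.0, scale=0.5, size=600)
expressed = rng.normal(loc=6.0, scale=1.0, size=400)
log_expr = np.concatenate([underexpressed, expressed]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_expr)
labels = gm.predict(log_expr)
expressed_comp = np.argmax(gm.means_.ravel())     # component with the larger mean
print("estimated means:", gm.means_.ravel())
print("genes called expressed:", np.sum(labels == expressed_comp))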

9.
The Bayesian information criterion (BIC) is widely used for variable selection. We focus on the regression setting for which variations of the BIC have been proposed. A version that includes the Fisher information matrix of the predictor variables performed best in one published study. In this article, we extend the evaluation, introduce a performance measure involving how closely posterior probabilities are approximated, and conclude that the version that includes the Fisher information often favors regression models having more predictors, depending on the scale and correlation structure of the predictor matrix. In the image analysis application that we describe, we therefore prefer the standard BIC approximation because of its relative simplicity and competitive performance at approximating the true posterior probabilities.
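For reference, the standard BIC approximation mentioned here has the familiar Gaussian-regression form BIC = n*log(RSS/n) + k*log(n) (up to constants); the sketch below applies it by exhaustive search over a small set of candidate predictors and does not implement the Fisher-information variant.

# Standard-BIC subset selection in Gaussian regression (constant terms dropped).
import itertools
import numpy as np

def bic(y, X_subset):
    n = len(y)
    X1 = np.column_stack([np.ones(n), X_subset])       # include an intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return n * np.log(rss / n) + X1.shape[1] * np.log(n)

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.standard_normal(n)   # only x0 and x2 matter

best = min(((bic(y, X[:, list(s)]), s)
            for k in range(1, p + 1)
            for s in itertools.combinations(range(p), k)),
           key=lambda t: t[0])
print("selected subset:", best[1], "BIC:", round(best[0], 2))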

10.
We compare three moment selection approaches, followed by post-selection estimation strategies. The first is the adaptive least absolute shrinkage and selection operator (ALASSO) of Zou (2006), recently extended by Liao (2013) to possibly invalid moments in GMM; in this method, we select the valid instruments with ALASSO. The second method is based on the J test, as in Andrews and Lu (2001). The third uses a continuous updating objective (CUE) function. This last approach is based on Hong et al. (2003), who propose a penalized generalized empirical likelihood-based function to pick out valid moments; they use empirical likelihood and exponential tilting in their simulations. However, the J-test-based approach of Andrews and Lu (2001) generally provides better moment selection results than empirical likelihood and exponential tilting, as can be seen in Hong et al. (2003). In this article, we examine penalized CUE as a third way of selecting valid moments.

Following the determination of valid moments, we run unpenalized generalized method of moments (GMM), CUE, and the model averaging technique of Okui (2011) to see which has the best post-selection estimator performance for the structural parameters. The simulations are aimed at the following questions: Which moment selection criterion better selects the valid moments and eliminates the invalid ones? Given the chosen instruments in the first stage, which strategy delivers the best finite sample performance?

We find that ALASSO in the model selection stage, coupled with either unpenalized GMM or the moment averaging of Okui, generally delivers the smallest root mean square error (RMSE) for the second-stage coefficient estimators.
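For readers unfamiliar with ALASSO, the sketch below shows the Zou (2006) estimator in a generic linear regression setting (not the GMM moment-selection procedure studied here), using the standard trick of rescaling columns by OLS-based weights before running an ordinary lasso; the tuning values are illustrative assumptions.

# Generic adaptive-lasso sketch (Zou, 2006): the weighted penalty sum_j w_j*|b_j|
# with w_j = 1/|b_ols_j| is obtained by rescaling the columns of X before running
# an ordinary lasso, then undoing the scaling.
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    ols = np.linalg.lstsq(X, y, rcond=None)[0]          # initial consistent estimates
    w = 1.0 / (np.abs(ols) ** gamma + 1e-8)             # adaptive weights
    Xw = X / w                                          # column j scaled by 1/w_j
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(Xw, y)
    return fit.coef_ / w                                # map back to the original scale

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0])
y = X @ beta + rng.standard_normal(n)
print(np.round(adaptive_lasso(X, y), 2))                # zeros mark deselected terms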

11.
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving single nucleotide polymorphisms (SNPs), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
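The phenomenon is easy to reproduce with tools at hand: SCAD and the adaptive lasso are not available in scikit-learn, so the sketch below uses the plain lasso (a non-oracle method to which, as noted above, similar remarks apply) and repeats 10-fold cross-validation with different random splits, showing how the number of selected variables changes from run to run.

# Repeated 10-fold CV with LassoCV on sparse, weak signals: the count of selected
# variables varies noticeably across runs.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 0.4                                   # sparse, weak signals
y = X @ beta + rng.standard_normal(n)

counts = []
for seed in range(20):                           # 20 independent runs of 10-fold CV
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=cv).fit(X, y)
    counts.append(int(np.sum(fit.coef_ != 0)))
print("selected-variable counts across runs:", counts)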

12.
This article presents the results of a simulation study of variable selection in a multiple regression context that evaluates the frequency of selecting noise variables and the bias of the adjusted R² of the selected variables when some of the candidate variables are authentic. It is demonstrated that for most samples a large percentage of the selected variables is noise, particularly when the number of candidate variables is large relative to the number of observations. The adjusted R² of the selected variables is highly inflated.
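A minimal sketch of the inflation effect (our own toy setup, not the article's design): the response is pure noise, yet selecting the candidate predictors most correlated with it and refitting produces a clearly positive adjusted R².

# The response is pure noise; "selecting" the best-looking predictors still yields
# a large adjusted R^2.
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 50, 100, 5                      # many candidates relative to n
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)                # no true relationship at all

corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
keep = np.argsort(corr)[-q:]              # keep the q most correlated noise variables

X1 = np.column_stack([np.ones(n), X[:, keep]])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
rss = np.sum((y - X1 @ beta) ** 2)
tss = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - rss / tss
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - q - 1)
print("adjusted R^2 on selected noise variables:", round(adj_r2, 3))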

13.
A Bayesian model consists of two elements: a sampling model and a prior density. The problem of selecting a prior density is nothing but the problem of selecting a Bayesian model where the sampling model is fixed. A predictive approach is used through a decision problem where the loss function is the squared L² distance between the sampling density and the posterior predictive density, because the aim of the method is to choose the prior that provides a posterior predictive density as good as possible. An algorithm is developed for solving the problem; this algorithm is based on Lavine's linearization technique.

14.
The present study investigates the performance of five discrimination methods for data consisting of a mixture of continuous and binary variables. The methods are Fisher's linear discrimination, logistic discrimination, quadratic discrimination, a kernel model, and an independence model. Six-dimensional data, consisting of three binary and three continuous variables, are simulated according to a location model. The results show an almost identical performance for Fisher's linear discrimination and logistic discrimination. Only in situations with independently distributed variables does the independence model have a reasonable discriminatory ability for the dimensionality considered. If the log likelihood ratio is non-linear with respect to its continuous and binary parts, the quadratic discrimination method is substantially better than linear and logistic discrimination, followed by the kernel method. A very good performance is obtained when, in every situation, the better of linear and quadratic discrimination is used.
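A small sketch of this kind of comparison (with a simplified data-generating process rather than the paper's location model, and omitting the kernel and independence models) can be put together with scikit-learn's linear discriminant, logistic regression, and quadratic discriminant classifiers.

# Three binary and three continuous variables with class-dependent parameters;
# compare Fisher's linear discriminant, logistic discrimination, and QDA.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression

def make_data(n, rng):
    y = rng.integers(0, 2, size=n)
    pb = np.where(y == 1, 0.7, 0.3)[:, None] * np.ones(3)    # class-dependent Bernoulli probs
    xb = rng.binomial(1, pb)                                  # three binary variables
    xc = rng.normal(loc=y[:, None] * np.ones(3), scale=1.0)   # three continuous variables
    return np.column_stack([xb, xc]), y

rng = np.random.default_rng(0)
Xtr, ytr = make_data(500, rng)
Xte, yte = make_data(2000, rng)

for name, clf in [("linear (Fisher)", LinearDiscriminantAnalysis()),
                  ("logistic", LogisticRegression(max_iter=1000)),
                  ("quadratic", QuadraticDiscriminantAnalysis())]:
    err = 1.0 - clf.fit(Xtr, ytr).score(Xte, yte)
    print(f"{name:16s} error rate: {err:.3f}")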

15.
This article evaluates the performance of two estimators, namely the maximum likelihood estimator (MLE) and Whittle's estimator (WE), through a simulation study of the generalised autoregressive (GAR) model.

As expected, it is found that for the parameters α and σ², the MLE and WE perform better than the method of moments (MOM) estimator. For the parameter δ, MOM sometimes appears to perform slightly better than the MLE and WE, possibly due to truncation approximations associated with the hypergeometric functions used for calculating the autocorrelation function. However, the MLE and WE can be used in practice without loss of efficiency.

16.
This study investigates the performance of parametric and nonparametric tests for analyzing repeated measures designs. Both multivariate normal and exponential distributions were simulated for varying values of the correlation and for ten or twenty subjects within each cell. For multivariate normal distributions, the type I error rates of the nonparametric tests were lower than the usual 0.05 level, whereas the parametric tests without the Greenhouse-Geisser or Huynh-Feldt adjustment produced slightly higher type I error rates. For multivariate exponential distributions, the type I error rates of the nonparametric tests were more stable than those of the parametric, Greenhouse-Geisser, or Huynh-Feldt adjusted tests. For ten subjects within each cell, the parametric tests were more powerful than the nonparametric tests. For twenty subjects per cell, the power of the nonparametric and parametric tests was comparable.
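A minimal sketch of the normal-data part of such a type I error study (our own reduced design): an unadjusted repeated-measures F test and the Friedman test are applied to multivariate normal data with exchangeable correlation and equal condition means, so every rejection is a type I error.

# Type I error of an unadjusted repeated-measures F test vs. the Friedman test.
import numpy as np
from scipy.stats import f as f_dist, friedmanchisquare

def rm_anova_p(Y):                       # Y: subjects x conditions
    n, k = Y.shape
    gm = Y.mean()
    ss_cond = n * np.sum((Y.mean(axis=0) - gm) ** 2)
    ss_subj = k * np.sum((Y.mean(axis=1) - gm) ** 2)
    ss_err = np.sum((Y - gm) ** 2) - ss_cond - ss_subj
    F = (ss_cond / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))
    return f_dist.sf(F, k - 1, (n - 1) * (k - 1))

rng = np.random.default_rng(0)
n, k, rho, reps = 10, 4, 0.5, 2000
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)   # exchangeable correlation

rej_f = rej_fr = 0
for _ in range(reps):
    Y = rng.multivariate_normal(np.zeros(k), cov, size=n)
    rej_f += rm_anova_p(Y) < 0.05
    rej_fr += friedmanchisquare(*Y.T)[1] < 0.05
print("type I error  F-test:", rej_f / reps, "  Friedman:", rej_fr / reps)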

17.
Imputation methods for missing data on a time-dependent variable within time-dependent Cox models are investigated in a simulation study. Quality of life (QoL) assessments were removed from the complete simulated datasets, which have a positive relationship between QoL and disease-free survival (DFS) and between delayed chemotherapy and DFS, by missing at random (MAR) and missing not at random (MNAR) mechanisms. Standard imputation methods were applied before analysis. Method performance was influenced by the missing data mechanism, with one exception for simple imputation. The greatest bias occurred under MNAR and large effect sizes. It is important to carefully investigate the missing data mechanism.

18.
There is a widespread perception that standard unit-root tests have poor discriminatory power when they are applied to time series with nonlinear dynamics. Via Monte Carlo simulations, this study re-examines the finite sample properties of selected univariate unit-root and stationarity tests under a broad class of nonlinear dynamic models. Our simulation experiments produce two interesting findings. First, the performance of the tests is driven by the degree of underlying persistence rather than by the nonlinear dynamics per se. The tests under study exhibit reasonable performance for nonlinear models with mild persistence, while the accuracy of inference deteriorates substantially when the models are highly persistent, regardless of linearity. Second, when deciding whether to test for linearity or stationarity first, our results suggest conducting the linearity test first to enhance the reliability of test inference.
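The flavor of the first finding can be illustrated with a stationary but nonlinear threshold-AR series (an assumed DGP, not one of the article's models), examined with the ADF unit-root test and the KPSS stationarity test from statsmodels under mild and strong persistence.

# ADF and KPSS applied to a stationary threshold-AR series with mild vs. strong persistence.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def threshold_ar(T, phi_low, phi_high, rng):
    y = np.zeros(T)
    for t in range(1, T):
        phi = phi_low if abs(y[t - 1]) > 1.0 else phi_high   # regime depends on |y|
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

rng = np.random.default_rng(0)
for label, phi_high in [("mild persistence", 0.5), ("strong persistence", 0.97)]:
    y = threshold_ar(500, 0.3, phi_high, rng)
    adf_p = adfuller(y, regression="c")[1]                # H0: unit root
    kpss_p = kpss(y, regression="c", nlags="auto")[1]     # H0: stationarity
    print(f"{label:18s}  ADF p = {adf_p:.3f}   KPSS p = {kpss_p:.3f}")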

19.
The associations in mortality of adult adoptees and their biological or adoptive parents have been studied in order to separate genetic and environmental influences. The 1003 Danish adoptees born 1924–26 have previously been analysed in a Cox regression model, using dichotomised versions of the parents' lifetimes as covariates. This model will be referred to as the conditional Cox model, as it analyses the lifetimes of adoptees conditional on parental lifetimes. Shared frailty models may be more satisfactory, as they use the entire observed lifetime of the parents. In a simulation study, sample size, distribution of lifetimes, and truncation and censoring patterns were chosen to illustrate aspects of the adoption dataset, and data were generated from the conditional Cox model or a shared frailty model with gamma distributed frailties. First, efficiency was compared in the conditional Cox model and a shared frailty model, based on the conditional approach. For data with type 1 censoring the models showed no differences, whereas for data with random or no censoring, the models had different power, in favour of the one from which the data were generated. Secondly, estimation in the shared frailty model by a conditional approach or a two-stage copula approach was compared. Both approaches worked well, with no sign of dependence upon the truncation pattern, but some sign of bias depending on the censoring. For frailty parameters close to zero, we found bias when the estimation procedure used did not allow negative estimates. Based on this evaluation, we prefer to use frailty models allowing for negative frailty parameter estimates. The conclusions from earlier analyses of the adoption study were confirmed, though without greater precision than using the conditional Cox model. Analyses of associations between parental lifetimes are also presented.

20.
We examine alternative generalized method of moments procedures for estimation of a stochastic autoregressive volatility model by Monte Carlo methods. We document the existence of a tradeoff between the number of moments, or information, included in estimation and the quality, or precision, of the objective function used for estimation. Furthermore, an approximation to the optimal weighting matrix is used to explore the impact of the weighting matrix for estimation, specification testing, and inference procedures. The results provide guidelines that help achieve desirable small-sample properties in settings characterized by strong conditional heteroscedasticity and correlation among the moments.
