Similar Articles
20 similar articles found (search time: 46 ms)
1.
This article examines the spurious regression phenomenon between long memory series when the generating mechanism of each series is assumed to follow a stationary/nonstationary process with mis-specified breaks. Under least-squares regression, the t-ratio diverges and spurious regression is present. The intuition is that long memory series with change points increase persistence in the level of the regression errors and thereby induce the spurious relationship. Simulation results indicate that the extent of spurious regression depends heavily on the memory index, the sample size, and the location of the break. As a remedy, we employ a four-stage procedure motivated by Maynard, Smallwood, and Wohar (2013, Econometric Reviews, 32, 318–360) to alleviate the size distortions. Finally, an empirical illustration using stock price data from the Shanghai Stock Exchange is reported.
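A minimal Monte Carlo sketch of the divergent t-ratio, using independent random walks as a simpler stand-in for the long-memory-with-breaks mechanism studied in the article (sample size, replication count, and nominal level are illustrative):

```python
import numpy as np

def spurious_t_ratio(n, rng):
    """Regress one random walk on an independent random walk; return the OLS t-ratio."""
    y = np.cumsum(rng.standard_normal(n))   # persistent series, unrelated to x
    x = np.cumsum(rng.standard_normal(n))
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(0)
t_stats = np.array([spurious_t_ratio(200, rng) for _ in range(500)])
reject = np.mean(np.abs(t_stats) > 1.96)   # a nominal 5% test rejects far too often
print(reject)
```

Even though the two series are independent, the persistence in the regression errors makes the rejection frequency of the nominal 5% t-test far exceed 0.05, which is the size distortion the four-stage procedure is meant to alleviate.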

2.
The analysis of categorical response data through the multinomial model is very common in statistical, econometric, and biometric applications. One of the main problems, however, is precise estimation of the model parameters when the number of observations is very low. We propose a new Bayesian estimation approach in which the prior distribution is constructed through a transformation of the multivariate beta of Olkin and Liu (2003, Statistics &amp; Probability Letters, 62, 407–412). Moreover, applying the zero-variance principle allows us to estimate moments in Monte Carlo simulations with a dramatic reduction in their variances. We show the advantages of our approach through applications to some toy examples, where we obtain efficient parameter estimates.

3.
The combined model introduced by Molenberghs et al. (2007, Lifetime Data Analysis, 13, 513–531; 2010, Statistical Science, 25, 325–347) has been shown to be an appealing tool for modeling not only correlated or overdispersed data but also data that exhibit both features. Unlike earlier techniques, which use a single random-effects vector to capture correlation and/or overdispersion, the combined model allows the correlation and overdispersion features to be modeled by two sets of random effects. In the context of count data, for example, the combined model naturally reduces to the Poisson-normal model, an instance of the generalized linear mixed model, in the absence of overdispersion, and to the negative-binomial model in the absence of correlation. Here, a Poisson model is specified as the parent distribution of the data, conditional on a normally distributed random effect at the subject or cluster level and/or a gamma random effect at the observation level. Importantly, the combined model and the surrounding derivations have relevance well beyond data analysis: the model can also be used to simulate correlated data. A researcher who wishes to compare marginal models via Monte Carlo simulation needs to generate suitable correlated count data. One option is to induce correlation via random effects, but calculating quantities such as the bias is then not straightforward.
Since overdispersion and correlation are simultaneous features of longitudinal count data, the combined model presents an appealing framework for generating data to evaluate statistical properties, through pre-specification of the desired marginal mean (possibly in terms of the covariates and marginal parameters) and a marginal variance-covariance structure. By matching the marginal mean and variance of the combined model to the pre-specified marginal mean and variance, the implied hierarchical parameters and the variance-covariance matrices of the normal and gamma random effects are derived, from which correlated Poisson data are generated. We explore data generation when a random-intercept or a random-intercept-and-slope model is specified to induce correlation. The data generator allows for any dimension of the random effects, although increasing the random-effects dimension increases the sensitivity of the derived random-effects variance-covariance matrix to deviations from positive-definiteness. A simulation study is carried out for the random-intercept model and for the random-intercept-and-slope model, with or without the normal and gamma random effects. We also pay specific attention to the case of serial correlation.
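A sketch of the generating step in the random-intercept case, with illustrative parameter values: a normal random intercept shared within each subject induces correlation, and an observation-level gamma multiplier induces overdispersion, the two sets of random effects described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_obs = 500, 5    # subjects (clusters) and repeated measures per subject
beta0 = 1.0               # illustrative intercept on the log scale
sigma_b = 0.5             # SD of the normal random intercept (subject level)
alpha = 4.0               # gamma shape; scale 1/alpha gives a mean-one multiplier

b = rng.normal(0.0, sigma_b, size=(n_subj, 1))               # correlation source
theta = rng.gamma(alpha, 1.0 / alpha, size=(n_subj, n_obs))  # overdispersion source
mu = np.exp(beta0 + b) * theta    # conditional Poisson mean, combined-model style
y = rng.poisson(mu)

print(y.mean(), y.var())   # marginal variance exceeds marginal mean
```

The shared intercept makes counts within a row positively correlated, while the gamma multiplier pushes the marginal variance above the marginal mean; the article works the other way around, deriving the hierarchical parameters from a pre-specified marginal mean and variance-covariance structure.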

4.
Marshall and Olkin (1997, Biometrika, 84(3), 641–652) introduced a new method of adding a parameter to expand a family of distributions. Using this concept, this article introduces the Marshall–Olkin extended Pareto distribution and studies recurrence relations for single and product moments of generalized order statistics. The results are also deduced for record values and order statistics.
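The Marshall–Olkin construction replaces a baseline survival function S(x) with tilt·S(x) / (1 − (1 − tilt)·S(x)). A small sampling sketch for the extended Pareto case by direct inversion of that survival function (the parameter values are illustrative):

```python
import numpy as np

def mo_pareto_sample(n, tilt, shape, xm, rng):
    """Draw from the Marshall-Olkin extended Pareto by inverting
    G_bar(x) = tilt * S(x) / (1 - (1 - tilt) * S(x)),
    where S(x) = (xm / x)**shape is the Pareto survival function."""
    u = rng.uniform(size=n)               # u plays the role of G_bar(x)
    s = u / (tilt + (1.0 - tilt) * u)     # solve G_bar(x) = u for S(x)
    return xm * s ** (-1.0 / shape)       # invert the Pareto survival function

rng = np.random.default_rng(2)
x = mo_pareto_sample(10_000, tilt=2.0, shape=3.0, xm=1.0, rng=rng)
print(x.min())   # support starts at xm
```

At x = 2 the model survival is 2·(1/8) / (1 + 1/8) ≈ 0.222, so roughly 22% of the draws should exceed 2, which is easy to check empirically.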

5.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or drawing on different experiences. The model selection problem is therefore important: inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's test (Vuong, 1989, Econometrica, 57, 307–333) and Shi's non-degenerate test (Shi, 2015, Quantitative Economics, 6, 85–121), suffer from difficulties in variance estimation and from departures of the likelihood ratios from normality. To circumvent these difficulties, we propose an empirical likelihood ratio (ELR) test for model selection. Following Shi (2015), a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis illustrate the performance of the proposed ELR test.

6.
This article considers estimation of panel vector autoregressive models of order 1 (PVAR(1)), with a focus on fixed-T consistent estimation methods in first differences (FD) with additional strictly exogenous regressors. Additional results are provided for the panel FD ordinary least squares (OLS) estimator and the FDLS-type estimator of Han and Phillips (2010, Econometric Theory, 26, 119–151). Furthermore, we simplify the analysis of Binder et al. (2005, Econometric Theory, 21, 795–837) by providing additional analytical results, and we extend the original model to allow for possible cross-sectional heteroscedasticity and the presence of strictly exogenous regressors. We show that in the three-wave panel the log-likelihood function of the unrestricted transformed maximum likelihood (TML) estimator may violate the global identification assumption. The finite-sample performance of the analyzed methods is investigated in a Monte Carlo study.

7.
Under treatment effect heterogeneity, an instrument identifies the instrument-specific local average treatment effect (LATE). With multiple instruments, the two-stage least squares (2SLS) estimand is a weighted average of different LATEs. What is often overlooked in the literature is that the postulated moment condition evaluated at the 2SLS estimand does not hold unless those LATEs are the same. In that case, the conventional heteroscedasticity-robust variance estimator is inconsistent, and 2SLS standard errors based on it are incorrect. I derive the correct asymptotic distribution and propose a consistent asymptotic variance estimator using the results of Hall and Inoue (2003, Journal of Econometrics, 114, 361–394) on misspecified moment condition models. This estimator yields correct standard errors regardless of whether there is more than one LATE.

8.
This paper discusses the estimation of average treatment effects in observational causal inference. Employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, Journal of the American Statistical Association, 89, 846–866; 1995, Journal of the American Statistical Association, 90, 106–121) introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952, Journal of the American Statistical Association, 47, 663–685); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments that employs three estimating functions, which generates estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators.
In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study comparing the finite-sample performance of the various methods with respect to bias, efficiency, and robustness to model misspecification.
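A numerical sketch of the AIPW estimator that the proposed methods build on. To isolate the formula itself, the nuisance functions (propensity score and the two outcome regressions) are taken as known and correctly specified rather than estimated; the data-generating values are hypothetical, with a true average treatment effect of 2.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-0.5 * x))      # true propensity score e(x)
t = rng.binomial(1, e)                  # treatment indicator
y = 2.0 * t + x + rng.normal(size=n)    # outcome; true ATE = 2

# Correctly specified working outcome regressions for this design:
m1 = 2.0 + x    # E[y | t = 1, x]
m0 = x          # E[y | t = 0, x]

# AIPW: IPW terms augmented by regression predictions.
mu1 = np.mean(t * y / e - (t - e) / e * m1)
mu0 = np.mean((1 - t) * y / (1 - e) + (t - e) / (1 - e) * m0)
aipw = mu1 - mu0
print(aipw)   # close to the true ATE of 2
```

In practice the working models would be fitted to data; double robustness means the estimator remains consistent if either the propensity model or the outcome regressions (but not necessarily both) are correct.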

9.
For studying and modeling the time to failure of a system or component, many reliability practitioners use the hazard rate and its monotone behavior. Nowadays, however, two problems arise: modern components have high reliability, and their failure-time distributions often have non-monotone hazard rates, as with the truncated normal, Burr XII, and inverse Gaussian distributions. Modeling such data with hazard-rate models therefore seems too stringent. Zimmer et al. (1998, Journal of Quality Technology, 30(4), 376–385) and Wang et al. (2003, Communications in Statistics - Theory and Methods, 32(11), 2227–2244; 2008) introduced and studied a new time-to-failure model for continuous distributions based on the log-odds rate (LOR), which is comparable to the model based on the hazard rate.

Many components and devices in industry have discrete failure-time distributions with non-monotone hazard rates, so in this article we introduce the discrete log-odds rate, which differs from its continuous analog. An alternative discrete reversed hazard rate, which we call the second reversed rate of failure in discrete time, is also defined. It is shown that failure time distributions can be characterized by the discrete LOR. Moreover, we show that the discrete logistic and log-logistic distributions have constant discrete LOR with respect to t and ln t, respectively. Furthermore, properties of some distributions with monotone discrete LOR, such as the discrete Burr XII, discrete Weibull, and discrete truncated normal, are obtained.

10.
We investigate bandwidth estimation in a functional nonparametric regression model with function-valued, continuous real-valued, and discrete-valued regressors under the framework of unknown error density. Extending the recent work of Shang (2013, Computational Statistics &amp; Data Analysis, 67, 185–198), we approximate the unknown error density by a kernel density estimator of the residuals, where the regression function is estimated by the functional Nadaraya–Watson estimator, which admits mixed types of regressors. We derive a likelihood and posterior density for the bandwidth parameters under the kernel-form error density, and put forward a Bayesian approach that estimates all bandwidths simultaneously. Simulation studies demonstrate the estimation accuracy of the regression function and error density for the proposed Bayesian approach. Illustrated by a spectroscopy data set from food quality control, we apply the proposed Bayesian approach to select the optimal bandwidths in a functional nonparametric regression model with mixed types of regressors.
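A simplified scalar-regressor sketch of the two ingredients: the Nadaraya–Watson estimator and a kernel-form error density built from its residuals. The bandwidths are fixed by hand here, whereas the article estimates them jointly in a Bayesian way; the test function and noise level are made up.

```python
import numpy as np

def nw_fit(x, y, x0, h):
    """Nadaraya-Watson regression estimate at points x0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def kernel_error_density(e0, resid, b):
    """Kernel-form error density at e0, built from regression residuals."""
    return np.mean(np.exp(-0.5 * ((e0 - resid) / b) ** 2)) / (b * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.uniform(-2.0, 2.0, 400)
y = np.sin(x) + rng.normal(0.0, 0.2, 400)

grid = np.linspace(-1.5, 1.5, 50)
fit = nw_fit(x, y, grid, h=0.2)            # regression estimate on a grid
resid = y - nw_fit(x, y, x, h=0.2)         # residuals feed the error density
print(np.max(np.abs(fit - np.sin(grid))), kernel_error_density(0.0, resid, b=0.1))
```

In the article the error density built this way enters the likelihood, so the regression bandwidth h and the density bandwidth b are estimated simultaneously from their joint posterior.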

11.
This article is concerned with sphericity testing for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, Econometrics Journal, 14, 25–47; 2012, Journal of Econometrics, 170, 164–177), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.

12.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013, Econometric Reviews, 32, 650–685) by including a spatial lag term. The estimation method utilizes the generalized moments method suggested by Kapoor et al. (2007, Journal of Econometrics, 127(1), 97–130) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011, Econometrics Journal, 14, 48–76) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.

13.
This article introduces a new model, the buffered autoregressive model with generalized autoregressive conditional heteroscedasticity (BAR-GARCH). The proposed model, an extension of the BAR model of Li et al. (2015, Biometrika, 102, 717–723), can capture the buffering phenomenon of time series in both the conditional mean and the conditional variance, and thus provides a new way to study the nonlinearity of time series. An application to several exchange rates highlights the importance of the BAR-GARCH model relative to the existing AR-GARCH and threshold AR-GARCH models.

14.
This article considers several estimators of the ridge parameter k for the multinomial logit model, based on the work of Khalaf and Shukur (2005, Communications in Statistics - Theory and Methods, 34, 1177–1182), Alkhamisi et al. (2006, Communications in Statistics - Theory and Methods, 35, 2005–2020), and Muniz et al. (2012, SORT, 36, 115–138). The mean square error (MSE) is the performance criterion, and a simulation study is conducted to compare the estimators. Based on the simulation study, we find that increasing the correlation between the independent variables and increasing the number of regressors both have a negative effect on the MSE. However, as the sample size increases, the MSE decreases even when the correlation between the independent variables is large. Based on the minimum MSE criterion, some useful estimators of the ridge parameter k are recommended for practitioners.
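For intuition, here is a linear-regression analog (not the multinomial logit versions compared in the article) of two classical ridge-parameter choices in the Hoerl-Kennard tradition that this line of work builds on; the design, coefficients, and collinearity structure are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 4
common = rng.normal(size=(n, 1))
X = 0.1 * rng.normal(size=(n, p)) + common   # strongly collinear regressors
beta = np.array([1.0, -1.0, 0.5, 0.0])
y = X @ beta + rng.normal(size=n)

ols = np.linalg.solve(X.T @ X, X.T @ y)
sigma2 = np.sum((y - X @ ols) ** 2) / (n - p)   # residual variance estimate

k_hk = sigma2 / np.max(ols ** 2)        # Hoerl-Kennard style choice of k
k_hkb = p * sigma2 / np.sum(ols ** 2)   # Hoerl-Kennard-Baldwin style choice

ridge = np.linalg.solve(X.T @ X + k_hk * np.eye(p), X.T @ y)
print(k_hk, k_hkb)
```

Any k > 0 shrinks the coefficient vector relative to OLS; the estimators compared in the article differ precisely in how they trade this shrinkage bias against the variance inflation caused by collinearity.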

15.
In recent years, combining models has been advanced in a number of studies as an alternative to selecting a single model from a frequentist perspective. In this article, we propose a new semiparametric estimator of regression coefficients that takes the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b, Technometrics, 12(1), 69–82) but with different biasing factors. We prove that, after reparameterization such that the regressors are orthogonal, the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semiparametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretation and application of both semiparametric and ridge estimators. Furthermore, we demonstrate that estimators based on model averaging weights can have properties superior to those of the well-known feasible generalized ridge estimator in a large region of the parameter space. Two empirical examples are presented.

16.
17.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context by Molenberghs et al. (2007, Lifetime Data Analysis, 13, 513–531; 2010, Statistical Science, 25, 325–347), is placed in a Bayesian inferential framework. An important contribution takes the form of Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson (2007, Bayesian Analysis, 2, 719–734).

18.
Cossette et al. (2010, ASTIN Bulletin, 40, 123–150; 2011, Insurance: Mathematics and Economics, 48, 19–28) proposed a novel collective risk model in which the total numbers of claims follow a first-order integer-valued autoregressive (INAR(1)) process. For a risk model, it is interesting to investigate the upper bound of the ruin probability. However, because the loss increments of this model are dependent, it is difficult to derive that upper bound. In this article, we propose an approximation model with stationary independent increments. The upper bound of the ruin probability and the adjustment coefficient are derived. The approximation model is illustrated via four simulated examples. Results show that the gap between the approximation model and the dependent model can be made negligible by adjusting the parameter values.
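A small simulation of the Poisson INAR(1) claim-count process underlying this risk model (the thinning probability and innovation rate are illustrative):

```python
import numpy as np

def simulate_inar1(T, alpha, lam, rng):
    """INAR(1) claim counts: N_t = alpha * N_{t-1} (binomial thinning)
    plus an independent Poisson(lam) innovation each period."""
    n = np.empty(T, dtype=int)
    n[0] = rng.poisson(lam / (1.0 - alpha))   # start at the stationary mean
    for t in range(1, T):
        n[t] = rng.binomial(n[t - 1], alpha) + rng.poisson(lam)
    return n

rng = np.random.default_rng(6)
path = simulate_inar1(20_000, alpha=0.4, lam=3.0, rng=rng)
print(path.mean())   # stationary mean is lam / (1 - alpha) = 5
```

The binomial thinning carries claims over between periods, which is exactly the dependence in the loss increments that makes the classical ruin-probability bound hard to derive and motivates the independent-increments approximation.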

19.
In quadratic discriminant analysis, the use of SAVE (Cook and Weisberg, 1991, Journal of the American Statistical Association, 86, 328–332; Pardoe et al., 2007, Technometrics, 49, 172–183) is often recommended for dimension-reduction purposes. However, the associated directions tend to over-emphasize the differences between the groups in dispersion while ignoring those in location. This behavior often makes the plots of the corresponding canonical coordinates difficult to interpret. In this article, the properties of SAVE are investigated and related to those of the SIR and SIRII components. Applications with real data are presented, and comparisons with previous work in this area are discussed.

20.
The aim of this article is the construction of a test statistic for detecting changes in vector autoregressive (AR) models in which both the AR parameters and the variance matrix of the error term are subject to change. The approximating distribution of the proposed statistic is the Gumbel distribution. The proof rests on the approximation of weakly dependent random vectors by independent ones and on Horváth's extension of the Darling-Erdös extremal result to random vectors; see Darling and Erdös (1956, Duke Mathematical Journal, 23, 143–155) and Horváth (1993, Annals of Statistics, 21(2), 671–680). The test statistic is a modification of the likelihood ratio.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号