Similar Documents
20 similar documents found (search time: 31 ms).
1.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi, Egger, and Pfaffermayr (2013, A generalized spatial panel data model with random effects, Econometric Reviews 32:650–685) by the inclusion of a spatial lag term. The estimation method utilizes the generalized moments method suggested by Kapoor, Kelejian, and Prucha (2007, Panel data models with spatially correlated error components, Journal of Econometrics 127(1):97–130) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011, The Hausman test in a Cliff and Ord panel model, Econometrics Journal 14:48–76) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.

2.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008, Homogenous panel unit root tests under cross-sectional dependence: Finite sample modifications and the wild bootstrap, Computational Statistics and Data Analysis 53(1):137–150) is asymptotically standard Gaussian. By means of a simulation study we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a, A simple nonstationary-volatility robust panel unit root test, Economics Letters 117(2):10–13). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate over the full period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.

3.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. (2006, Novel unsupervised feature filtering of biological data, Bioinformatics 22:507–513). Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM, and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD-based weights provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the SVD approach is lower than that of the methods of Pal, De, and Basak (2000, Unsupervised feature evaluation: a neuro-fuzzy approach, IEEE Trans. Neural Netw. 11:366–376), Wang, Wang, and Wang (2004, Improving fuzzy c-means clustering based on feature-weight learning, Pattern Recognit. Lett. 25:1123–1132), and Hung, Yang, and Chen (2008, Bootstrapping approach to feature-weight selection in fuzzy c-means algorithms with an application in color image segmentation, Pattern Recognit. Lett. 29:1317–1325).
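As an illustration of the SVD-based weighting idea referenced in this abstract, the sketch below scores features by their leave-one-feature-out contribution to the SVD entropy, in the spirit of Varshavsky et al. (2006); the shift-and-normalize step that turns scores into weights is our own simplification, not the WFKCL algorithm itself.

```python
import numpy as np

def svd_entropy(X):
    """Normalized entropy of the squared singular-value spectrum of X."""
    s = np.linalg.svd(X, compute_uv=False)
    rho = s**2 / np.sum(s**2)
    rho = rho[rho > 0]
    return -np.sum(rho * np.log(rho)) / np.log(min(X.shape))

def feature_weights(X):
    """Leave-one-feature-out entropy contributions CE_i = E(X) - E(X minus i),
    shifted and normalized into non-negative weights summing to one."""
    E = svd_entropy(X)
    ce = np.array([E - svd_entropy(np.delete(X, i, axis=1))
                   for i in range(X.shape[1])])
    w = ce - ce.min()
    return w / w.sum() if w.sum() > 0 else np.full(X.shape[1], 1.0 / X.shape[1])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[:, 0] = np.repeat([0.0, 5.0], 25) + 0.1 * rng.normal(size=50)  # one structured feature
w = feature_weights(X)
print(w)
```

These weights could then multiply the feature-wise distances inside a weighted FCM or KCL update; how the article combines them with the fuzzy memberships is not reproduced here.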

4.
In this article, we investigate whether any of k treatments are better than a control under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the difference between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatments. One-sided lower bounds are usually provided, since they suffice for detecting an effective treatment and are sharper than the lower limits of two-sided bounds; a two-sided procedure, however, provides both upper and lower bounds on the differences. We develop a new procedure that combines the strengths of the one-sided and two-sided procedures. It has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007, Comparing several treatments with a control, J. Statist. Plann. Infer. 137:2996–3006) while also providing simultaneous two-sided bounds for the differences between treatments and the control. Our computational results show that the new procedure outperforms that of Hayter, Miwa, and Liu (2000, Combining the advantages of one-sided and two-sided procedures for comparing several treatments with a control, J. Statist. Plann. Infer. 86:81–99) when the sample sizes are balanced. We also illustrate the new procedure with an example.
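Simultaneous bounds of this kind rest on a critical constant for the maximum of k correlated treatment-minus-control contrasts. A minimal Monte Carlo sketch, assuming a balanced design and known unit variance (a deliberate simplification of the studentized procedures in the article), is:

```python
import numpy as np

def critical_constant(k, n, alpha=0.05, reps=20000, seed=1):
    """Monte Carlo critical value c with P(max_i T_i <= c) = 1 - alpha under
    equal means, balanced samples of size n, and known unit variance."""
    rng = np.random.default_rng(seed)
    xbar = rng.normal(size=(reps, k + 1)) / np.sqrt(n)   # column 0 is the control mean
    t = (xbar[:, 1:] - xbar[:, [0]]) / np.sqrt(2.0 / n)  # treatment-minus-control contrasts
    return np.quantile(t.max(axis=1), 1 - alpha)

c = critical_constant(k=3, n=10)
print(c)  # exceeds the single-comparison 5% normal quantile 1.645
```

The simultaneous lower bounds are then of the form (treatment mean - control mean) - c * se; with unknown variance one would replace the normal draws with studentized ones.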

5.
In recent years, there has been growing interest in modelling integer-valued time series. In this article, we propose a modified and generalized version of the first-order rounded integer-valued autoregressive RINAR(1) model, originally introduced by Kachour and Yao (2009, First-order rounded integer-valued autoregressive (RINAR(1)) process, Journal of Time Series Analysis 30(4):417–448). This class can be considered an alternative to the classical models based on thinning operators. Using a Markov chain method, conditions for stationarity and the existence of moments are investigated. The least squares estimator of the model parameters is considered and its consistency is established. Finally, we describe price change data using a model of the new class.
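A rough sketch of the rounding idea behind RINAR(1): the recursion below uses round(·) plus a symmetric integer noise term (the noise law here is an assumption for illustration, not Kachour and Yao's specification), together with a naive least squares fit that ignores the rounding operator.

```python
import numpy as np

def simulate_rinar1(alpha, lam, T, seed=0):
    """X_t = round(alpha * X_{t-1} + lam) + eps_t, eps_t uniform on {-3,...,3}."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T, dtype=int)
    eps = rng.integers(-3, 4, size=T)  # symmetric integer noise, mean zero (an assumption)
    for t in range(1, T):
        x[t] = round(alpha * x[t - 1] + lam) + eps[t]
    return x

def ls_fit(x):
    """Naive least squares for (alpha, lam), treating the rounding as noise."""
    y, z = x[1:], x[:-1]
    A = np.column_stack([z, np.ones_like(z)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

x = simulate_rinar1(alpha=0.6, lam=2.0, T=2000)
a_hat, l_hat = ls_fit(x)
print(a_hat, l_hat)
```

The fit recovers the parameters only roughly, since rounding attenuates the slope; the article's least squares theory handles the rounding operator properly.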

6.
The purpose of this article is to develop algorithms for computing the exact Fisher information matrix of periodic time-varying state-space models. We first present a relatively simple recursive algorithm that computes the elements of the exact information matrix without numerical differentiation, since all required derivatives are evaluated analytically. The proposed algorithm extends the procedure of Cavanaugh and Shumway (1996, On computing the expected Fisher information matrix for state-space model parameters, Statist. Probab. Lett. 26:347–355) to the periodic state-space framework. Exploiting the approach of Klein, Mélard, and Zahaf (2000, Construction of the exact Fisher information matrix of Gaussian time series models by means of matrix differential rules, Linear Alg. Applic. 321:209–232), a second algorithm is proposed that obtains the exact information matrix as a whole rather than element by element. The algorithms are first developed in a general framework and then specialized to the periodic Gaussian vector autoregressive moving-average (PVARMA) model.

7.
The Significance Analysis of Microarrays (SAM) method (Tusher, Tibshirani, and Chu, 2001, Significance analysis of microarrays applied to the ionizing radiation response, Proceedings of the National Academy of Sciences 98:5116–5121) is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure. One of the main components of the SAM procedure is the adjustment of the test statistic: a fudge factor is added to deflate test statistics that are large only because the standard error of the gene expression is small. Lin et al. (2008, Significance analysis of microarray (SAM) for comparisons of several treatments with one control, Biometrical Journal 50(5):801–823) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared with the SAM procedure without it. Motivated by their simulation results, in this article we compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and FDR control of the considered methods.
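The fudge-factor adjustment can be sketched as follows; the percentile default for s0 below is a simple placeholder, not the coefficient-of-variation criterion of Tusher et al. or any of the choices compared in the article.

```python
import numpy as np

def sam_statistics(x, y, s0=None):
    """SAM-type statistic d_i = (xbar_i - ybar_i) / (s_i + s0), one per gene."""
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled = ((nx - 1) * x.var(axis=1, ddof=1)
              + (ny - 1) * y.var(axis=1, ddof=1)) / (nx + ny - 2)
    s = np.sqrt(pooled * (1.0 / nx + 1.0 / ny))
    if s0 is None:
        s0 = np.percentile(s, 50)  # placeholder choice of the fudge factor
    return diff / (s + s0)

rng = np.random.default_rng(2)
x = rng.normal(size=(1000, 5))
y = rng.normal(size=(1000, 5))
x[:50] += 2.0  # 50 genuinely differential genes
d = sam_statistics(x, y)
```

Setting s0 = 0 recovers the ordinary t-type statistic, which is unstable for genes with tiny s_i; that instability is exactly what the fudge factor is meant to damp.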

8.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion. It extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context by Molenberghs, Verbeke, and Demétrio (2007, An extended random-effects approach to modeling repeated, overdispersed count data, Lifetime Data Anal. 13:457–511) and Molenberghs et al. (2010, A family of generalized linear models for repeated measures with normal and conjugate random effects, Statist. Sci. 25:325–347), is placed in a Bayesian inferential framework. An important contribution is Bayesian model assessment based on pivotal quantities rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson (2007, Bayesian model assessment using pivotal quantities, Bayesian Anal. 2:719–734).

9.
We extend Hansen's (2005) recentering method to a continuum of inequality constraints to construct new Kolmogorov–Smirnov tests for stochastic dominance of any pre-specified order. We show that our tests have correct asymptotic size, are consistent against fixed alternatives, and are unbiased against some N^{-1/2} local alternatives. By avoiding the least favorable configuration, our tests are less conservative and more powerful than those of Barrett and Donald (2003), and in some simulation examples we find that our tests can be more powerful than the subsampling test of Linton, Maasoumi, and Whang (2005, Consistent testing for stochastic dominance under general sampling schemes, The Review of Economic Studies 72:735–765). We apply our method to test stochastic dominance relations between Canadian income distributions in 1978 and 1986, as considered in Barrett and Donald (2003, Consistent tests for stochastic dominance, Econometrica 71:71–104), and find that some of the hypothesis testing results differ under the new method.
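A plain (unrecentered) KS-type statistic for first-order stochastic dominance illustrates the functional being tested; Hansen's (2005) recentering, which is the article's contribution, is not reproduced in this sketch.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of sample evaluated at each grid point."""
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

def ksd_statistic(x, y):
    """sqrt(n) * sup_z (F_x(z) - F_y(z)): large values are evidence against
    'X first-order stochastically dominates Y' (textbook KS-type form)."""
    grid = np.sort(np.concatenate([x, y]))
    return np.sqrt(len(x)) * np.max(ecdf(x, grid) - ecdf(y, grid))

rng = np.random.default_rng(3)
x = rng.normal(1.0, 1.0, 500)  # X shifted right, so X dominates Y
y = rng.normal(0.0, 1.0, 500)
print(ksd_statistic(x, y), ksd_statistic(y, x))
```

Critical values for such statistics are nonstandard; the least-favorable-configuration approach of Barrett and Donald (2003) bounds them conservatively, which is the slack the recentering removes.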

10.
Heckman's (1976, The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models, Annals of Economic and Social Measurement 5:475–492; 1979, Sample selection bias as a specification error, Econometrica 47(1):153–161) sample selection model has been employed in many studies involving linear and nonlinear regression. It is well known that ignoring sample selectivity may render the estimator inconsistent, owing to the correlation between the errors in the selection and main equations. In this article, we reconsider the maximum likelihood estimator for the panel sample selection model of Keane, Moffitt, and Runkle (1988, Real wages over the business cycle: Estimating the impact of heterogeneity with micro data, Journal of Political Economy 96:1232–1266). Since the panel data model contains individual effects, fixed or random, the likelihood function is more complicated than that of the classical Heckman model. As an alternative to the existing derivation of the likelihood function in the literature, we show that the conditional distribution of the main equation follows a closed skew-normal (CSN) distribution, whose linear transformations remain CSN. Although evaluating the likelihood function involves high-dimensional integration, we show that the integration can be reduced to a one-dimensional problem and evaluated by the simulated likelihood method. A Monte Carlo experiment on the finite-sample performance of the proposed estimator finds that it provides reliable and quite satisfactory results.

11.
This article derives diagnostic procedures for symmetrical nonlinear regression models, continuing the work of Cysneiros and Vanegas (2008, Residuals and their statistical properties in symmetrical nonlinear models, Statist. Probab. Lett. 78:3269–3273) and Vanegas and Cysneiros (2010, Assessment of diagnostic procedures in symmetrical nonlinear regression models, Computat. Statist. Data Anal. 54:1002–1016), who showed that parameter estimates in nonlinear models are more robust under heavy-tailed errors than under normal errors. Here we assess whether this robustness carries over to the inference process (i.e., the partial F-test). Symmetrical nonlinear regression models admit all symmetric continuous error distributions, covering both light- and heavy-tailed cases such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic, and contaminated normal. First, a statistical test is presented for evaluating the assumption that the error terms have equal variance, together with simulation studies describing the behavior of the proposed heteroscedasticity test in the presence of outliers. To assess the robustness of the inference process, we then present a simulation study of the behavior of the partial F-test in the presence of outliers. Diagnostic procedures are also derived to identify observations that are influential on the partial F-test. As an illustration, a dataset described in Venables and Ripley (2002, Modern Applied Statistics with S, 4th ed., New York: Springer) is analyzed.

12.
Classification and regression trees have been useful in medical research for constructing algorithms for disease diagnosis and prognostic prediction. Jin et al. (2004, Classification algorithms for hip fracture prediction based on recursive partitioning methods, Med. Decis. Mak. 24:386–398) developed a robust and cost-saving tree (RACT) algorithm, with an application to classifying 5-year hip fracture risk using data from the Study of Osteoporotic Fractures (SOF). Although conventional recursive partitioning algorithms are well developed, they still have limitations: binary splits may generate a large tree with many layers, while trinary splits may produce too many nodes. In this paper, we propose a classification approach combining trinary and binary splits to generate a trinary–binary tree, using a new non-inferiority test of entropy to choose between the two split types. We apply the modified method to the SOF data to construct a trinary–binary classification rule for predicting the risk of osteoporotic hip fracture. The new classification tree has good statistical utility: on the testing sample it is statistically non-inferior to the optimal binary tree and to the RACT, and it is also cost-saving. It may be useful in clinical applications: femoral neck bone mineral density, age, height loss, and weight gain since age 25 can identify subjects with elevated 5-year hip fracture risk without loss of statistical efficiency.

13.
Distribution-free tests have been proposed in the literature for comparing the hazard rates of two probability distributions when the available samples are complete. In this article, we generalize the test of Kochar (1981, A new distribution-free test for the equality of two failure rates, Biometrika 68:423–426) to the case where the available sample is Type-II censored, and examine its power properties.

14.
In cancer research, study of the hazard function provides useful insight into disease dynamics, as it describes how the (conditional) probability of death changes over time. The widely used Cox proportional hazards model employs a stepwise nonparametric estimator of the baseline hazard function and therefore has limited utility for this purpose; parametric models, or other approaches that allow direct estimation of the hazard function, are often invoked instead. Cox et al. (2007, Parametric survival analysis and taxonomy of hazard functions for the generalized gamma distribution, Stat. Med. 26:4352–4374) stimulated the use of a flexible parametric model based on the generalized gamma (GG) distribution, supported by the development of optimization software. The GG distribution allows different hazard shapes to be estimated within a single framework. We use the GG model to investigate the shape of the hazard function in early breast cancer patients. A flexible approach based on a piecewise exponential model and the nonparametric additive hazards model are also considered.
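The range of hazard shapes the GG framework accommodates is easy to inspect numerically. Below is a sketch using scipy's gengamma parameterisation (a, c), which differs from the (beta, sigma, lambda) taxonomy of Cox et al.; the hazard is simply the density over the survival function.

```python
import numpy as np
from scipy.stats import gengamma

def gg_hazard(t, a, c, scale=1.0):
    """Hazard h(t) = f(t) / S(t) of the generalized gamma distribution,
    in scipy's (a, c) parameterisation."""
    return gengamma.pdf(t, a, c, scale=scale) / gengamma.sf(t, a, c, scale=scale)

t = np.linspace(0.1, 5, 50)
h_exp = gg_hazard(t, a=1.0, c=1.0)          # a = c = 1: exponential, constant hazard
h_weibull_inc = gg_hazard(t, a=1.0, c=2.0)  # a = 1, c = 2: Weibull, increasing hazard
```

Varying (a, c) also produces decreasing, bathtub-like, and hump-shaped hazards, which is what makes the GG family attractive for the breast cancer application described above.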

15.
For studying and modeling the time to failure of a system or component, many reliability practitioners use the hazard rate and its monotonicity properties. Two difficulties arise in practice, however: modern components have high reliability, and their lifetimes often follow distributions with nonmonotone hazard rates, such as the truncated normal, Burr XII, and inverse Gaussian distributions, so that modeling such data through hazard rate models is too stringent. Zimmer, Wang, and Pathak (1998, Log-odds rate and monotone log-odds rate distributions, J. Qual. Technol. 30(4):376–385) and Wang, Hossain, and Zimmer (2003, Monotone log-odds rate distributions in reliability analysis, Commun. Statist. Theor. Meth. 32(11):2227–2244; 2008, Useful properties of the three-parameter Burr XII distribution, in M. Ahsanullah, ed., Applied Statistics Research Progress, pp. 11–20) introduced and studied a new time-to-failure model for continuous distributions based on the log-odds rate (LOR), which is comparable to the model based on the hazard rate.

Many components and devices in industry have discrete lifetime distributions with nonmonotone hazard rates, so in this article we introduce the discrete log-odds rate, which differs from its continuous analogue. An alternative discrete reversed hazard rate, which we call the second reversed rate of failure in discrete time, is also defined. It is shown that failure time distributions can be characterized by the discrete LOR. Moreover, we show that the discrete logistic and discrete log-logistic distributions have a constant discrete LOR with respect to t and ln t, respectively. Furthermore, properties of some distributions with monotone discrete LOR, such as the discrete Burr XII, discrete Weibull, and discrete truncated normal, are obtained.
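One plausible discrete analogue of the log-odds rate is the increment of log F(t)/(1 - F(t)) over the integer support (the article's exact definition may differ). Under that reading, the constancy claimed above for the discrete logistic is immediate, since the logistic CDF has log-odds linear in t:

```python
import numpy as np

def log_odds(F):
    """Log-odds of a CDF value: log(F / (1 - F))."""
    return np.log(F / (1 - F))

# Discrete logistic: the logistic CDF evaluated on the integers
# (one common discretisation, used here purely for illustration).
mu, s = 0.0, 2.0
t = np.arange(-10, 11)
F = 1.0 / (1.0 + np.exp(-(t - mu) / s))

lor_rate = np.diff(log_odds(F))  # discrete log-odds rate
print(lor_rate)                   # constant, equal to 1/s
```

The same computation on a discretized Weibull or Burr XII CDF would show a monotone, rather than constant, sequence of increments.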

16.
Karlis and Santourian (2009, Model-based clustering with non-elliptically contoured distributions, Stat. Comput. 19:73–83) proposed a model-based clustering algorithm, built on the expectation–maximization (EM) algorithm, to fit mixtures of multivariate normal-inverse Gaussian (NIG) distributions. However, their EM algorithm requires a set of initial values to begin the iterative process, and the number of components has to be given a priori. In this paper, we present a learning-based EM algorithm that aims to overcome these weaknesses. The proposed algorithm is inspired by Yang, Lai, and Lin (2012, A robust EM clustering algorithm for Gaussian mixture models, Pattern Recognit. 45:3950–3961), whose self-clustering process it emulates. Numerical experiments show promising results compared with Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P–ln M and ln P–e spaces, where M is the planetary mass, P is the orbital period, and e is the orbital eccentricity. The identified groups reflect two phenomena: (1) the characteristics of the two clusters in ln P–ln M space might be related to tidal and disc interactions (see Jiang, Ip, and Yeh, 2003, On the fate of close-in extrasolar planets, Astrophys. J. 582:449–454); and (2) there are two clusters in ln P–e space.

17.
‘Middle censoring’ is a very general censoring scheme in which the actual value of an observation becomes unobservable if it falls inside a random interval (L, R); it includes both left and right censoring. In this paper, we consider discrete lifetime data following a geometric distribution subject to middle censoring. Two major innovations, compared with the earlier work of Davarzani and Parsian (2011, Statistical inference for discrete middle-censored data, J. Statist. Plan. Inference 141:1455–1462), are (i) an extension and generalization to the case where covariates are present along with the data, and (ii) an alternative approach and proofs that exploit the simple relationship between the geometric and exponential distributions, bringing the theory in line with the work of Iyer, Jammalamadaka, and Kundu (2008, Analysis of middle censored data with exponential lifetime distributions, J. Statist. Plan. Inference 138:3550–3560). It is also demonstrated that this discretization of lifetimes gives results close to those for the original exponential lifetimes. Maximum likelihood estimation of the parameters is studied for this middle-censoring scheme with covariates, and the large-sample distributions of the estimators are discussed. Simulation results indicate how well the proposed estimation methods work, and an illustrative example using time-to-pregnancy data from Baird and Wilcox (1985, Cigarette smoking associated with delayed conception, J. Am. Med. Assoc. 253:2979–2983) is included.

18.
In this article, a generalized Lévy model is proposed and its parameters are estimated in a high-frequency data setting. An infinitesimal generator of Lévy processes is used to study the asymptotic properties of the drift and volatility estimators. They are asymptotically consistent and independent of the other parameters, improving on those in Chen, Delaigle, and Hall (2010, Nonparametric estimation for a class of Lévy processes, Journal of Econometrics 157:257–271). The proposed estimators also have fast convergence rates and are simple to implement.

19.
Many articles estimating models with forward-looking expectations have reported that the coefficients on the expectations term are very large compared with the effects coming from past dynamics. This has sometimes been regarded as implausible and has led to the view that the expectations coefficient is biased upwards. A relatively general argument is that the bias could be due to structural changes in the means of the variables entering the structural equation; an alternative explanation is that the bias comes from weak instruments. In this article, we investigate the issue of upward bias in the estimated coefficients of the expectations variable using a model in which we can see what causes the breaks and how to control for them. We conclude that weak instruments are the most likely cause of any bias, and note that structural change can affect the quality of the instruments. We also revisit the empirical work of Castle, Doornik, Hendry, and Nymoen (2014, Misspecification testing: non-invariance of expectations models of inflation, Econometric Reviews 33(5–6):553–574) on the new Keynesian Phillips curve (NKPC) in the Euro Area and the U.S., assessing whether the smaller coefficient on expectations that they highlight is due to structural change. Our conclusion is that it is not; instead, it comes from their addition of variables to the NKPC. After allowing for the presence of weak instruments in the estimated re-specified model, the forward coefficient estimate appears to be quite high rather than low.

20.
This article proposes Hartley–Ross type unbiased estimators of the finite population mean using known parameters of an auxiliary variate when the study variate and the auxiliary variate are positively correlated. The variances of the proposed unbiased estimators are obtained. It is shown that, under certain realistic conditions, the proposed estimators are more efficient than the simple mean estimator, the usual ratio estimator, and the estimators of Sisodia and Dwivedi (1981, A modified ratio estimator using coefficient of variation of auxiliary variable, J. Indian Soc. Agricultural Statist. 33(1):13–18), Kadilar and Cingi (2006, A new ratio estimator using correlation coefficient, Interstat 1–11), and Kadilar, Candan, and Cingi (2007, Ratio estimators using robust regression, Hacet. J. Math. Statist. 36(2):181–188). Empirical studies demonstrate the merits of the proposed unbiased estimators over the other estimators considered in this article.
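For context, the Sisodia–Dwivedi (1981) estimator named above shifts the classical ratio estimator by the known coefficient of variation of the auxiliary variable. A minimal sketch on simulated data (the population below is invented for illustration):

```python
import numpy as np

def ratio_estimators(y, x, X_bar, C_x):
    """Classical ratio estimator and the Sisodia-Dwivedi variant, which shifts
    numerator and denominator by the known coefficient of variation C_x."""
    y_bar, x_bar = y.mean(), x.mean()
    classical = y_bar * X_bar / x_bar
    sisodia_dwivedi = y_bar * (X_bar + C_x) / (x_bar + C_x)
    return classical, sisodia_dwivedi

rng = np.random.default_rng(4)
x_pop = rng.gamma(4.0, 5.0, 10000)             # auxiliary variate, fully known
y_pop = 2.0 * x_pop + rng.normal(0, 5, 10000)  # positively correlated study variate
X_bar, C_x = x_pop.mean(), x_pop.std() / x_pop.mean()

idx = rng.choice(10000, size=50, replace=False)  # simple random sample
print(ratio_estimators(y_pop[idx], x_pop[idx], X_bar, C_x))
```

Both estimates target the population mean of y; the Hartley–Ross construction in the article then removes the ratio-type bias exactly, which neither of these two variants does.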
