20 similar documents were retrieved; search took 625 ms
1.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by including a spatial lag term. The estimation method utilizes the Generalized Moments method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
2.
Rameela Chandrasekhar, Communications in Statistics: Theory and Methods, 2014, 43(14): 2951–2957
Adaptive designs find an important application in the estimation of unknown percentiles for an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to simultaneously estimate multiple percentiles of an unknown dose-response curve via generalized Polya urns. In this article, we examine the properties of the design proposed by Mugno et al. (2004) when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss in efficiency when compared to the original compound urn design.
3.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR via a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The introduction of the fudge factor into the test statistic aims at deflating large test statistics caused by small standard errors of gene expression. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and the FDR control of the considered methods.
4.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogeneous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate over the entire sample period. With regard to the most recent two decades, however, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.
5.
Feng-Shou Ko, Communications in Statistics: Theory and Methods, 2013, 42(15): 2681–2698
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for multivariate survival times. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similarly to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects of the longitudinal biomarkers influence the power to detect an association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using gastric cancer data.
6.
To deal with the multicollinearity problem, biased estimators with two biasing parameters have recently attracted much research interest. The aim of this article is to compare one of the latest proposals, given by Yang and Chang (2010), with the Liu-type estimator (Liu 2003) and the k–d class estimator (Sakallioglu and Kaciranlar 2008) under the matrix mean squared error criterion. In addition to the theoretical comparisons, we support the results with extended simulation studies and a real data example, which show the advantages of the proposal of Yang and Chang (2010) over the other proposals as the level of multicollinearity increases.
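As background for the estimators being compared, a minimal numerical sketch of ordinary ridge shrinkage under near-collinearity (this is not the two-parameter Yang–Chang, Liu-type, or k–d estimator itself; the simulated data and the biasing value k = 5 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# two highly collinear regressors
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

b_ols = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 5.0)
# any k > 0 strictly shrinks the coefficient norm relative to OLS
print(np.linalg.norm(b_ridge) < np.linalg.norm(b_ols))  # True
```

The biased estimators studied in the article generalize this single biasing parameter k to two parameters, trading a little bias for a large variance reduction when X'X is near-singular.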
7.
Siti Haslinda Mohd Din, Marek Molas, Jolanda Luime, Emmanuel Lesaffre, Journal of Applied Statistics, 2014, 41(8): 1627–1644
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for BOSs that are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.
8.
Guangyu Mao, Econometric Reviews, 2018, 37(5): 491–506
This article is concerned with sphericity tests for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.
9.
Analysis of covariance (ANCOVA) is the standard procedure for comparing several treatments when the response variable depends on one or more covariates. We consider the problem of testing the equality of treatment effects when the variances are not assumed to be equal. It is well known that the classical F test is not robust to the assumption of equal variances and may lead to misleading conclusions when the variances are unequal. Ananda (1998) developed a generalized F test for testing the equality of treatment effects. However, simulation studies show that the actual size of this test can be much higher than the nominal level when the sample sizes are small, particularly when the number of treatments is large. In this article, we develop a test using the parametric bootstrap (PB) approach of Krishnamoorthy et al. (2007). Our simulations show that the actual size of the proposed test is close to the nominal level, irrespective of the number of treatments and the sample sizes. Our simulations also indicate that the proposed PB test is more robust to the normality assumption than the generalized F test. Therefore, the proposed PB test provides a satisfactory alternative to the generalized F test.
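The parametric bootstrap idea can be sketched for a plain one-way layout with unequal variances (covariates omitted, so this is a simplification rather than the full ANCOVA test of the article; the Welch-type statistic and the simulated groups are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def pb_pvalue(groups, B=2000, rng=rng):
    """Parametric-bootstrap p-value for equality of group means
    under unequal variances (one-way layout; covariates omitted)."""
    ns = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    vars_ = np.array([np.var(g, ddof=1) for g in groups])

    def stat(m, v):
        # Welch-type weighted sum of squared deviations from the
        # variance-weighted grand mean
        wt = ns / v
        grand = np.sum(wt * m) / np.sum(wt)
        return np.sum(wt * (m - grand) ** 2)

    t_obs = stat(means, vars_)
    count = 0
    for _ in range(B):
        # simulate group means and variances under H0 (common mean),
        # plugging in the estimated variances
        m_b = rng.normal(0.0, np.sqrt(vars_ / ns))
        v_b = vars_ * rng.chisquare(ns - 1) / (ns - 1)
        if stat(m_b, v_b) >= t_obs:
            count += 1
    return count / B

groups = [rng.normal(0, s, size=15) for s in (1.0, 2.0, 3.0)]
p = pb_pvalue(groups)
print(p)
```

The PB null distribution is generated from the estimated variances themselves, which is why the size stays close to nominal even for small, unbalanced samples.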
10.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with those of some competing estimators. A simulation study comparing them with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the estimators have comparable finite-sample global as well as local behavior.
11.
Biao Zhang, Econometric Reviews, 2016, 35(2): 201–231
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimation of average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments employing three estimating functions, which can generate estimators for average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimation of the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification.
12.
Yan Fan, Journal of Applied Statistics, 2016, 43(14): 2595–2607
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or drawing on different experiences. The model selection problem is therefore of great importance to the subsequent inference: inference under a misspecified or inappropriate model is risky. Existing model selection tests such as Vuong's tests [26] and Shi's non-degenerate tests [21] suffer from the variance estimation and from departures of the likelihood ratios from normality. To circumvent these dilemmas, we propose in this paper an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.
13.
Haibing Zhao, Communications in Statistics: Theory and Methods, 2014, 43(6): 1179–1191
In this article, we consider investigating whether any of k treatments are better than a control, under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the difference between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatments. Generally, one-sided lower bounds are provided, since they suffice for detecting an effective treatment and are sharper than the lower bounds of a two-sided procedure. A two-sided procedure, however, provides both upper and lower bounds on the differences. In this article, we develop a new procedure that combines the good aspects of both the one-sided and the two-sided procedures. The new procedure has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007) while also providing simultaneous two-sided bounds for the differences between the treatments and the control. Our computational results show that the new procedure is better than the procedure of Hayter, Miwa, and Liu (Hayter et al., 2000) when the sample sizes are balanced. We also illustrate the new procedure with an example.
14.
Aman Ullah, Alan T. K. Wan, Huansha Wang, Xinyu Zhang, Guohua Zou, Econometric Reviews, 2017, 36(1-3): 370–384
In recent years, the suggestion of combining models, as an alternative to selecting a single model, has been advanced from a frequentist perspective in a number of studies. In this article, we propose a new semiparametric estimator of regression coefficients, which takes the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b) but with different biasing factors. We prove that after reparameterization such that the regressors are orthogonal, the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semiparametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretation and application of both semiparametric and ridge estimators. Furthermore, we demonstrate that these estimators based on model averaging weights can have properties superior to the well-known feasible generalized ridge estimator in a large region of the parameter space. Two empirical examples are presented.
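The orthogonal-regressor identity at the heart of this abstract can be checked numerically: when X'X = I, the generalized ridge estimator reduces to componentwise shrinkage of OLS, with weights 1/(1 + k_i) playing the role of model-averaging weights (the data and the biasing factors below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 3
# orthonormalize the regressors via QR so that X'X = I
X, _ = np.linalg.qr(rng.normal(size=(n, p)))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

b_ols = X.T @ y                        # OLS, since X'X = I
k = np.array([0.5, 1.0, 2.0])          # componentwise biasing factors
b_gr = np.linalg.solve(X.T @ X + np.diag(k), X.T @ y)

# with orthogonal regressors, generalized ridge is componentwise
# shrinkage: each coefficient equals w_i * OLS with w_i = 1/(1+k_i)
w = 1.0 / (1.0 + k)
print(np.allclose(b_gr, w * b_ols))    # True
```

Reading the shrinkage weights w_i as model weights is what links the generalized ridge estimator to model averaging in the article.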
15.
In this article, we introduce a new two-parameter estimator by grafting the contraction estimator into the modified ridge estimator proposed by Swindel (1976). This new two-parameter estimator is a general estimator which includes the ordinary least squares, the ridge, the Liu, and the contraction estimators as special cases. Furthermore, by imposing the restrictions Rβ = r on the parameter values, we introduce a new restricted two-parameter estimator which includes the well-known restricted least squares estimator, the restricted ridge estimator proposed by Groß (2003), the restricted contraction estimator, and a new restricted Liu estimator, which we call the modified restricted Liu estimator to distinguish it from the restricted Liu estimator proposed by Kaçıranlar et al. (1999). We also obtain a necessary and sufficient condition for the superiority of the new two-parameter estimator over the ordinary least squares estimator, and the new restricted two-parameter estimator is compared to the new two-parameter estimator by the matrix mean square error criterion. Estimators of the biasing parameters are given, and a simulation study is carried out for the comparison as well as for the determination of the biasing parameters.
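For reference, a minimal sketch of the classical Liu estimator, one of the special cases mentioned above (the simulated data are an illustrative assumption; d = 1 recovers OLS by construction):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=n)])  # collinear pair
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

b_ols = np.linalg.solve(X.T @ X, X.T @ y)

def liu(X, y, d):
    """Liu estimator (X'X + I)^{-1}(X'X + dI) b_OLS; d = 1 gives OLS."""
    p = X.shape[1]
    S = X.T @ X
    b = np.linalg.solve(S, X.T @ y)
    return np.linalg.solve(S + np.eye(p), (S + d * np.eye(p)) @ b)

print(np.allclose(liu(X, y, 1.0), b_ols))  # True: d = 1 recovers OLS
```

The two-parameter estimator of the article combines this kind of d-shrinkage with ridge-style shrinkage, which is why OLS, ridge, Liu, and contraction all fall out as special cases.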
16.
Shaoyong Hu, Communications in Statistics: Theory and Methods, 2014, 43(1): 151–164
In this article, we discuss stochastic comparisons and optimal allocation of policy limits and deductibles. We order the total retained losses of a policyholder in the usual stochastic order under more general conditions on X_i (i = 1, …, n), based on which the optimal allocation of policy limits and deductibles is obtained in some special cases. Several results in Cheung (2007) and Lu and Meng (2011) are generalized here.
17.
Housila P. Singh, Communications in Statistics: Theory and Methods, 2017, 46(2): 521–531
This paper aims to provide an efficient new unbiased estimator of the proportion of a potentially sensitive attribute in survey sampling. The suggested randomization device makes use of the means and variances of the scrambling variables and of two scalars lying between zero and one; thus, the same amount of information is used at the estimation stage. The variance formula of the suggested estimator is obtained. We compare the proposed unbiased estimator with the estimators of Kuk (1990), Franklin (1989), and Singh and Chen (2009), and obtain the conditions under which the proposed estimator is more efficient than each of them. The optimum estimator (OE) in the proposed class of estimators is identified; it depends only on moment ratios of the scrambling variables. Its variance is obtained and compared with those of the Kuk (1990), Franklin (1989), and Singh and Chen (2009) estimators. It is worth noting that the OE of the class of estimators due to Singh and Chen (2009) depends on the parameter π under investigation, which limits its use in practice, whereas the proposed OE is free from such a constraint; this is an advantage over the Singh and Chen (2009) estimator. Numerical illustrations are given in support of the present study for the case when the scrambling variables follow a normal distribution, and both the theoretical and empirical results favor the proposed estimator.
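As a simpler stand-in for the randomization device described here (whose details are not reproduced in the abstract), the classical Warner (1965) randomized-response estimator of a sensitive proportion can be sketched; the sample size, true proportion, and design probability below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, pi_true, p = 10_000, 0.30, 0.7  # sample size, true proportion, design prob.

# Warner device: with probability p the respondent answers the direct
# question "do you have the attribute?", otherwise its complement;
# answers are truthful either way, protecting privacy.
has_attr = rng.random(n) < pi_true
direct = rng.random(n) < p
yes = np.where(direct, has_attr, ~has_attr)

lam = yes.mean()                          # observed "yes" proportion
# E[lam] = (2p - 1) * pi + (1 - p), so inverting gives an unbiased estimator
pi_hat = (lam - (1 - p)) / (2 * p - 1)
print(pi_hat)
```

The devices compared in the paper refine this idea by scrambling the responses with auxiliary random variables, which is where the moment ratios of the scrambling variables enter the optimum estimator.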
18.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedures of Bai (1997) based on the asymptotic distribution under a shrinking shift framework, Elliott and Müller (2007) based on inverting a test locally invariant to the magnitude of break, Eo and Morley (2015) based on inverting a likelihood ratio test, and various bootstrap procedures. On the basis of achieving an exact coverage rate that is closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes with a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and dealing with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
19.
Recently, Koyuncu et al. (2013) proposed an exponential-type estimator to improve the efficiency of the mean estimator based on the randomized response technique. In this article, we propose an improved exponential-type estimator which is more efficient than the Koyuncu et al. (2013) estimator, which in turn was shown to be more efficient than the usual mean estimator, the ratio estimator, the regression estimator, and the Gupta et al. (2012) estimator. Under the simple random sampling without replacement (SRSWOR) scheme, bias and mean square error expressions for the proposed estimator are obtained up to the first order of approximation, and comparisons are made with the Koyuncu et al. (2013) estimator. A simulation study is used to observe the performance of the two estimators. The theoretical findings are also supported by a numerical example with real data. We also show how to extend the proposed estimator to the case where more than one auxiliary variable is available.
20.
Several methods using different approaches have been developed to remedy the consequences of collinearity. To the best of our knowledge, only the raise estimator proposed by García et al. (2010) deals with this problem from a geometric perspective. This article fully develops the raise estimator for a model with two standardized explanatory variables. Inference in the raise estimator is examined, showing that it can be obtained from ordinary least squares methodology. In addition, contrary to what happens in ridge regression, the raise estimator maintains the coefficient of determination value constant. The expression of the variance inflation factor for the raise estimator is also presented. Finally, a comparative study of the raise and ridge estimators is carried out using an example.
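The variance inflation factor for which the article derives a raise-estimator expression has the familiar OLS definition VIF_j = 1 / (1 - R_j^2), which can be sketched directly (the simulated regressors and their correlation level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
# x2 correlated with x1 at roughly 0.9
x2 = 0.9 * x1 + np.sqrt(1 - 0.81) * rng.normal(size=n)

def vif(xj, X_others):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    x_j on the remaining regressors (with intercept)."""
    Z = np.column_stack([np.ones(len(xj)), X_others])
    coef, *_ = np.linalg.lstsq(Z, xj, rcond=None)
    resid = xj - Z @ coef
    r2 = 1.0 - resid.var() / xj.var()
    return 1.0 / (1.0 - r2)

v = vif(x2, x1.reshape(-1, 1))
print(v)  # well above 1, reflecting the collinearity
```

Raising a variable geometrically (moving it away from the span of the others) lowers this R_j^2 and hence the VIF, which is the mechanism the article formalizes.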