Similar Articles
20 similar articles found (search time: 625 ms)
1.
This article suggests random and fixed effects spatial two-stage least squares estimators for the generalized mixed regressive spatial autoregressive panel data model. This extends the generalized spatial panel model of Baltagi et al. (2013) by the inclusion of a spatial lag term. The estimation method utilizes the generalized moments (GM) method suggested by Kapoor et al. (2007) for a spatial autoregressive panel data model. We derive the asymptotic distributions of these estimators and suggest a Hausman test à la Mutl and Pfaffermayr (2011) based on the difference between these estimators. Monte Carlo experiments are performed to investigate the performance of these estimators as well as the corresponding Hausman test.
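For orientation, the model class can be written schematically as follows (illustrative Cliff–Ord-style notation; the article's exact specification of the error components may differ):

```latex
y_t = \lambda W y_t + X_t \beta + u_t, \qquad
u_t = \rho W u_t + \varepsilon_t, \qquad
\varepsilon_{it} = \mu_i + \nu_{it}, \quad t = 1,\dots,T,
```

where W is an N×N spatial weights matrix, the μ_i are individual effects (random or fixed), and the spatial lag term λWy_t is the addition relative to the error-components model of Baltagi et al. (2013).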

2.
Adaptive designs find an important application in the estimation of unknown percentiles for an underlying dose-response curve. A nonparametric adaptive design was suggested by Mugno et al. (2004) to simultaneously estimate multiple percentiles of an unknown dose-response curve via generalized Polya urns. In this article, we examine the properties of the design proposed by Mugno et al. (2004) when delays in observing responses are encountered. Using simulations, we evaluate a modification of the design under varying group sizes. Our results demonstrate unbiased estimation with minimal loss in efficiency when compared to the original compound urn design.
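A toy simulation of urn-based dose allocation with grouped (delayed) responses is sketched below; the allocation and updating rules are placeholders and do not reproduce the Mugno et al. (2004) compound urn design.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_prob(dose):
    """Hypothetical dose-response curve (logistic), for illustration only."""
    return 1.0 / (1.0 + np.exp(-(dose - 2.5)))

doses = np.array([1.0, 2.0, 3.0, 4.0])
urn = np.ones(len(doses))   # start with one ball per dose
pending = []                # responses observed with delay
group_size = 5              # responses become available in groups

for subject in range(200):
    # allocate the next dose with probability proportional to the urn composition
    k = rng.choice(len(doses), p=urn / urn.sum())
    y = rng.random() < true_prob(doses[k])
    pending.append((k, y))

    # update the urn only once a full group of delayed responses has arrived
    if len(pending) == group_size:
        for idx, resp in pending:
            if resp and idx > 0:
                urn[idx - 1] += 1      # a response pushes future allocation downward
            elif not resp and idx < len(doses) - 1:
                urn[idx + 1] += 1      # no response pushes future allocation upward
        pending = []

print("final urn composition:", urn / urn.sum())
```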

3.
The Significance Analysis of Microarrays (SAM; Tusher et al., 2001) method is widely used for analyzing gene expression data while controlling the FDR through a resampling-based procedure in the microarray setting. One of the main components of the SAM procedure is the adjustment of the test statistic. The fudge factor is introduced into the test statistic to deflate statistics that are large only because of small gene-expression standard errors. Lin et al. (2008) pointed out that, in the presence of small-variance genes, the fudge factor does not effectively improve the power or the control of the FDR compared to the SAM procedure without the fudge factor. Motivated by the simulation results presented in Lin et al. (2008), in this article we extend that study to compare several methods for choosing the fudge factor in modified t-type test statistics and use simulation studies to investigate the power and FDR control of the considered methods.
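To make the role of the fudge factor concrete, here is a minimal sketch of a SAM-style modified t statistic for a two-group comparison. The percentile rule for s0 used here is only a placeholder; Tusher et al. (2001) choose s0 by stabilising the variability of the statistic across genes, and the article compares several such choices.

```python
import numpy as np

def sam_statistic(x, y, s0=None):
    """
    SAM-style modified t statistic per gene: d_i = (mean_x - mean_y) / (s_i + s0).
    x, y: arrays of shape (genes, replicates).
    """
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled_var = (x.var(axis=1, ddof=1) * (nx - 1) +
                  y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2)
    s = np.sqrt(pooled_var * (1.0 / nx + 1.0 / ny))
    if s0 is None:
        s0 = np.percentile(s, 5)   # fudge factor guards against tiny standard errors
    return diff / (s + s0)

# toy example: 1000 genes, 4 replicates per group
rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 4))
y = rng.normal(size=(1000, 4))
print(sam_statistic(x, y)[:5])
```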

4.
Noting that many economic variables display occasional shifts in their second-order moments, we investigate the performance of homogenous panel unit root tests in the presence of permanent volatility shifts. It is shown that in this case the test statistic proposed by Herwartz and Siedenburg (2008) is asymptotically standard Gaussian. By means of a simulation study, we illustrate the performance of first- and second-generation panel unit root tests and undertake a more detailed comparison of the test in Herwartz and Siedenburg (2008) and its heteroskedasticity-consistent Cauchy counterpart introduced in Demetrescu and Hanck (2012a). As an empirical illustration, we reassess evidence on the Fisher hypothesis with data from nine countries over the period 1961Q2–2011Q2. The empirical evidence supports panel stationarity of the real interest rate for the full sample period. With regard to the most recent two decades, the test results cast doubt on market integration, since the real interest rate is diagnosed as nonstationary.

5.
A proposed method based on frailty models is used to identify longitudinal biomarkers or surrogates for a multivariate survival outcome. This method is an extension of the earlier models of Wulfsohn and Tsiatis (1997) and Song et al. (2002). In this article, similar to Henderson et al. (2002), a joint likelihood function combines the likelihood functions of the longitudinal biomarkers and the multivariate survival times. We use simulations to explore how the number of individuals, the number of time points per individual, and the functional form of the random effects from the longitudinal biomarkers influence the power to detect the association between a longitudinal biomarker and the multivariate survival time. The proposed method is illustrated using the gastric cancer data.

6.
To deal with the multicollinearity problem, biased estimators with two biasing parameters have recently attracted much research interest. The aim of this article is to compare one of the latest proposals, given by Yang and Chang (2010), with the Liu-type estimator (Liu, 2003) and the k-d class estimator (Sakallioglu and Kaciranlar, 2008) under the matrix mean squared error criterion. In addition to the theoretical comparisons, we support the results with extended simulation studies and a real data example, which show the advantages of the Yang and Chang (2010) proposal over the other proposals as the level of multicollinearity increases.
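For reference, minimal implementations of two single-parameter building blocks, the ordinary ridge estimator and the Liu (1993) estimator, are sketched below; the two-parameter Yang–Chang, Liu-type (2003), and k-d class estimators compared in the article combine such biasing factors, and their exact forms are not reproduced here.

```python
import numpy as np

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ordinary ridge: (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu(X, y, d):
    """Liu (1993) estimator: (X'X + I)^{-1} (X'y + d * b_OLS)."""
    p = X.shape[1]
    b = ols(X, y)
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * b)

# toy example with nearly collinear regressors
rng = np.random.default_rng(2)
z = rng.normal(size=(50, 1))
X = np.hstack([z + 0.01 * rng.normal(size=(50, 1)) for _ in range(3)])
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=50)
print(ols(X, y), ridge(X, y, k=0.5), liu(X, y, d=0.5), sep="\n")
```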

7.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we suggest a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of directly using the BOS as a predictor, we propose to extend the approaches suggested in [16,21,28] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes to predict the current clinical status of rheumatoid arthritis patients by the Disease Activity Score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.

8.
This article is concerned with sphericity tests for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, 2012), which are based on the within residuals, are not helpful under the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.

9.
Analysis of covariance (ANCOVA) is the standard procedure for comparing several treatments when the response variable depends on one or more covariates. We consider the problem of testing the equality of treatment effects when the variances are not assumed to be equal. It is well known that the classical F test is not robust with respect to the assumption of equal variances and may lead to misleading conclusions if the variances are not equal. Ananda (1998) developed a generalized F test for testing the equality of treatment effects. However, simulation studies show that the actual size of this test can be much higher than the nominal level when the sample sizes are small, particularly when the number of treatments is large. In this article, we develop a test using the parametric bootstrap approach of Krishnamoorthy et al. (2007). Our simulations show that the actual size of the proposed test is close to the nominal level, irrespective of the number of treatments and sample sizes. Our simulations also indicate that the proposed PB test is more robust with respect to the assumption of normality than the generalized F test. Therefore, the proposed PB test provides a satisfactory alternative to the generalized F test.
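A rough sketch of the parametric-bootstrap idea for a heteroscedastic one-way comparison is given below (a simplified ANOVA version in the spirit of Krishnamoorthy et al. (2007); the ANCOVA test in the article additionally adjusts for covariates, so this is illustrative only).

```python
import numpy as np

def pb_test(samples, B=5000, seed=0):
    """Parametric-bootstrap p-value for equality of group means under unequal variances."""
    rng = np.random.default_rng(seed)
    n = np.array([len(s) for s in samples])
    m = np.array([np.mean(s) for s in samples])
    v = np.array([np.var(s, ddof=1) for s in samples])

    def stat(means, variances):
        w = n / variances
        grand = np.sum(w * means) / np.sum(w)
        return np.sum(w * (means - grand) ** 2)

    t_obs = stat(m, v)
    exceed = 0
    for _ in range(B):
        mb = rng.normal(0.0, np.sqrt(v / n))        # bootstrap group means under H0
        vb = v * rng.chisquare(n - 1) / (n - 1)     # bootstrap group variances
        if stat(mb, vb) > t_obs:
            exceed += 1
    return exceed / B

# toy usage: three groups with unequal variances
rng = np.random.default_rng(1)
groups = [rng.normal(0, 1, 10), rng.normal(0, 3, 15), rng.normal(0.5, 2, 12)]
print("PB p-value:", pb_test(groups))
```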

10.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some other competing estimators. A simulation study comparing with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the two estimators have comparable finite-sample global as well as local behavior.

11.
This paper discusses the estimation of average treatment effects in observational causal inference. By employing a working propensity score and two working regression models for the treatment and control groups, Robins et al. (1994, 1995) introduced the augmented inverse probability weighting (AIPW) method for estimating average treatment effects, which extends the inverse probability weighting (IPW) method of Horvitz and Thompson (1952); the AIPW estimators are locally efficient and doubly robust. In this paper, we study a hybrid of the empirical likelihood method and the method of moments employing three estimating functions, which can generate estimators of average treatment effects that are locally efficient and doubly robust. The proposed estimators of average treatment effects are efficient for the given choice of three estimating functions when the working propensity score is correctly specified, and thus are more efficient than the AIPW estimators. In addition, we consider a regression method for estimating the average treatment effects when the working regression models for both the treatment and control groups are correctly specified; the asymptotic variance of the resulting estimator is no greater than the semiparametric variance bound characterized by the theory of Robins et al. (1994, 1995). Finally, we present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification.
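For concreteness, the AIPW point estimator of the average treatment effect that the proposed hybrid builds on can be sketched as follows (standard textbook form; the article's empirical-likelihood weighting of the three estimating functions is not shown).

```python
import numpy as np

def aipw_ate(y, t, e_hat, m1_hat, m0_hat):
    """
    Augmented IPW estimate of the average treatment effect.
    y: outcomes, t: binary treatment indicator, e_hat: estimated propensity scores,
    m1_hat / m0_hat: fitted outcome regressions for the treated / control groups.
    """
    mu1 = t * y / e_hat - (t - e_hat) / e_hat * m1_hat
    mu0 = (1 - t) * y / (1 - e_hat) + (t - e_hat) / (1 - e_hat) * m0_hat
    return np.mean(mu1 - mu0)
```

Double robustness means this estimator remains consistent if either the propensity model e_hat or the outcome models m1_hat, m0_hat are correctly specified.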

12.
Competing models arise naturally in many research fields, such as survival analysis and economics, when the same phenomenon of interest is explained by different researchers using different theories or according to different experiences. The model selection problem is therefore important, since inference under a misspecified or inappropriate model is risky. Existing model selection tests, such as Vuong's tests [26] and Shi's non-degenerate test [21], suffer from the variance estimation and from departures from normality of the likelihood ratios. To circumvent these dilemmas, we propose in this paper an empirical likelihood ratio (ELR) test for model selection. Following Shi [21], a bias correction method is proposed for the ELR test to enhance its performance. A simulation study and a real-data analysis are provided to illustrate the performance of the proposed ELR test.
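For reference, the classical Vuong statistic for strictly non-nested models f and g is shown below; the ELR test proposed here replaces the parametric likelihood ratio with an empirical-likelihood counterpart (this display is the standard textbook form, not the article's statistic).

```latex
\mathrm{LR}_n = \sum_{i=1}^{n} \log \frac{f(y_i;\hat\theta)}{g(y_i;\hat\gamma)}, \qquad
T_n = \frac{\mathrm{LR}_n}{\sqrt{n}\,\hat\omega_n}, \qquad
\hat\omega_n^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(\log\frac{f(y_i;\hat\theta)}{g(y_i;\hat\gamma)}\right)^{2}
  - \left(\frac{\mathrm{LR}_n}{n}\right)^{2},
```

with T_n asymptotically standard normal under the null of equal Kullback–Leibler distance to the truth; estimating ω_n and the non-normality of the individual log-ratios are the sources of the difficulties mentioned above.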

13.
In this article, we consider investigating whether any of k treatments are better than a control under the assumption that each treatment mean is no less than the control mean. A classic problem is to find simultaneous confidence bounds for the difference between each treatment and the control. Compared with hypothesis testing, confidence bounds have the attractive advantage of conveying more information about the effective treatments. Generally, one-sided lower bounds are provided, since they suffice for detecting effective treatments and are sharper than the lower bounds from two-sided procedures. However, a two-sided procedure provides both upper and lower bounds on the differences. In this article, we develop a new procedure that combines the good aspects of both the one-sided and the two-sided procedures. The new procedure has the same inferential sensitivity as the one-sided procedure proposed by Zhao (2007) while also providing simultaneous two-sided bounds for the differences between treatments and the control. Our computational results show that the new procedure is better than the procedure of Hayter, Miwa, and Liu (Hayter et al., 2000) when the sample sizes are balanced. We also illustrate the new procedure with an example.

14.
In recent years, the suggestion of combining models, as an alternative to selecting a single model from a frequentist perspective, has been advanced in a number of studies. In this article, we propose a new semiparametric estimator of regression coefficients, which takes the form of the feasible generalized ridge estimator of Hoerl and Kennard (1970b) but with different biasing factors. We prove that, after reparameterization such that the regressors are orthogonal, the generalized ridge estimator is algebraically identical to the model average estimator. Further, the biasing factors that determine the properties of both the generalized ridge and semiparametric estimators are directly linked to the weights used in model averaging. These are interesting results for the interpretation and application of both semiparametric and ridge estimators. Furthermore, we demonstrate that these estimators based on model averaging weights can have properties superior to the well-known feasible generalized ridge estimator in a large region of the parameter space. Two empirical examples are presented.
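The link between generalized ridge and model averaging is easiest to see in canonical form. After rotating the regressors to be orthogonal (a standard textbook representation of the Hoerl–Kennard generalized ridge estimator, not the article's exact derivation), each coefficient is shrunk by a factor that behaves like a model-averaging weight:

```latex
\hat{\alpha}_i(k_i) \;=\; \frac{\lambda_i}{\lambda_i + k_i}\,\hat{\alpha}_i^{\mathrm{OLS}},
\qquad 0 < \frac{\lambda_i}{\lambda_i + k_i} \le 1, \qquad i = 1,\dots,p,
```

where λ_i is the i-th eigenvalue of X'X and k_i ≥ 0 the corresponding biasing factor; choosing the k_i is therefore equivalent to choosing a set of weights between 0 and 1.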

15.
In this article, we introduce a new two-parameter estimator by grafting the contraction estimator onto the modified ridge estimator proposed by Swindel (1976). This new two-parameter estimator is a general estimator that includes the ordinary least squares, ridge, Liu, and contraction estimators as special cases. Furthermore, by imposing the restrictions Rβ = r on the parameter values, we introduce a new restricted two-parameter estimator that includes the well-known restricted least squares estimator, the restricted ridge estimator proposed by Groß (2003), the restricted contraction estimators, and a new restricted Liu estimator, which we call the modified restricted Liu estimator, different from the restricted Liu estimator proposed by Kaçıranlar et al. (1999). We also obtain a necessary and sufficient condition for the superiority of the new two-parameter estimator over the ordinary least squares estimator, and the new restricted two-parameter estimator is compared to the new two-parameter estimator under the matrix mean square error criterion. Estimators of the biasing parameters are given, and a simulation study is carried out for the comparison as well as for the determination of the biasing parameters.

16.
In this article, we discuss stochastic comparisons and optimal allocation of policy limits and deductibles. We order the total retained losses of a policyholder in the usual stochastic order under more general conditions on Xi (i = 1,…, n), based on which the optimal allocation of policy limits and deductibles is obtained in some special cases. Several results in Cheung (2007) and Lu and Meng (2011) are generalized here.

17.
This paper provides an efficient new unbiased estimator for estimating the proportion of a potentially sensitive attribute in survey sampling. The suggested randomization device makes use of the means and variances of the scrambling variables, and of two scalars lying between zero and one; thus, the same amount of information is used at the estimation stage. The variance formula of the suggested estimator is obtained. We compare the proposed unbiased estimator with the estimators of Kuk (1990), Franklin (1989), and Singh and Chen (2009), and we obtain the conditions under which the proposed estimator is more efficient than each of them. The optimum estimator (OE) in the proposed class of estimators is identified; it depends only on moment ratios of the scrambling variables. The variance of the optimum estimator is obtained and compared with those of the Kuk (1990), Franklin (1989), and Singh and Chen (2009) estimators. It is worth noting that the optimum estimator in the class of Singh and Chen (2009) depends on the parameter π under investigation, which limits its use in practice, whereas the proposed OE is free from such a constraint and depends only on the moment ratios of the scrambling variables; this is an advantage over the Singh and Chen (2009) estimator. Numerical illustrations are given in support of the present study for the case where the scrambling variables follow a normal distribution. Theoretical and empirical results strongly favor the proposed approach.

18.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedure of Bai (1997) based on the asymptotic distribution under a shrinking shift framework, that of Elliott and Müller (2007) based on inverting a test locally invariant to the magnitude of the break, that of Eo and Morley (2015) based on inverting a likelihood ratio test, and various bootstrap procedures. On the basis of achieving an exact coverage rate closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes at a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and one deals with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
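The break-date point estimator underlying the compared confidence-interval procedures is the least-squares one; a minimal sketch (single break, illustrative trimming fraction) is given below.

```python
import numpy as np

def ls_break_date(y, X, trim=0.15):
    """Least-squares break date: the split minimising the total sum of squared residuals."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)

    def ssr(yy, XX):
        b = np.linalg.lstsq(XX, yy, rcond=None)[0]
        r = yy - XX @ b
        return r @ r

    ssr_total = [ssr(y[:k], X[:k]) + ssr(y[k:], X[k:]) for k in range(lo, hi)]
    return lo + int(np.argmin(ssr_total))

# toy example: intercept-only model with a break in the mean at observation 120
rng = np.random.default_rng(3)
n = 200
X = np.ones((n, 1))
y = np.r_[rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)]
print("estimated break date:", ls_break_date(y, X))
```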

19.
Recently, Koyuncu et al. (2013) proposed an exponential-type estimator to improve the efficiency of mean estimation based on the randomized response technique. In this article, we propose an improved exponential-type estimator that is more efficient than the Koyuncu et al. (2013) estimator, which in turn was shown to be more efficient than the usual mean estimator, the ratio estimator, the regression estimator, and the Gupta et al. (2012) estimator. Under the simple random sampling without replacement (SRSWOR) scheme, bias and mean square error expressions for the proposed estimator are obtained up to the first order of approximation, and comparisons are made with the Koyuncu et al. (2013) estimator. A simulation study is used to examine the performance of these two estimators. Theoretical findings are also supported by a numerical example with real data. We also show how to extend the proposed estimator to the case when more than one auxiliary variable is available.
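Exponential-type mean estimators in survey sampling are commonly built on the Bahl–Tuteja form shown below, where x is a non-sensitive auxiliary variable with known population mean X̄; this is illustrative only, as the Koyuncu et al. estimator and the proposed improvement modify this form for the randomized-response setting.

```latex
\bar{y}_{\mathrm{exp}} \;=\; \bar{y}\,\exp\!\left(\frac{\bar{X}-\bar{x}}{\bar{X}+\bar{x}}\right),
```

which reduces to the sample mean when x̄ = X̄ and gains efficiency when the study and auxiliary variables are positively correlated.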

20.
Several methods using different approaches have been developed to remedy the consequences of collinearity. To the best of our knowledge, only the raise estimator proposed by García et al. (2010) deals with this problem from a geometric perspective. This article fully develops the raise estimator for a model with two standardized explanatory variables. Inference for the raise estimator is examined, showing that it can be obtained from ordinary least squares methodology. In addition, contrary to what happens in ridge regression, the raise estimator keeps the coefficient of determination constant. The expression of the variance inflation factor for the raise estimator is also presented. Finally, a comparative study of the raise and ridge estimators is carried out using an example.
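As a reminder of the quantity involved, the variance inflation factor for a standardized design can be computed as below (generic definition VIF_j = 1/(1 − R_j²); the article derives the corresponding expression under the raise transformation).

```python
import numpy as np

def vif(X):
    """Variance inflation factors for a standardized (centred and scaled) design matrix X."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        b = np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        resid = X[:, j] - others @ b
        centred = X[:, j] - X[:, j].mean()
        r2 = 1.0 - (resid @ resid) / (centred @ centred)
        out[j] = 1.0 / (1.0 - r2)
    return out
```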
