Similar Documents (20 results)
1.
A variety of statistical approaches have been suggested in the literature for the analysis of bounded outcome scores (BOS). In this paper, we propose a statistical approach for the case where BOSs are repeatedly measured over time and used as predictors in a regression model. Instead of using the BOS directly as a predictor, we extend the approaches suggested in [16 E. Lesaffre, D. Rizopoulos, and R. Tsonaka, The logistic transform for bounded outcome scores, Biostatistics 8 (2007), pp. 72–85; 21 M. Molas and E. Lesaffre, A comparison of three random effects approaches to analyse repeated bounded outcome scores with an application in a stroke revalidation study, Stat. Med. 27 (2008), pp. 6612–6633; 28 R. Tsonaka, D. Rizopoulos, and E. Lesaffre, Power and sample size calculations for discrete bounded outcome scores, Stat. Med. 25 (2006), pp. 4241–4252] to a joint modeling setting. Our approach is illustrated on longitudinal profiles of multiple patient-reported outcomes used to predict the current clinical status of rheumatoid arthritis patients as measured by the disease activity score of 28 joints (DAS28). Both a maximum likelihood and a Bayesian approach are developed.

2.
Adaptive clinical trial designs can often improve drug-study efficiency by utilizing data obtained during the course of the trial. We present a novel Bayesian two-stage adaptive design for Phase II clinical trials with Poisson-distributed outcomes that allows for person-observation-time adjustments as well as early termination due to either futility or efficacy. Our design is motivated by the adaptive trial of [9 V. Sambucini, A Bayesian predictive two-stage design for Phase II clinical trials, Stat. Med. 27 (2008), pp. 1199–1224], which uses binomial data. Although many frequentist and Bayesian two-stage adaptive designs for count data have been proposed in the literature, most do not allow person-time adjustments after the first stage, which limits flexibility in the study design. Our proposed design provides this flexibility by basing the second-stage person-time on the observed first-stage count data. We demonstrate the implementation of our Bayesian predictive adaptive two-stage design using a hypothetical Phase II trial of Immune Globulin (Intravenous).
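The predictive machinery behind such a design can be sketched in a few lines. Assuming a conjugate Gamma(a, b) prior on the Poisson rate, the stage-1 posterior is Gamma(a + y1, rate b + t1), and the predictive probability of eventually declaring efficacy is obtained by simulating stage-2 counts. This is an illustrative sketch only: the prior parameters, thresholds, and function names are assumptions, not the paper's (or Sambucini's) exact formulation.

```python
import math
import random

def poisson_draw(rng, mu):
    """Knuth's multiplicative Poisson sampler (adequate for small mu)."""
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def post_prob_below(y, t, lam0, a=0.5, b=0.5, draws=4000, rng=None):
    """P(lambda < lam0 | y events in t person-time) under a Gamma(a, b) prior:
    the posterior is Gamma(a + y, rate b + t), estimated by Monte Carlo
    (note: random.gammavariate takes shape and *scale*)."""
    rng = rng or random.Random(1)
    shape, rate = a + y, b + t
    hits = sum(rng.gammavariate(shape, 1.0 / rate) < lam0 for _ in range(draws))
    return hits / draws

def predictive_prob_efficacy(y1, t1, t2, lam0, eff_cut=0.95, a=0.5, b=0.5, sims=500):
    """Predictive probability that the trial declares efficacy after stage 2
    (posterior P(lambda < lam0) > eff_cut), given the stage-1 data. The
    second-stage person-time t2 may itself be chosen as a function of y1,
    which is the person-time adjustment the abstract describes."""
    rng = random.Random(2)
    shape, rate = a + y1, b + t1
    succ = 0
    for _ in range(sims):
        lam = rng.gammavariate(shape, 1.0 / rate)   # draw a plausible rate
        y2 = poisson_draw(rng, lam * t2)            # simulate stage-2 counts
        if post_prob_below(y1 + y2, t1 + t2, lam0, a, b, draws=500, rng=rng) > eff_cut:
            succ += 1
    return succ / sims
```

For example, with 2 events in 10 person-years and a target rate of 1.0, the posterior probability that the rate is below target is close to one, while 15 events in the same exposure pushes it well below one half.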

3.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals: the procedure of Bai (1997, Estimation of a change point in multiple regressions, Review of Economics and Statistics 79:551–563), based on the asymptotic distribution under a shrinking-shift framework; that of Elliott and Müller (2007, Confidence sets for the date of a single break in linear time series regressions, Journal of Econometrics 141:1196–1218), based on inverting a test locally invariant to the magnitude of the break; that of Eo and Morley (2015, Likelihood-ratio-based confidence sets for the timing of structural breaks, Quantitative Economics 6:463–497), based on inverting a likelihood ratio test; and various bootstrap procedures. In terms of achieving an exact coverage rate closest to the nominal level, Elliott and Müller's (2007) approach is by far the best, but this comes at a very high cost in the length of the confidence intervals. When the errors are serially correlated and one deals with a change in the intercept or in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.
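To make the object under study concrete, here is a minimal sketch (not any of the compared procedures themselves) of least-squares break-date estimation in a mean-shift model, together with a naive residual-bootstrap percentile interval of the kind the bootstrap procedures above refine; function names and the trimming fraction are illustrative assumptions.

```python
import random
import statistics

def estimate_break(y, trim=0.15):
    """Least-squares break-date estimate in a mean-shift model: choose the
    split point minimizing the total SSR of the two segment means."""
    n = len(y)
    lo, hi = int(n * trim), int(n * (1 - trim))
    def ssr(seg):
        m = statistics.fmean(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(lo, hi), key=lambda k: ssr(y[:k]) + ssr(y[k:]))

def bootstrap_ci(y, level=0.90, reps=500, seed=0):
    """Percentile bootstrap interval for the break date: resample residuals
    around the fitted two-regime means and re-estimate the break each time."""
    rng = random.Random(seed)
    k0 = estimate_break(y)
    m1, m2 = statistics.fmean(y[:k0]), statistics.fmean(y[k0:])
    resid = [v - (m1 if i < k0 else m2) for i, v in enumerate(y)]
    draws = []
    for _ in range(reps):
        e = [rng.choice(resid) for _ in y]
        yb = [(m1 if i < k0 else m2) + e[i] for i in range(len(y))]
        draws.append(estimate_break(yb))
    draws.sort()
    a = (1 - level) / 2
    return draws[int(a * reps)], draws[int((1 - a) * reps) - 1]
```

With a large mean shift relative to the noise, the point estimate pins down the break date almost exactly and the bootstrap interval is tight; as the article discusses, the relative performance of interval methods is far more delicate than this toy setting suggests.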

4.
This paper presents a new variable-weight method, called the singular value decomposition (SVD) approach, for Kohonen competitive learning (KCL) algorithms, based on the concept of Varshavsky et al. [18 R. Varshavsky, A. Gottlieb, M. Linial, and D. Horn, Novel unsupervised feature filtering of biological data, Bioinformatics 22 (2006), pp. 507–513]. Integrating the weighted fuzzy c-means (FCM) algorithm with KCL, we propose a weighted fuzzy KCL (WFKCL) algorithm. The goal of the proposed WFKCL algorithm is to reduce the clustering error rate when the data contain noise variables. Compared with k-means, FCM and KCL with existing variable-weight methods, the proposed WFKCL algorithm with the SVD-based weights provides better clustering performance under the error-rate criterion. Furthermore, the complexity of the proposed SVD approach is lower than that of Pal et al. [17 S.K. Pal, R.K. De, and J. Basak, Unsupervised feature evaluation: a neuro-fuzzy approach, IEEE Trans. Neural Netw. 11 (2000), pp. 366–376], Wang et al. [19 X.Z. Wang, Y.D. Wang, and L.J. Wang, Improving fuzzy c-means clustering based on feature-weight learning, Pattern Recognit. Lett. 25 (2004), pp. 1123–1132] and Hung et al. [9 W.-L. Hung, M.-S. Yang, and D.-H. Chen, Bootstrapping approach to feature-weight selection in fuzzy c-means algorithms with an application in color image segmentation, Pattern Recognit. Lett. 29 (2008), pp. 1317–1325].
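The Varshavsky-style idea behind such weights can be illustrated with a short sketch: rank features by their leave-one-out contribution to the "SVD-entropy" of the data matrix and turn those contributions into non-negative weights. The shift-and-normalize step below is an illustrative choice, not the paper's exact weighting formula.

```python
import numpy as np

def svd_entropy(X):
    """Entropy of the normalized squared singular values of X (assumes X has
    at least two singular values), scaled to lie in [0, 1]."""
    s = np.linalg.svd(X, compute_uv=False)
    v = s ** 2 / np.sum(s ** 2)
    v = v[v > 0]
    return float(-np.sum(v * np.log(v)) / np.log(len(s)))

def feature_weights(X):
    """Leave-one-out contribution of each feature (column) to the SVD-entropy;
    larger contributions get larger clustering weights."""
    E = svd_entropy(X)
    ce = np.array([E - svd_entropy(np.delete(X, j, axis=1))
                   for j in range(X.shape[1])])
    w = ce - ce.min() + 1e-12   # shift so every weight is non-negative
    return w / w.sum()          # normalize to sum to one
```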

5.
A complication in analyzing tumor data is that tumors detected in a screening program tend to be slowly progressive tumors, the so-called left-truncated sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008, Proportional hazards regression for cancer studies, Biometrics 64:141–148) developed estimation procedures for the Cox proportional hazards model. Shen (2011a, Proportional hazards regression for cancer screening data, J. Stat. Comput. Simul. 18:367–377) demonstrated that Ghosh's (2008) approach can be extended to the case where each subject has a specific growth function. In this article, we present a general framework, based on the linear transformation model, for the analysis of data from cancer screening studies. We develop estimation procedures under the linear transformation model, which includes Cox's model as a special case. A simulation study demonstrates the potential usefulness of the proposed estimators.

6.
The two-period crossover design is one of the most commonly used designs in clinical trials, but estimation of the treatment effect is complicated by the possible presence of a carryover effect: ignoring a carryover effect when it exists can lead to poor estimates of the treatment effect. The classical approach of Grizzle (1965, The two-period change-over design and its use in clinical trials, Biometrics 21:467–480; see Grizzle (1974) for corrections) consists of two stages. First, a preliminary test for a carryover effect is conducted. If the carryover effect is significant, the analysis is based only on data from period one; otherwise, it is based on data from both periods. A Bayesian approach with improper priors was proposed by Grieve (1985, A Bayesian analysis of the two-period crossover design for clinical trials, Biometrics 41:979–990), which uses a mixture of two models: a model with a carryover effect and one without. The indeterminacy of the Bayes factor due to the arbitrary constant in the improper prior was addressed by assigning a minimally discriminatory value to the constant. In this article, we present an objective Bayesian estimation approach for the two-period crossover design which is also based on a mixture model, but uses the commonly recommended Zellner–Siow g-prior. We provide simulation studies and a real-data example, and compare the numerical results with the approaches of Grizzle (1965) and Grieve (1985).
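The classical two-stage procedure is simple enough to sketch directly. The version below uses a normal approximation for the preliminary carryover test and the standard crossover contrast of within-subject period differences; the function names, the Welch-type statistic, and the 0.10 significance level are illustrative assumptions, not Grizzle's exact implementation.

```python
import statistics
from math import erf, sqrt

def two_sided_p(t):
    """Two-sided p-value from a normal approximation to the test statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(t) / sqrt(2.0))))

def welch_t(x, y):
    """Two-sample t statistic with unpooled variances."""
    se = sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    return (statistics.fmean(x) - statistics.fmean(y)) / se

def grizzle_estimate(ab, ba, alpha=0.10):
    """Two-stage analysis. ab, ba: lists of (period1, period2) responses for
    sequences A->B and B->A. Stage 1: test for carryover using the subject
    totals. Stage 2: if carryover looks significant, estimate the A-B effect
    from period-1 data only (a parallel-group comparison); otherwise use the
    usual crossover contrast of within-subject differences."""
    carry_p = two_sided_p(welch_t([p1 + p2 for p1, p2 in ab],
                                  [p1 + p2 for p1, p2 in ba]))
    if carry_p < alpha:
        est = (statistics.fmean([p1 for p1, _ in ab])
               - statistics.fmean([p1 for p1, _ in ba]))
    else:
        d_ab = [p1 - p2 for p1, p2 in ab]   # mean: tauA - tauB - pi
        d_ba = [p1 - p2 for p1, p2 in ba]   # mean: tauB - tauA - pi
        est = (statistics.fmean(d_ab) - statistics.fmean(d_ba)) / 2.0
    return est, carry_p
```

Either branch targets the same treatment contrast tauA - tauB; the point of the Bayesian mixture approaches discussed above is to avoid the all-or-nothing switch between the two branches.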

7.
A complication in analyzing tumor data is that tumors detected in a screening program tend to be slowly progressive tumors, the so-called length-biased sampling that is inherent in screening studies. Under the assumption that all subjects have the same tumor growth function, Ghosh (2008, Proportional hazards regression for cancer studies, Biometrics 64:141–148) developed estimation procedures for the proportional hazards model. In this article, by modeling the growth function as a function of covariates, we demonstrate that Ghosh's (2008) approach can be extended to the case where each subject has a specific growth function. A simulation study demonstrates the potential usefulness of the proposed estimators for the regression parameters in the proportional and additive hazards models.

8.
Karlis and Santourian [14 D. Karlis and A. Santourian, Model-based clustering with non-elliptically contoured distributions, Stat. Comput. 19 (2009), pp. 73–83] proposed a model-based clustering algorithm, based on the expectation–maximization (EM) algorithm, to fit mixtures of the multivariate normal-inverse Gaussian (NIG) distribution. However, the EM algorithm for the multivariate NIG mixture requires a set of initial values to begin the iterative process, and the number of components must be given a priori. In this paper, we present a learning-based EM algorithm that aims to overcome these weaknesses of Karlis and Santourian's algorithm [14]. The proposed learning-based EM algorithm was inspired by Yang et al. [24 M.-S. Yang, C.-Y. Lai, and C.-Y. Lin, A robust EM clustering algorithm for Gaussian mixture models, Pattern Recognit. 45 (2012), pp. 3950–3961], whose self-clustering process it emulates. Numerical experiments show promising results compared to Karlis and Santourian's EM algorithm. Moreover, the methodology is applicable to the analysis of extrasolar planets. Our analysis provides an understanding of the clustering results in the ln P–ln M and ln P–e spaces, where M is the planetary mass, P is the orbital period and e is the orbital eccentricity. The identified groups point to two phenomena: (1) the characteristics of the two clusters in ln P–ln M space might be related to tidal and disc interactions (see [9 I.G. Jiang, W.H. Ip, and L.C. Yeh, On the fate of close-in extrasolar planets, Astrophys. J. 582 (2003), pp. 449–454]); and (2) there are two clusters in ln P–e space.

9.
The concept of negative variance components in linear mixed-effects models, while confusing at first sight, has received considerable attention in the literature for well over half a century, following the early work of Chernoff [7 H. Chernoff, On the distribution of the likelihood ratio, Ann. Math. Statist. 25 (1954), pp. 573–578] and Nelder [21 J.A. Nelder, The interpretation of negative components of variance, Biometrika 41 (1954), pp. 544–548]. Broadly, negative variance components in linear mixed models are allowable if inferences are restricted to the implied marginal model. When a hierarchical viewpoint is adopted, in the sense that outcomes are specified conditionally upon random effects, the variance–covariance matrix of the random effects must be positive definite (positive semi-definite is also possible, but raises issues of degenerate distributions). Many contemporary software packages allow for this distinction. Less work has been done for generalized linear mixed models. Here, we study such models for non-negative outcomes (counts), with an extension to allow for overdispersion. Using a study of trichome counts on tomato plants, we illustrate how negative variance components play a natural role in modeling both the correlation between repeated measures on the same experimental unit and over- or underdispersion.
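For the linear case, the marginal-versus-hierarchical distinction can be made concrete with a compound-symmetry example: a random intercept with variance tau induces a marginal covariance sigma2_e*I + tau*J over n repeated measures, which remains a valid covariance matrix for moderately negative tau. The small sketch below (illustrative, not from the paper) checks validity via the known eigenvalues of that matrix.

```python
def cs_marginal_eigs(n, sigma2_e, tau):
    """Eigenvalues of the compound-symmetry marginal covariance
    sigma2_e * I + tau * J for n repeated measures (J = matrix of ones):
    sigma2_e + n*tau once, and sigma2_e with multiplicity n - 1."""
    return [sigma2_e + n * tau] + [sigma2_e] * (n - 1)

def valid_marginally(n, sigma2_e, tau):
    """In the marginal view tau may be negative, as long as every eigenvalue
    stays positive, i.e. tau > -sigma2_e / n (negative within-unit correlation)."""
    return all(ev > 0 for ev in cs_marginal_eigs(n, sigma2_e, tau))

def valid_hierarchically(tau):
    """In the hierarchical view the random-effect variance must be non-negative."""
    return tau >= 0
```

For example, with n = 4 and residual variance 1, tau = -0.2 yields a perfectly valid marginal model even though no hierarchical random-intercept model corresponds to it.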

10.
In this paper, we consider a model for repeated count data with within-subject correlation and/or overdispersion that extends both the generalized linear mixed model and the negative-binomial model. This model, proposed in a likelihood context [17 G. Molenberghs, G. Verbeke, and C.G.B. Demétrio, An extended random-effects approach to modeling repeated, overdispersed count data, Lifetime Data Anal. 13 (2007), pp. 457–511; 18 G. Molenberghs, G. Verbeke, C.G.B. Demétrio, and A. Vieira, A family of generalized linear models for repeated measures with normal and conjugate random effects, Statist. Sci. 25 (2010), pp. 325–347], is placed in a Bayesian inferential framework. An important contribution is Bayesian model assessment based on pivotal quantities, rather than the often less adequate DIC. By means of a real biological data set, we also discuss some Bayesian model selection aspects, using a pivotal quantity proposed by Johnson [12 V.E. Johnson, Bayesian model assessment using pivotal quantities, Bayesian Anal. 2 (2007), pp. 719–734].

11.
This article proposes a new likelihood-based panel cointegration rank test which extends the test of Örsal and Droge (2014, Panel cointegration testing in the presence of a time trend, Computational Statistics and Data Analysis 76:377–390), henceforth the panel SL test, to dependent panels. The dependence is modelled by unobserved common factors which affect the variables in each cross-section through heterogeneous loadings. The data are defactored following the panel analysis of nonstationarity in idiosyncratic and common components (PANIC) approach of Bai and Ng (2004, A PANIC attack on unit roots and cointegration, Econometrica 72(4):1127–1177), and the cointegrating rank of the defactored data is then tested with the panel SL test. A Monte Carlo study demonstrates that the proposed testing procedure has reasonable size and power properties in finite samples.

12.
This article is concerned with a sphericity test for the two-way error components panel data model. It is found that the John statistic and the bias-corrected LM statistic recently developed by Baltagi et al. (2011, Testing for sphericity in a fixed effects panel data model, Econometrics Journal 14:25–47) and Baltagi et al. (2012, A Lagrange multiplier test for cross-sectional dependence in a fixed effects panel data model, Journal of Econometrics 170:164–177), which are based on the within residuals, are not helpful in the present circumstances, even though they are in the one-way fixed effects model. However, we prove that when the within residuals are properly transformed, the resulting residuals can serve to construct useful statistics similar to those of Baltagi et al. (2011, 2012). Simulation results show that the newly proposed statistics perform well under the null hypothesis and several typical alternatives.

13.
Soltani and Mohammadpour (2006, Moving average representations for multivariate stationary processes, J. Time Ser. Anal. 27(6):831–841) observed that, in general, the backward and forward moving average coefficients of multivariate stationary processes, unlike those of univariate processes, are different. This has stimulated research on deriving the forward moving average coefficients in terms of the backward ones. In this article, we develop a practical procedure for the case where the underlying process is a multivariate moving average (or univariate periodically correlated) process of finite order. Our procedure is based on two key observations: order reduction (Li, 2005, Factorization of moving average spectral densities by state space representations and stacking, J. Multivariate Anal. 96:425–438) and first-order analysis (Mohammadpour and Soltani, 2010, Forward moving average representation for multivariate MA(1) processes, Commun. Statist. Theory Meth. 39:729–737).

14.
Ye Li, Econometric Reviews 36(1–3):289–353, 2017
We consider issues related to inference about locally ordered breaks in a system of equations, as originally proposed by Qu and Perron (2007, Estimating and testing structural changes in multivariate regressions, Econometrica 75:459–502). These apply when the break dates in different equations of the system are not separated by a positive fraction of the sample size, which allows constructing joint confidence intervals for all such locally ordered break dates. We extend the results of Qu and Perron (2007) in several directions. First, we allow the covariates to be any mix of trends and stationary or integrated regressors. Second, we allow for breaks in the variance-covariance matrix of the errors. Third, we allow for multiple locally ordered breaks, each occurring in a different equation within a subset of equations of the system. Via simulation experiments, we show first that the limit distributions derived provide good approximations to the finite sample distributions. Second, we show that forming confidence intervals jointly allows more precision (tighter intervals) than the standard approach of forming confidence intervals with the method of Bai and Perron (1998, Estimating and testing linear models with multiple structural changes, Econometrica 66:47–78) applied to a single equation. Simulations also indicate that the locally ordered break confidence intervals yield better coverage rates than the framework for globally distinct breaks when the break dates are separated by roughly 10% of the total sample size.

15.
Distribution-free tests have been proposed in the literature for comparing the hazard rates of two probability distributions when the available samples are complete. In this article, we generalize the test of Kochar (1981, A new distribution-free test for the equality of two failure rates, Biometrika 68:423–426) to the case when the available sample is Type-II censored, and then examine its power properties.

16.
In this paper, a new extension of the generalized Rayleigh distribution is introduced. The proposed model, called the Marshall–Olkin extended generalized Rayleigh distribution, arises from the scheme introduced by Marshall and Olkin (1997, A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families, Biometrika 84:641–652). A comprehensive account of the mathematical properties of the new distribution is provided. We discuss estimation of the model parameters using two estimation methods. Empirical applications of the new model to real data are presented for illustrative purposes.

17.
The Hosmer–Lemeshow test is a widely used method for evaluating the goodness of fit of logistic regression models, but like other chi-square tests its power is strongly influenced by the sample size. Paul, Pennell, and Lemeshow (2013, Standardizing the power of the Hosmer–Lemeshow goodness of fit test in large data sets, Statistics in Medicine 32:67–80) considered using a large number of groups for large data sets to standardize the power, but simulations show that their method performs poorly for some models, and it does not work when the sample size exceeds 25,000. In the present paper, we propose a modified Hosmer–Lemeshow test based on estimating and standardizing the distribution parameter of the Hosmer–Lemeshow statistic. We provide a mathematical derivation of the critical value and power of our test. Simulations show that our method satisfactorily standardizes the power of the Hosmer–Lemeshow test; it is especially recommended for sufficiently large data sets, where the power is rather stable. A bank marketing data set is also analyzed for comparison with existing methods.
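The statistic being modified above is straightforward to compute. A minimal sketch of the classical Hosmer–Lemeshow statistic (the grouping by near-equal-count slices and the small-denominator guard are implementation choices of this sketch, not the paper's):

```python
def hosmer_lemeshow(p, y, g=10):
    """Hosmer-Lemeshow chi-square statistic: sort observations by fitted
    probability, split them into g near-equal groups, and compare observed
    event counts with expected counts. The reference distribution is
    approximately chi-square with g - 2 degrees of freedom."""
    pairs = sorted(zip(p, y))
    n = len(pairs)
    stat = 0.0
    for i in range(g):
        grp = pairs[i * n // g:(i + 1) * n // g]
        ng = len(grp)
        obs = sum(yy for _, yy in grp)    # observed events in the group
        exp = sum(pp for pp, _ in grp)    # expected events in the group
        pbar = exp / ng
        denom = max(ng * pbar * (1.0 - pbar), 1e-12)  # guard degenerate groups
        stat += (obs - exp) ** 2 / denom
    return stat
```

A perfectly calibrated model yields a statistic of zero, while gross miscalibration inflates it; the sample-size sensitivity the abstract discusses comes from the (obs - exp)^2 terms growing with n under any fixed misfit.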

18.
The probability matching prior for linear functions of Poisson parameters is derived. A comparison is made between the confidence intervals obtained by Stamey and Hamilton (2006, A note on confidence intervals for a linear function of Poisson rates, Commun. Statist. Simul. Computat. 35(4):849–856) and the intervals we derive using the Jeffreys and probability matching priors. The intervals obtained from the Jeffreys prior are in some cases fiducial intervals (Krishnamoorthy and Lee, 2010, Inference for functions of parameters in discrete distributions based on fiducial approach: binomial and Poisson cases, J. Statist. Plann. Infer. 140(5):1182–1192). A weighted Monte Carlo method is used for the probability matching prior. The power and size of the Bayesian tests are compared to the tests of Krishnamoorthy and Thomson (2004, A more powerful test for comparing two Poisson means, J. Statist. Plann. Infer. 119(1):23–35). The Jeffreys, probability matching and two other priors are used.
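The Jeffreys-prior interval for a linear function of Poisson rates is easy to simulate. The function below is an illustrative Monte Carlo sketch of that idea only, not the authors' weighted Monte Carlo implementation of the probability matching prior: with y_i events in exposure t_i and the Jeffreys prior proportional to lambda_i^{-1/2}, the posterior of each rate is Gamma(y_i + 1/2, rate t_i).

```python
import random

def poisson_linfun_ci(counts, times, coefs, level=0.95, draws=20000, seed=1):
    """Posterior interval for sum_i c_i * lambda_i when y_i ~ Poisson(lambda_i * t_i):
    sample each rate from its Gamma(y_i + 1/2, rate t_i) posterior, form the
    linear combination, and return percentile bounds. Note that
    random.gammavariate takes shape and *scale* (= 1/rate)."""
    rng = random.Random(seed)
    sims = []
    for _ in range(draws):
        lams = [rng.gammavariate(y + 0.5, 1.0 / t) for y, t in zip(counts, times)]
        sims.append(sum(c * l for c, l in zip(coefs, lams)))
    sims.sort()
    a = (1.0 - level) / 2.0
    return sims[int(a * draws)], sims[int((1.0 - a) * draws) - 1]
```

For a difference of two rates, take coefs = [1, -1]; the resulting interval is centered near the difference of the observed rates.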

19.
This article proposes an asymptotic expansion for the Studentized linear discriminant function with two-step monotone missing samples under multivariate normality. Asymptotic expansions related to the discriminant function have previously been obtained for complete data under multivariate normality; the result of Anderson (1973, An asymptotic expansion of the distribution of the Studentized classification statistic W, The Annals of Statistics 1:964–972) plays an important role in choosing the cut-off point that controls the probabilities of misclassification. This article extends Anderson's (1973) result to the case of two-step monotone missing samples under multivariate normality. Finally, numerical evaluations by Monte Carlo simulation are presented.

20.
This paper reviews and extends the literature on the finite sample behavior of tests for sample selection bias. Monte Carlo results show that, when the "multicollinearity problem" identified by Nawata (1993, A note on the estimation of models with sample-selection biases, Economics Letters 42:15–24) is severe, (i) the t-test based on the Heckman–Greene variance estimator can be unreliable, (ii) the likelihood ratio test remains powerful, and (iii) nonnormality can be interpreted as severe sample selection bias by maximum likelihood methods, leading to negative Wald statistics. We also confirm previous findings (Leung and Yu, 1996, On the choice between sample selection and two-part models, Journal of Econometrics 72:197–229) that the standard regression-based t-test (Heckman, 1979, Sample selection bias as a specification error, Econometrica 47:153–161) and the asymptotically efficient Lagrange multiplier test (Melino, 1982, Testing for sample selection bias, Review of Economic Studies 49:151–153) are robust to nonnormality but have very little power.
